sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
02097516276e2baa3e5a0a0956112851b76f5a40
|
# Dataset Card for "chunk_127"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_127
|
[
"region:us"
] |
2023-05-20T15:51:55+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1182591540, "num_examples": 232245}], "download_size": 1207344898, "dataset_size": 1182591540}}
|
2023-05-20T15:52:35+00:00
|
05eb133111dc953c06a23f863102e3e16367a4da
|
# Dataset Card for "chunk_128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_128
|
[
"region:us"
] |
2023-05-20T16:00:23+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1155104924, "num_examples": 226847}], "download_size": 1179024232, "dataset_size": 1155104924}}
|
2023-05-20T16:01:01+00:00
|
5031b7c9fdf520a7f91d81a16668c7f356945068
|
# Dataset Card for "chunk_129"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_129
|
[
"region:us"
] |
2023-05-20T16:03:03+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1129960628, "num_examples": 221909}], "download_size": 1154208586, "dataset_size": 1129960628}}
|
2023-05-20T16:03:39+00:00
|
4e59f8797933941823d5411a06732c9e452f7f6f
|
EqUaL90/nga_modis_sr_qc250m1
|
[
"region:us"
] |
2023-05-20T16:12:42+00:00
|
{}
|
2023-05-20T16:35:50+00:00
|
|
b56368049077538f745941cc63005e530c4ac124
|
# Dog Emotions Dataset
This is a dataset of images of dogs with happy and sad faces; as simple as that. Use it to train vision classifiers for happy and sad dogs. It comes already split into training and test sets. Moreover, the labels can be inferred from the file structure:
```
dog_emotions_dataset/
└── images/
├── train/
│ ├── happy/
│ │ ├── ed4QZAil6U779pL3ZndRNLvqxF2gMU890.jpg
│ │ ├── r5J1n5FFdTDAokesz72rKJQRJq3Ktn42.jpg
│ │ ├── efuF5XwayrlqgUVIXtDAkDHKJce4xG629.jpg
│ │ ├── rAawLrHoK1Cjvn2Os5jpM6uIZPNLMe114.jpg
│ │ ├── eghaZlxykdiy5GEaNnmZvdoc39QFXf35.jpg
│ │ └── ...
│ └── sad/
└── test/
├── happy/
└── sad/
```
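Since the labels live in the directory names, the `imagefolder` builder from 🤗 `datasets` can pick them up directly. A minimal sketch, assuming the dataset has been extracted locally under `dog_emotions_dataset/`:
```python
# pip install -q datasets
from datasets import load_dataset

# imagefolder infers one class per subdirectory ("happy", "sad")
# and one split per top-level folder ("train", "test").
ds = load_dataset("imagefolder", data_dir="dog_emotions_dataset/images")
print(ds["train"].features["label"].names)  # ['happy', 'sad']
```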
|
Q-b1t/Dogs_Emotions_Dataset
|
[
"license:mit",
"region:us"
] |
2023-05-20T16:25:39+00:00
|
{"license": "mit"}
|
2023-05-21T00:23:22+00:00
|
1b686ceeb7e3cb1c34a1b068cc54be9c5bf26e59
|
# Dataset Card for "chunk_130"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_130
|
[
"region:us"
] |
2023-05-20T16:28:13+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1316821752, "num_examples": 258606}], "download_size": 1343837371, "dataset_size": 1316821752}}
|
2023-05-20T16:29:24+00:00
|
f31d4bbe29c88bb46084ab6239c9b3ccedd7ef0e
|
# Dataset Card for "chunk_132"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_132
|
[
"region:us"
] |
2023-05-20T16:41:48+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1104760320, "num_examples": 216960}], "download_size": 1128059021, "dataset_size": 1104760320}}
|
2023-05-20T16:43:48+00:00
|
cae3fc3ada631ffd0e3b4acdfd782b186100bf92
|
# Dataset Card for "chunk_133"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_133
|
[
"region:us"
] |
2023-05-20T17:08:31+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1276788448, "num_examples": 250744}], "download_size": 1302483404, "dataset_size": 1276788448}}
|
2023-05-20T17:10:49+00:00
|
a243c745cbe15f145412ab0510f2f7cc7d4c34e1
|
# Dataset Card for "cc_sbu_align"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jxu124/cc_sbu_align
|
[
"region:us"
] |
2023-05-20T17:37:49+00:00
|
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "global_image_id", "dtype": "string"}, {"name": "image_path", "dtype": "string"}, {"name": "anns_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1561212, "num_examples": 3439}], "download_size": 721956, "dataset_size": 1561212}}
|
2023-05-20T17:52:57+00:00
|
5397f379089908a9ca033da4c791282c939e6a74
|
# Dataset Card for "TinyStories-GPT4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skeskinen/TinyStories-GPT4
|
[
"region:us"
] |
2023-05-20T17:58:41+00:00
|
{"dataset_info": {"features": [{"name": "story", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "features", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3680196493, "num_examples": 2745100}], "download_size": 1553670972, "dataset_size": 3680196493}}
|
2023-05-20T18:00:22+00:00
|
5cc9dcf4c52959fd84145680c5019510991ac9fd
|
# Dataset Card for "TinyStories-GPT3.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skeskinen/TinyStories-GPT3.5
|
[
"region:us"
] |
2023-05-20T18:00:22+00:00
|
{"dataset_info": {"features": [{"name": "story", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "features", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2837432460, "num_examples": 2222513}], "download_size": 1125071371, "dataset_size": 2837432460}}
|
2023-05-20T18:01:33+00:00
|
12e20b0b6039fbf656e89e2f26597e84c1037847
|
# Dataset Card for "refcocoplus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jxu124/refcocoplus
|
[
"region:us"
] |
2023-05-20T18:00:40+00:00
|
{"dataset_info": {"features": [{"name": "sent_ids", "sequence": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "ann_id", "dtype": "int64"}, {"name": "ref_id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "split", "dtype": "string"}, {"name": "sentences", "list": [{"name": "raw", "dtype": "string"}, {"name": "sent", "dtype": "string"}, {"name": "sent_id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}]}, {"name": "category_id", "dtype": "int64"}, {"name": "raw_anns", "dtype": "string"}, {"name": "raw_image_info", "dtype": "string"}, {"name": "raw_sentences", "dtype": "string"}, {"name": "image_path", "dtype": "string"}, {"name": "bbox", "sequence": "float64"}, {"name": "captions", "sequence": "string"}, {"name": "global_image_id", "dtype": "string"}, {"name": "anns_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81937869, "num_examples": 42278}, {"name": "testB", "num_bytes": 3273927, "num_examples": 1798}, {"name": "test", "num_bytes": 3969265, "num_examples": 1975}, {"name": "validation", "num_bytes": 7399541, "num_examples": 3805}], "download_size": 39772801, "dataset_size": 96580602}}
|
2023-05-20T18:01:22+00:00
|
ad4bd8f41877ce7c6d1ea94f949127574e7682af
|
This dataset contains 43 911 155 paragraphs from 6 458 670 [Wikipedia articles](https://huggingface.co/datasets/wikipedia). The size of each paragraph varies from 20 to 2000 characters. The article title is prepended to the text of each paragraph.
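A minimal loading sketch; streaming avoids downloading the full dump, and the `train` split name is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("olmer/wiki_paragraphs", split="train", streaming=True)
print(next(iter(ds)))  # one paragraph record, with the article title prepended
```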
|
olmer/wiki_paragraphs
|
[
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-05-20T18:15:13+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "pretty_name": "Wikipedia Paragraphs"}
|
2023-05-20T20:38:35+00:00
|
46c4c9368a792d2385c830ac00470706cefa411e
|
Embeddings of the [English Wikipedia](https://huggingface.co/datasets/wikipedia) [paragraphs](https://huggingface.co/datasets/olmer/wiki_paragraphs) using the [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) sentence-transformers encoder.
The dataset contains 43 911 155 paragraphs from 6 458 670 Wikipedia articles.
The size of each paragraph varies from 20 to 2000 characters.
For each paragraph there is an embedding of size 768.
Embeddings are stored in numpy files, 1 000 000 embeddings per file.
For each embedding file, there is an ids file that contains the list of ids of the corresponding paragraphs.
__Be careful: the dataset size is 151 GB__.
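A sketch of reading one embedding/ids shard with numpy; the file names here are hypothetical placeholders for the per-million-row files described above:
```python
import numpy as np

# Hypothetical shard names; each .npy holds up to 1,000,000 embeddings of size 768,
# and the matching ids file lists the paragraph ids for those rows.
embeddings = np.load("embeddings_0.npy")  # shape: (n, 768)
ids = np.load("ids_0.npy")                # shape: (n,)
assert len(embeddings) == len(ids)
print(embeddings.shape, ids[:3])
```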
|
olmer/wiki_mpnet_embeddings
|
[
"task_categories:text-retrieval",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-05-20T18:27:56+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "task_categories": ["text-retrieval"], "pretty_name": "Wikpedia Paragraphs MPNet Embeddings"}
|
2023-05-28T20:16:50+00:00
|
9839c5d7d0508c036d794d6786d10ca175fc23c6
|
# Dataset Card for "shrutilipi_bengali"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ucalyptus/shrutilipi_bengali
|
[
"region:us"
] |
2023-05-20T18:38:56+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcriptions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78086461594.866, "num_examples": 378691}], "download_size": 74356189780, "dataset_size": 78086461594.866}}
|
2023-05-20T20:26:05+00:00
|
b9ce8d3596a3642fb909226750017d75dbc669a5
|
# Dataset Card for "objects365"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jxu124/objects365
|
[
"region:us"
] |
2023-05-20T18:55:12+00:00
|
{"dataset_info": {"features": [{"name": "global_image_id", "dtype": "string"}, {"name": "image_path", "dtype": "string"}, {"name": "anns_id", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "image_info", "struct": [{"name": "file_name", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "license", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "width", "dtype": "int64"}]}, {"name": "anns_info", "list": [{"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64"}, {"name": "category", "dtype": "string"}, {"name": "category_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "isfake", "dtype": "int64"}, {"name": "isreflected", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 3000445884, "num_examples": 1742292}, {"name": "validation", "num_bytes": 145616533, "num_examples": 80000}], "download_size": 1646594676, "dataset_size": 3146062417}}
|
2023-05-20T19:09:43+00:00
|
a1124fe9475380b9bd1293b25ba4f7f2c6b4604f
|
cladius/temp
|
[
"region:us"
] |
2023-05-20T19:41:58+00:00
|
{}
|
2023-05-20T19:44:01+00:00
|
|
ec5cac83e044435adcbd75602c984fe37d4b4f0d
|
# Dataset Card for "chunk_139"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_139
|
[
"region:us"
] |
2023-05-20T19:46:42+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1130164308, "num_examples": 221949}], "download_size": 1151665269, "dataset_size": 1130164308}}
|
2023-05-20T19:47:42+00:00
|
1b2d94be064573642490a556b3d5e2c582bb5688
|
# Dataset Card for "RT_temp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anwarshome/RT_temp
|
[
"region:us"
] |
2023-05-20T19:47:15+00:00
|
{"dataset_info": {"features": [{"name": "uuid", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 58112550.0, "num_examples": 686}, {"name": "test", "num_bytes": 740753.0, "num_examples": 10}], "download_size": 45644402, "dataset_size": 58853303.0}}
|
2023-05-20T19:47:19+00:00
|
c35c780dcc58fd35abd67f747d1ffc1eb37583f3
|
# Dataset Card for "chunk_135"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_135
|
[
"region:us"
] |
2023-05-20T19:48:49+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1184439936, "num_examples": 232608}], "download_size": 1209114440, "dataset_size": 1184439936}}
|
2023-05-20T19:49:52+00:00
|
58af6b062f678fb66abffb003c35a4edf95aed17
|
# Dataset Card for "chunk_137"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_137
|
[
"region:us"
] |
2023-05-20T19:49:03+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1161918020, "num_examples": 228185}], "download_size": 1185801431, "dataset_size": 1161918020}}
|
2023-05-20T19:49:43+00:00
|
47ea65d9b79d621d78d203c8dbdb985196a04194
|
# Dataset Card for "chunk_136"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_136
|
[
"region:us"
] |
2023-05-20T19:51:54+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1161938388, "num_examples": 228189}], "download_size": 1184197216, "dataset_size": 1161938388}}
|
2023-05-20T19:52:57+00:00
|
09c3c166b8927c122ae4f5308e0821841023dfd1
|
FMunyoz/AMB
|
[
"license:cc",
"region:us"
] |
2023-05-20T19:52:09+00:00
|
{"license": "cc"}
|
2023-05-27T12:30:16+00:00
|
|
ab461950da3f0ceec28c779864bb5ce36c7d424f
|
# Dataset Card for "chunk_134"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_134
|
[
"region:us"
] |
2023-05-20T19:53:09+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1211009992, "num_examples": 237826}], "download_size": 1235978510, "dataset_size": 1211009992}}
|
2023-05-20T19:54:01+00:00
|
72ecba35111d6514feace003e486387c6fc903dc
|
Leventk/Veri_kuma
|
[
"license:openrail",
"region:us"
] |
2023-05-20T19:57:27+00:00
|
{"license": "openrail"}
|
2023-05-20T19:58:32+00:00
|
|
1b23f24992091850db81fb51a6d34c990ac0a0fd
|
# Dataset Card for "chunk_121"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_121
|
[
"region:us"
] |
2023-05-20T20:00:45+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1413717420, "num_examples": 277635}], "download_size": 1444019810, "dataset_size": 1413717420}}
|
2023-05-20T20:06:30+00:00
|
2cd617e33a51ffd2848aed6f636f7e191bd14c50
|
# Dataset Card for "DOA_datasetmini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FidelOdok/DOA_datasetmini
|
[
"region:us"
] |
2023-05-20T20:15:37+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "10", "3": "100", "4": "101", "5": "102", "6": "103", "7": "104", "8": "105", "9": "106", "10": "107", "11": "108", "12": "109", "13": "11", "14": "110", "15": "111", "16": "112", "17": "113", "18": "114", "19": "115", "20": "116", "21": "117", "22": "118", "23": "119", "24": "12", "25": "120", "26": "121", "27": "122", "28": "123", "29": "124", "30": "125", "31": "126", "32": "127", "33": "128", "34": "129", "35": "13", "36": "130", "37": "131", "38": "132", "39": "133", "40": "134", "41": "135", "42": "136", "43": "137", "44": "138", "45": "139", "46": "14", "47": "140", "48": "141", "49": "142", "50": "143", "51": "144", "52": "145", "53": "146", "54": "147", "55": "148", "56": "149", "57": "15", "58": "150", "59": "151", "60": "152", "61": "153", "62": "154", "63": "155", "64": "156", "65": "157", "66": "158", "67": "159", "68": "16", "69": "160", "70": "161", "71": "162", "72": "163", "73": "164", "74": "165", "75": "166", "76": "167", "77": "168", "78": "169", "79": "17", "80": "170", "81": "171", "82": "172", "83": "173", "84": "174", "85": "175", "86": "176", "87": "177", "88": "178", "89": "179", "90": "18", "91": "180", "92": "181", "93": "182", "94": "183", "95": "184", "96": "185", "97": "186", "98": "187", "99": "188", "100": "189", "101": "19", "102": "190", "103": "191", "104": "192", "105": "193", "106": "194", "107": "195", "108": "197", "109": "198", "110": "199", "111": "2", "112": "20", "113": "200", "114": "201", "115": "202", "116": "203", "117": "204", "118": "205", "119": "206", "120": "207", "121": "208", "122": "209", "123": "21", "124": "210", "125": "211", "126": "212", "127": "213", "128": "214", "129": "215", "130": "216", "131": "217", "132": "218", "133": "219", "134": "22", "135": "220", "136": "221", "137": "222", "138": "223", "139": "224", "140": "225", "141": "226", "142": "227", "143": "228", "144": "229", "145": "23", "146": "230", "147": "231", "148": "232", "149": "233", "150": "234", "151": "235", "152": "236", "153": "237", "154": "238", "155": "239", "156": "24", "157": "240", "158": "241", "159": "242", "160": "243", "161": "244", "162": "245", "163": "246", "164": "247", "165": "248", "166": "249", "167": "25", "168": "250", "169": "251", "170": "252", "171": "253", "172": "254", "173": "255", "174": "256", "175": "257", "176": "258", "177": "259", "178": "26", "179": "260", "180": "261", "181": "262", "182": "263", "183": "264", "184": "265", "185": "266", "186": "267", "187": "268", "188": "269", "189": "27", "190": "270", "191": "271", "192": "272", "193": "273", "194": "274", "195": "275", "196": "276", "197": "277", "198": "278", "199": "279", "200": "28", "201": "280", "202": "281", "203": "282", "204": "283", "205": "284", "206": "285", "207": "286", "208": "287", "209": "288", "210": "289", "211": "29", "212": "290", "213": "291", "214": "292", "215": "293", "216": "294", "217": "295", "218": "296", "219": "297", "220": "298", "221": "299", "222": "3", "223": "30", "224": "300", "225": "301", "226": "302", "227": "303", "228": "304", "229": "305", "230": "306", "231": "307", "232": "308", "233": "309", "234": "31", "235": "310", "236": "311", "237": "312", "238": "313", "239": "314", "240": "315", "241": "316", "242": "317", "243": "318", "244": "319", "245": "32", "246": "320", "247": "321", "248": "322", "249": "323", "250": "324", "251": "325", "252": "326", "253": "327", "254": "328", 
"255": "329", "256": "33", "257": "330", "258": "331", "259": "332", "260": "333", "261": "334", "262": "335", "263": "336", "264": "337", "265": "338", "266": "339", "267": "34", "268": "340", "269": "341", "270": "342", "271": "343", "272": "344", "273": "345", "274": "346", "275": "347", "276": "348", "277": "349", "278": "35", "279": "350", "280": "351", "281": "352", "282": "353", "283": "354", "284": "355", "285": "356", "286": "357", "287": "358", "288": "359", "289": "36", "290": "360", "291": "361", "292": "362", "293": "363", "294": "364", "295": "365", "296": "366", "297": "367", "298": "368", "299": "369", "300": "37", "301": "370", "302": "371", "303": "372", "304": "373", "305": "374", "306": "375", "307": "376", "308": "377", "309": "378", "310": "379", "311": "38", "312": "380", "313": "381", "314": "382", "315": "383", "316": "384", "317": "385", "318": "386", "319": "387", "320": "388", "321": "389", "322": "39", "323": "390", "324": "391", "325": "392", "326": "393", "327": "394", "328": "395", "329": "396", "330": "397", "331": "398", "332": "399", "333": "4", "334": "40", "335": "400", "336": "401", "337": "402", "338": "403", "339": "404", "340": "405", "341": "406", "342": "407", "343": "408", "344": "409", "345": "41", "346": "410", "347": "411", "348": "412", "349": "413", "350": "414", "351": "415", "352": "416", "353": "417", "354": "418", "355": "419", "356": "42", "357": "420", "358": "421", "359": "422", "360": "423", "361": "424", "362": "425", "363": "426", "364": "427", "365": "428", "366": "43", "367": "44", "368": "45", "369": "46", "370": "47", "371": "48", "372": "49", "373": "5", "374": "50", "375": "51", "376": "52", "377": "53", "378": "54", "379": "55", "380": "56", "381": "57", "382": "58", "383": "59", "384": "6", "385": "60", "386": "61", "387": "62", "388": "63", "389": "64", "390": "65", "391": "66", "392": "67", "393": "68", "394": "69", "395": "7", "396": "70", "397": "71", "398": "72", "399": "73", "400": "74", "401": "75", "402": "76", "403": "77", "404": "78", "405": "79", "406": "8", "407": "80", "408": "81", "409": "82", "410": "83", "411": "84", "412": "85", "413": "86", "414": "87", "415": "88", "416": "89", "417": "9", "418": "90", "419": "91", "420": "92", "421": "93", "422": "94", "423": "95", "424": "96", "425": "97", "426": "98", "427": "99"}}}}], "splits": [{"name": "train", "num_bytes": 1938452772.0610814, "num_examples": 5030}], "download_size": 1936833914, "dataset_size": 1938452772.0610814}}
|
2023-05-20T20:16:59+00:00
|
ba7c06607e252d87ed91f3495c8aa679ec6b6dac
|
# Dataset Card for "many_emotions"
## Dataset Description
- **Homepage:**
### Dataset Summary
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
The data fields are:
- `id`: unique identifier
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `anger` (0), `fear` (1), `joy` (2), `love` (3), `sadness` (4), `surprise` (5), `neutral` (6).
- `license`: inherited license from source dataset
- `dataset`: source dataset
- `language`: text language
### Data Splits
The dataset has 2 configurations:
- raw: with 5 configurations for each language
- split: with train, validation, and test splits
## Dataset Creation
### Curation Rationale
The raw configuration contains duplicates.
In the "split" configuration there may be identical rows but with different labels.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
## Additional Information
### Licensing Information
Each row has its own license which is inherited from the source dataset.
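A loading sketch; it assumes `"split"` (one of the two configurations described above) is the config name as registered on the Hub, and that `label` is the integer `ClassLabel` id listed under Data Fields:
```python
from datasets import load_dataset

ds = load_dataset("ma2za/many_emotions", "split")
row = ds["train"][0]
print(row["text"], row["label"])  # label id: 0=anger, 1=fear, ..., 6=neutral
```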
|
ma2za/many_emotions
|
[
"task_categories:text-classification",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:dair-ai/emotion",
"source_datasets:daily_dialog",
"source_datasets:go_emotions",
"language:en",
"license:apache-2.0",
"emotion",
"region:us"
] |
2023-05-20T20:59:41+00:00
|
{"language": ["en"], "license": "apache-2.0", "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["dair-ai/emotion", "daily_dialog", "go_emotions"], "task_categories": ["text-classification"], "tags": ["emotion"]}
|
2023-06-10T01:18:01+00:00
|
8d7a93c5f41f245801cbb3541329945330975ca0
|
# Dataset Card for "empathetic_dialogues_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ashiyakatuka11/empathetic_dialogues_context
|
[
"region:us"
] |
2023-05-20T21:04:28+00:00
|
{"dataset_info": {"features": [{"name": "emotions", "dtype": "string"}, {"name": "prompts", "dtype": "string"}, {"name": "contexts", "dtype": "string"}, {"name": "utterances", "dtype": "string"}, {"name": "responses", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32358480, "num_examples": 64636}, {"name": "val", "num_bytes": 5110390, "num_examples": 9308}, {"name": "test", "num_bytes": 5113744, "num_examples": 8426}], "download_size": 15399742, "dataset_size": 42582614}}
|
2023-05-23T10:03:56+00:00
|
02db128f29bd27e1d8a8db85af97e7a246313097
|
# Dataset Card for "chunk_148"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_148
|
[
"region:us"
] |
2023-05-20T21:30:17+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 971416116, "num_examples": 190773}], "download_size": 989563451, "dataset_size": 971416116}}
|
2023-05-20T21:30:48+00:00
|
5b47be5167b16feb65587cfa93857254f2c35d8e
|
# Dataset Card for "05e388fb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/05e388fb
|
[
"region:us"
] |
2023-05-20T21:32:54+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1339, "dataset_size": 184}}
|
2023-05-20T21:32:55+00:00
|
eb6ec2bc3bc2505e07be7d824b38b61b3eac3476
|
# Dataset Card for "chunk_141"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_141
|
[
"region:us"
] |
2023-05-20T21:36:48+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1055474852, "num_examples": 207281}], "download_size": 1075083900, "dataset_size": 1055474852}}
|
2023-05-20T21:37:20+00:00
|
8e6eb617149be079cfcc67b451ef5e772c570447
|
# Dataset Card for "chunk_146"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_146
|
[
"region:us"
] |
2023-05-20T21:37:17+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1043916012, "num_examples": 205011}], "download_size": 1063025895, "dataset_size": 1043916012}}
|
2023-05-20T21:37:55+00:00
|
32a5edc1a7a08eb2c136fa7f44705f400b4d1db1
|
# Dataset Card for "chunk_145"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_145
|
[
"region:us"
] |
2023-05-20T21:43:08+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1049389912, "num_examples": 206086}], "download_size": 1071559768, "dataset_size": 1049389912}}
|
2023-05-20T21:43:41+00:00
|
c3cb8835c80586a40208e784940ae731d2c16a6f
|
# Dataset Card for "chunk_147"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_147
|
[
"region:us"
] |
2023-05-20T21:43:38+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1072981148, "num_examples": 210719}], "download_size": 1094154918, "dataset_size": 1072981148}}
|
2023-05-20T21:44:36+00:00
|
5c955d49f5a5780edff4beae6722df35bb1b0a4c
|
# Dataset Card for "chunk_143"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_143
|
[
"region:us"
] |
2023-05-20T21:46:45+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1136895932, "num_examples": 223271}], "download_size": 1160180522, "dataset_size": 1136895932}}
|
2023-05-20T21:47:22+00:00
|
3d5867d451e68341cc55ed1c6614ec3e8469705e
|
# Dataset Card for "chunk_142"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_142
|
[
"region:us"
] |
2023-05-20T21:55:30+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1070348584, "num_examples": 210202}], "download_size": 1092140988, "dataset_size": 1070348584}}
|
2023-05-20T21:57:28+00:00
|
92ed25f264f956ffad8c96444f49291e9804bcbb
|
# Dataset Card for "chunk_138"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_138
|
[
"region:us"
] |
2023-05-20T21:56:45+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1127709964, "num_examples": 221467}], "download_size": 1150782160, "dataset_size": 1127709964}}
|
2023-05-20T21:58:52+00:00
|
362898de69948c2cc73f91c9d33feeddc85092ae
|
# Dataset Card for "chunk_144"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_144
|
[
"region:us"
] |
2023-05-20T21:59:39+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1087966904, "num_examples": 213662}], "download_size": 1110676908, "dataset_size": 1087966904}}
|
2023-05-20T22:01:39+00:00
|
85763f3c9c391c932bc606a1bb3dfe9dfe043310
|
# Dataset Card for "chunk_140"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_140
|
[
"region:us"
] |
2023-05-20T22:09:50+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1138260588, "num_examples": 223539}], "download_size": 1157728180, "dataset_size": 1138260588}}
|
2023-05-20T22:11:55+00:00
|
dbf7331e40380cdb61458c0d7db002227bcbafc2
|
zbrl/d-cube
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-05-20T22:13:06+00:00
|
{"license": "cc-by-4.0"}
|
2023-05-20T22:13:06+00:00
|
|
1d564f5783f794ee630c7986705b18418585ed0d
|
aframson/medic-test
|
[
"license:mit",
"region:us"
] |
2023-05-20T22:35:25+00:00
|
{"license": "mit"}
|
2023-05-20T22:37:40+00:00
|
|
b5e508580085f546dc2942bc8e8c8b52871118a9
|
IlyaGusev/ru_turbo_alpaca_evol_instruct
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:ru",
"license:cc-by-4.0",
"region:us"
] |
2023-05-20T22:43:27+00:00
|
{"language": ["ru"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "iteration", "dtype": "uint32"}], "splits": [{"name": "train", "num_bytes": 105428021, "num_examples": 47793}], "download_size": 27572163, "dataset_size": 105428021}}
|
2023-06-02T10:19:37+00:00
|
|
69f990251cb828b72acfefa19386b63d156c9d51
|
# AutoTrain Dataset for project: cilantroperejil
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cilantroperejil.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<474x410 RGB PIL image>",
"target": 0
},
{
"image": "<474x575 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['cilantro', 'perejil'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 160 |
| valid | 40 |
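A loading sketch matching the fields above (assuming the repo is publicly accessible):
```python
from datasets import load_dataset

ds = load_dataset("JesusPorto/autotrain-data-cilantroperejil")
sample = ds["train"][0]
print(sample["target"])                      # 0 = cilantro, 1 = perejil
print(ds["train"].features["target"].names)  # ['cilantro', 'perejil']
```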
|
JesusPorto/autotrain-data-cilantroperejil
|
[
"task_categories:image-classification",
"region:us"
] |
2023-05-20T23:33:23+00:00
|
{"task_categories": ["image-classification"]}
|
2023-05-21T01:27:58+00:00
|
1c7b19271ef84c21a95d5920ade665d09353f87c
|
# zh-tw-pythia-ta8000-v1-it1-sg-002
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `sharegpt`
* Rows: `train` `8054`, `test` `83`
* Max length: `2048`
* Full config:
```json
{"build_with": ["sharegpt"], "preview_length": 512, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.3}, {"zh": 0.2}, "zh_Hant"], "rows_limit": 10000, "test_size": 0.01, "test_split_seed": 42, "test_rows_limit": 100}}
```
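A quick inspection sketch; the field names (`input_ids`, `attention_mask`, `labels`, `preview`, `length`, `messages_count`) and row counts are taken from this card's metadata:
```python
from datasets import load_dataset

ds = load_dataset("zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-it1-sg-002")
print(len(ds["train"]), len(ds["test"]))  # 8054, 83 per the card
print(ds["train"][0]["preview"][:100])    # human-readable preview of a tokenized row
```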
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-it1-sg-002
|
[
"region:us"
] |
2023-05-21T00:15:18+00:00
|
{"dataset_info": {"dataset_size": 122992811.9138475, "download_size": 35818770, "features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"dtype": "string", "name": "preview"}, {"dtype": "int64", "name": "length"}, {"dtype": "int64", "name": "messages_count"}], "splits": [{"name": "train", "num_bytes": 121640569.98527607, "num_examples": 8054}, {"name": "test", "num_bytes": 1352241.9285714286, "num_examples": 83}]}}
|
2023-05-21T00:19:26+00:00
|
4d95ede1174601354466c791081cd70c014d284c
|
autopromptsgtp/9944a48a8f9f02ae75e5305a06ed3ff4e5d6cc2a
|
[
"license:afl-3.0",
"region:us"
] |
2023-05-21T00:18:31+00:00
|
{"license": "afl-3.0"}
|
2023-05-21T00:18:31+00:00
|
|
1973902015dae52985e24a22c312a0b64aed960a
|
# Dataset Card for "reward-modeling-eval-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
andersonbcdefg/reward-modeling-eval-tokenized
|
[
"region:us"
] |
2023-05-21T01:01:21+00:00
|
{"dataset_info": {"features": [{"name": "preferred_input_ids", "sequence": "int64"}, {"name": "preferred_attention_masks", "sequence": "int64"}, {"name": "dispreferred_input_ids", "sequence": "int64"}, {"name": "dispreferred_attention_masks", "sequence": "int64"}], "splits": [{"name": "validation", "num_bytes": 1764790944, "num_examples": 26922}], "download_size": 28678242, "dataset_size": 1764790944}}
|
2023-05-21T01:02:04+00:00
|
d18dd6f79b4d470dafa0e637e771b4d9dfcbca32
|
# Dataset Card for "prd_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nakcnx/prd_news
|
[
"region:us"
] |
2023-05-21T01:05:06+00:00
|
{"dataset_info": {"features": [{"name": "date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 83165933, "num_examples": 17601}], "download_size": 30244001, "dataset_size": 83165933}}
|
2023-05-21T01:05:23+00:00
|
bd01126007c74c2d8457f3d80135cd942cf1f59b
|
# Dataset Card for "shrutilipi_sanskrit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
surajp/shrutilipi_sanskrit
|
[
"region:us"
] |
2023-05-21T01:09:18+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcriptions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7961235781.888, "num_examples": 14414}], "download_size": 7320639953, "dataset_size": 7961235781.888}}
|
2023-05-21T01:24:37+00:00
|
02a0e6c1c442eda7ec2eaf35a1a6f8a7f1e0c85d
|
Simulated dataset for the project https://github.com/bitDalei/Diabetes-Classification-with-Heterogeneous-Data
**Explanation**
- column 0: label
- column 1-576: FGM data
- column 576-587: Biomarkers data
You might notice that some consecutive rows have the same biomarkers; this means those rows were contributed by the same patient. There are also some missing values in the biomarkers, represented as '0'.
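A column-splitting sketch in pandas; the file name is hypothetical (the card does not state the on-disk format), and the slice boundaries follow the slightly overlapping ranges listed above:
```python
import pandas as pd

# Hypothetical file name; adjust to the actual file in the repo.
df = pd.read_csv("diabetes.csv", header=None)
labels = df.iloc[:, 0]            # column 0: label
fgm = df.iloc[:, 1:577]           # columns 1-576: FGM data
biomarkers = df.iloc[:, 577:588]  # the card says "576-587"; taken here as the
                                  # columns after the FGM block
missing = (biomarkers == 0)       # missing biomarker values are encoded as 0
```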
|
seidouz/Diabetes
|
[
"license:openrail",
"region:us"
] |
2023-05-21T03:36:00+00:00
|
{"license": "openrail"}
|
2023-05-21T03:46:11+00:00
|
2984de31ca07f4c34b776517b3c2e0f985a84567
|
Ardirc/models
|
[
"license:openrail",
"region:us"
] |
2023-05-21T04:45:55+00:00
|
{"license": "openrail"}
|
2023-06-11T08:35:44+00:00
|
|
99c67a35ecdb88ae0c035c7156eab6bc50938bd8
|
Retsadila/MO7041
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-05-21T05:05:57+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-05-24T19:44:37+00:00
|
|
e392d2feb58df0829436dc7238c7592d7f4d5156
|
xiaoqia/PATTERN
|
[
"license:openrail",
"region:us"
] |
2023-05-21T05:10:24+00:00
|
{"license": "openrail"}
|
2023-05-22T03:51:00+00:00
|
|
083078c993cbd0ab33e5fb24210b3abae7e3c716
|
ShiwenNi/instruction_patent_20k_conversations
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-21T05:12:38+00:00
|
{"license": "apache-2.0"}
|
2023-05-22T01:53:01+00:00
|
|
d8ec770a63495d21851c6e384f331fd56bbdc964
|
2nayun/trash1
|
[
"license:openrail",
"region:us"
] |
2023-05-21T05:38:22+00:00
|
{"license": "openrail"}
|
2023-05-21T05:43:25+00:00
|
|
123429a7445e674e7dd66d933ff280ac1b71553c
|
# Dataset Card for xP3x
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡
>
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the file in this repository named `xp3x_create.py`. We provide this version to save processing time.
- **Languages:** 277
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'inputs': '11月、遂にクロームはファイヤーフォックスを引き離し始めた。_はインターネットユーザーの評価が高まったのだ。\nReplace the _ in the above sentence with the correct option: \n- ファイヤーフォックス\n- クローム',
'targets': 'クローム',
'language': 'jpn_Jpan',
'split': 'test',
'template': 'Replace',
'dataset': 'Muennighoff/xwinograd',
'config': 'jp'
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
- `language`: The language code. The codes are an extension of the FLORES-200 codes, where the first part is the language code and the second part the script code.
- `template`: The name of the prompt used.
- `dataset`: The Hugging Face dataset identifier of where the data stems from.
- `config`: The config of the Hugging Face dataset.
### Usage
The dataset is 680 gigabytes and has 530 million samples. You may want to filter it and then deduplicate, depending on your needs.
Loading by language:
```python
# pip install -q datasets
from datasets import load_dataset
ds = load_dataset("Muennighoff/xP3x", "zho_Hans", streaming=True) # Use streaming to not download all at once
for x in ds["train"]:
print(x)
break
```
You can then filter down by the data fields to e.g. only get certain configs or datasets.
As every dataset-config-template is its own jsonl file, you can also decide on the datasets, configs and templates you want and only download them.
For example, to download all Japanese xwinograd samples, you could do:
```python
# pip install -q datasets
from datasets import load_dataset
import multiprocessing
# pip install --upgrade huggingface-hub
from huggingface_hub import HfFileSystem, hf_hub_url
fs = HfFileSystem()
fps = fs.glob(f"datasets/Muennighoff/xP3x/data/jpn_Jpan/*xwinograd*")
resolved_paths = [fs.resolve_path(file) for file in fps]
data_files = [hf_hub_url(resolved_path.repo_id, resolved_path.path_in_repo, repo_type=resolved_path.repo_type) for resolved_path in resolved_paths]
ds = load_dataset("json", data_files=data_files, num_proc=8)["train"]
```
Sometimes it may be faster to clone the entire repo. To download all English files, you could do e.g.
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Muennighoff/xP3x
cd xP3x
git lfs pull --include="xP3x/eng_Latn/*"
```
### Data Splits
|Language|Code|Kilobytes|%|Samples|%|
|--------|------:|------:|-:|---:|-:|
|Emilian|egl_Latn|104|0.0|402|0.0|
|Swiss German|gsw_Latn|104|0.0|408|0.0|
|Novial|nov_Latn|116|0.0|432|0.0|
|Ainu (Latin script)|ain_Latn|120|0.0|410|0.0|
|Chamorro|cha_Latn|120|0.0|452|0.0|
|Gothic|got_Goth|120|0.0|402|0.0|
|Prussian|prg_Latn|120|0.0|424|0.0|
|Picard|pcd_Latn|140|0.0|530|0.0|
|Northern Frisian|frr_Latn|156|0.0|554|0.0|
|Uzbek (Latin script)|uzb_Latn|156|0.0|600|0.0|
|Ottoman Turkish (Latin script)|ota_Latn|188|0.0|632|0.0|
|Swahili (macrolanguage)|swa_Latn|212|0.0|772|0.0|
|Talossan|tzl_Latn|220|0.0|836|0.0|
|Kven Finnish|fkv_Latn|260|0.0|910|0.0|
|Zaza|zza_Latn|260|0.0|1,056|0.0|
|Frisian|fry_Latn|268|0.0|956|0.0|
|Piemontese|pms_Latn|276|0.0|998|0.0|
|Kalmyk|xal_Cyrl|288|0.0|976|0.0|
|Hunsrik|hrx_Latn|352|0.0|1,380|0.0|
|Romany|rom_Latn|364|0.0|1,410|0.0|
|Ancient Greek (to 1453)|grc_Grek|392|0.0|1,226|0.0|
|Tase Naga|nst_Latn|424|0.0|1,608|0.0|
|Albanian|sqi_Latn|596|0.0|2,216|0.0|
|Guadeloupean Creole French|gcf_Latn|608|0.0|2,326|0.0|
|Yakut|sah_Cyrl|608|0.0|1,986|0.0|
|Ho (Latin script)|hoc_Latn|632|0.0|2,634|0.0|
|Khasi|kha_Latn|676|0.0|2,664|0.0|
|Algerian Arabic|arq_Arab|688|0.0|2,278|0.0|
|Lower Sorbian|dsb_Latn|692|0.0|2,596|0.0|
|Chuvash|chv_Cyrl|716|0.0|2,446|0.0|
|Old Russian|orv_Cyrl|752|0.0|2,586|0.0|
|Pampanga|pam_Latn|784|0.0|2,984|0.0|
|Kurdish (Latin script)|kur_Latn|796|0.0|3,050|0.0|
|Ottoman Turkish|ota_Arab|832|0.0|2,772|0.0|
|Kotava|avk_Latn|864|0.0|3,118|0.0|
|Upper Sorbian|hsb_Latn|900|0.0|3,474|0.0|
|Buryat|bua_Cyrl|924|0.0|3,218|0.0|
|Swabian|swg_Latn|996|0.0|3,366|0.0|
|Coastal Kadazan|kzj_Latn|1,136|0.0|3,766|0.0|
|Chavacano|cbk_Latn|1,352|0.0|4,994|0.0|
|Quechua|que_Latn|1,704|0.0|5,312|0.0|
|Lingua Franca Nova (Cyrillic script)|lfn_Cyrl|1,740|0.0|5,458|0.0|
|Gronings|gos_Latn|1,864|0.0|7,462|0.0|
|Volapük|vol_Latn|1,948|0.0|7,712|0.0|
|Yue Chinese (Simplified)|yue_Hans|2,300|0.0|7,872|0.0|
|Mari (Russia)|chm_Cyrl|2,540|0.0|7,496|0.0|
|Kadazan Dusun|dtp_Latn|2,548|0.0|8,892|0.0|
|Breton|bre_Latn|3,048|0.0|11,868|0.0|
|Ladino|lad_Latn|3,224|0.0|11,916|0.0|
|Cornish|cor_Latn|3,492|0.0|13,880|0.0|
|Interlingue|ile_Latn|3,700|0.0|14,468|0.0|
|Wu Chinese|wuu_Hans|3,784|0.0|13,062|0.0|
|Japanese (Katakana)|jpn_Kana|4,208|0.0|13,942|0.0|
|Ido|ido_Latn|6,180|0.0|23,742|0.0|
|Yiddish|yid_Hebr|9,896|0.0|34,412|0.01|
|Klingon|tlh_Latn|11,716|0.0|46,010|0.01|
|Lingua Franca Nova|lfn_Latn|13,328|0.0|46,826|0.01|
|Lojban|jbo_Latn|17,468|0.0|66,694|0.01|
|Low German|nds_Latn|18,364|0.0|68,098|0.01|
|Interlingua (International Auxiliary Language Association)|ina_Latn|25,700|0.0|76,584|0.01|
|Java|java|25,904|0.0|13,551|0.0|
|Japanese (Kanji)|jpn_Hani|26,292|0.0|89,978|0.02|
|Norwegian|nor_Latn|26,724|0.0|93,116|0.02|
|Toki Pona|toki_Latn|26,808|0.0|97,170|0.02|
|Latin|lat_Latn|28,900|0.0|101,390|0.02|
|Serbo-Croatian|hbs_Latn|29,452|0.0|105,748|0.02|
|Nigerian Pidgin|pcm_Latn|145,872|0.02|88,992|0.02|
|Azerbaijani (South or North; Latin script)|aze_Latn|147,564|0.02|77,875|0.01|
|Serbian (Latin script)|srp_Latn|179,072|0.03|131,101|0.02|
|Japanese (Hiragana)|jpn_Hira|188,944|0.03|628,758|0.12|
|Berber (Latin script)|ber_Latn|201,464|0.03|693,602|0.13|
|Jupyter Notebook|jupyter_notebook|416,056|0.06|400,000|0.08|
|Yue Chinese|yue_Hant|613,352|0.09|1,227,429|0.23|
|Haitian Creole|hat_Latn|629,420|0.09|1,228,281|0.23|
|Mossi|mos_Latn|630,416|0.09|1,223,481|0.23|
|Pangasinan|pag_Latn|630,684|0.09|1,223,481|0.23|
|Twi|twi_Latn|631,172|0.09|1,223,481|0.23|
|Bosnian|bos_Latn|633,016|0.09|1,224,479|0.23|
|Ewe|ewe_Latn|633,292|0.09|1,223,481|0.23|
|Bambara|bam_Latn|634,520|0.09|1,223,481|0.23|
|Javanese|jav_Latn|635,248|0.09|1,224,003|0.23|
|Southwestern Dinka|dik_Latn|635,416|0.09|1,223,481|0.23|
|Kabuverdianu|kea_Latn|636,144|0.09|1,223,481|0.23|
|Dyula|dyu_Latn|636,464|0.09|1,223,481|0.23|
|Venetian|vec_Latn|637,412|0.09|1,223,481|0.23|
|Chokwe|cjk_Latn|637,532|0.09|1,223,481|0.23|
|Latgalian|ltg_Latn|637,612|0.09|1,223,481|0.23|
|Sundanese|sun_Latn|638,120|0.09|1,223,481|0.23|
|Asturian|ast_Latn|638,708|0.09|1,223,481|0.23|
|Akan|aka_Latn|639,648|0.09|1,223,481|0.23|
|Mizo|lus_Latn|639,680|0.09|1,223,481|0.23|
|Guarani|grn_Latn|641,540|0.09|1,225,647|0.23|
|Limburgish|lim_Latn|642,368|0.09|1,223,481|0.23|
|Faroese|fao_Latn|642,432|0.09|1,224,067|0.23|
|Buginese|bug_Latn|643,472|0.09|1,223,481|0.23|
|Sango|sag_Latn|643,596|0.09|1,223,481|0.23|
|Luba-Kasai|lua_Latn|643,640|0.09|1,223,481|0.23|
|Papiamento|pap_Latn|643,648|0.09|1,223,481|0.23|
|Silesian|szl_Latn|644,608|0.09|1,223,481|0.23|
|Sicilian|scn_Latn|645,636|0.1|1,223,481|0.23|
|Kimbundu|kmb_Latn|645,964|0.1|1,223,481|0.23|
|Basque|eus_Latn|646,084|0.1|1,246,877|0.23|
|Balinese|ban_Latn|646,408|0.1|1,223,481|0.23|
|Norwegian Nynorsk|nno_Latn|646,996|0.1|1,229,699|0.23|
|Central Aymara|ayr_Latn|647,236|0.1|1,223,481|0.23|
|Tamasheq (Latin script)|taq_Latn|648,656|0.1|1,223,481|0.23|
|Kikongo|kon_Latn|648,992|0.1|1,223,481|0.23|
|Friulian|fur_Latn|649,272|0.1|1,223,481|0.23|
|Ayacucho Quechua|quy_Latn|649,992|0.1|1,223,481|0.23|
|Maori|mri_Latn|650,336|0.1|1,224,211|0.23|
|Icelandic|isl_Latn|650,372|0.1|1,246,623|0.23|
|Galician|glg_Latn|652,088|0.1|1,233,291|0.23|
|Catalan|cat_Latn|652,116|0.1|1,241,381|0.23|
|Lombard|lmo_Latn|652,120|0.1|1,223,481|0.23|
|Banjar (Latin script)|bjn_Latn|652,372|0.1|1,223,481|0.23|
|Fijian|fij_Latn|652,796|0.1|1,223,481|0.23|
|Crimean Tatar|crh_Latn|653,920|0.1|1,223,895|0.23|
|Northern Kurdish|kmr_Latn|654,108|0.1|1,223,481|0.23|
|Ligurian|lij_Latn|654,432|0.1|1,223,481|0.23|
|Occitan|oci_Latn|655,676|0.1|1,227,945|0.23|
|Turkmen|tuk_Latn|658,672|0.1|1,241,205|0.23|
|Luxembourgish|ltz_Latn|658,768|0.1|1,225,339|0.23|
|Cebuano|ceb_Latn|659,124|0.1|1,226,039|0.23|
|Samoan|smo_Latn|659,704|0.1|1,223,481|0.23|
|Sardinian|srd_Latn|660,000|0.1|1,223,481|0.23|
|Bemba|bem_Latn|660,504|0.1|1,223,481|0.23|
|Minangkabau (Latin script)|min_Latn|660,672|0.1|1,223,481|0.23|
|Acehnese (Latin script)|ace_Latn|661,084|0.1|1,223,481|0.23|
|Ilocano|ilo_Latn|661,184|0.1|1,227,663|0.23|
|Irish|gle_Latn|661,660|0.1|1,227,357|0.23|
|Fon|fon_Latn|663,124|0.1|1,223,481|0.23|
|Waray|war_Latn|664,120|0.1|1,226,503|0.23|
|Norwegian Bokmål|nob_Latn|666,240|0.1|1,300,607|0.24|
|Tosk Albanian|als_Latn|666,692|0.1|1,223,481|0.23|
|Standard Malay|zsm_Latn|667,088|0.1|1,270,715|0.24|
|Southern Sotho|sot_Latn|667,728|0.1|1,223,481|0.23|
|Kabyle|kab_Latn|668,128|0.1|1,346,605|0.25|
|Jingpho|kac_Latn|669,464|0.1|1,223,481|0.23|
|Lingala|lin_Latn|670,428|0.1|1,323,481|0.25|
|Wolof|wol_Latn|670,568|0.1|1,373,481|0.26|
|Central Kanuri (Latin script)|knc_Latn|670,800|0.1|1,223,481|0.23|
|Kikuyu|kik_Latn|672,096|0.1|1,223,481|0.23|
|Tok Pisin|tpi_Latn|672,916|0.1|1,223,481|0.23|
|Nuer|nus_Latn|673,632|0.1|1,223,481|0.23|
|Tagalog|tgl_Latn|673,684|0.1|1,247,417|0.23|
|Tumbuka|tum_Latn|676,948|0.1|1,223,481|0.23|
|Plateau Malagasy|plt_Latn|677,852|0.1|1,223,481|0.23|
|Afrikaans|afr_Latn|679,164|0.1|1,337,091|0.25|
|North Azerbaijani|azj_Latn|679,820|0.1|1,223,481|0.23|
|Kabiyè|kbp_Latn|684,880|0.1|1,223,481|0.23|
|Modern Standard Arabic (Romanized)|arb_Latn|685,408|0.1|1,223,481|0.23|
|Scottish Gaelic|gla_Latn|708,620|0.1|1,243,627|0.23|
|Sindhi|snd_Arab|718,680|0.11|1,223,481|0.23|
|North Levantine Arabic|apc_Arab|720,048|0.11|1,223,481|0.23|
|Tunisian Arabic|aeb_Arab|720,360|0.11|1,223,481|0.23|
|South Levantine Arabic|ajp_Arab|720,488|0.11|1,223,481|0.23|
|Dari|prs_Arab|720,500|0.11|1,223,481|0.23|
|Moroccan Arabic|ary_Arab|722,904|0.11|1,223,481|0.23|
|Egyptian Arabic|arz_Arab|723,356|0.11|1,223,481|0.23|
|Najdi Arabic|ars_Arab|725,784|0.11|1,223,481|0.23|
|Acehnese (Arabic script)|ace_Arab|726,272|0.11|1,223,481|0.23|
|Mesopotamian Arabic|acm_Arab|728,472|0.11|1,223,481|0.23|
|Ta’izzi-Adeni Arabic|acq_Arab|734,780|0.11|1,223,481|0.23|
|South Azerbaijani|azb_Arab|735,728|0.11|1,223,481|0.23|
|Central Kanuri (Arabic script)|knc_Arab|746,936|0.11|1,223,481|0.23|
|Rundi|run_Latn|749,792|0.11|1,296,111|0.24|
|Banjar (Arabic script)|bjn_Arab|751,112|0.11|1,223,481|0.23|
|Central Kurdish|ckb_Arab|756,804|0.11|1,223,481|0.23|
|Bashkir|bak_Cyrl|758,816|0.11|1,223,481|0.23|
|Kashmiri (Arabic script)|kas_Arab|759,140|0.11|1,223,481|0.23|
|Tatar|tat_Cyrl|764,212|0.11|1,247,685|0.23|
|Minangkabau (Arabic script)|min_Arab|765,384|0.11|1,223,481|0.23|
|Kazakh|kaz_Cyrl|766,176|0.11|1,232,697|0.23|
|Halh Mongolian|khk_Cyrl|776,384|0.11|1,224,353|0.23|
|Tajik|tgk_Cyrl|780,452|0.11|1,223,481|0.23|
|Eastern Yiddish|ydd_Hebr|781,452|0.12|1,223,481|0.23|
|Uyghur|uig_Arab|785,444|0.12|1,256,999|0.24|
|Armenian|hye_Armn|789,952|0.12|1,228,171|0.23|
|Hebrew|heb_Hebr|793,144|0.12|1,604,365|0.3|
|Belarusian|bel_Cyrl|806,588|0.12|1,261,197|0.24|
|Macedonian|mkd_Cyrl|813,436|0.12|1,384,567|0.26|
|Welsh|cym_Latn|821,036|0.12|1,321,455|0.25|
|Northern Uzbek|uzn_Latn|835,560|0.12|1,273,404|0.24|
|Central Atlas Tamazight|tzm_Tfng|843,508|0.12|1,223,481|0.23|
|Tamasheq (Tifinagh script)|taq_Tfng|848,104|0.12|1,223,481|0.23|
|Magahi|mag_Deva|851,360|0.13|1,223,481|0.23|
|Bhojpuri|bho_Deva|854,848|0.13|1,223,481|0.23|
|Awadhi|awa_Deva|857,096|0.13|1,224,037|0.23|
|Chhattisgarhi|hne_Deva|859,332|0.13|1,223,481|0.23|
|Kyrgyz|kir_Cyrl|860,700|0.13|1,250,163|0.23|
|Maithili|mai_Deva|863,476|0.13|1,223,481|0.23|
|Assamese|asm_Beng|865,904|0.13|1,223,481|0.23|
|Kashmiri (Devanagari script)|kas_Deva|867,232|0.13|1,223,481|0.23|
|Sanskrit|san_Deva|879,236|0.13|1,223,481|0.23|
|Lao|lao_Laoo|888,240|0.13|1,223,481|0.23|
|Odia|ory_Orya|890,508|0.13|1,223,481|0.23|
|Santali|sat_Olck|902,300|0.13|1,223,481|0.23|
|Kannada|kan_Knda|909,260|0.13|1,223,481|0.23|
|Meitei (Bengali script)|mni_Beng|917,984|0.14|1,223,481|0.23|
|Georgian|kat_Geor|928,712|0.14|1,226,729|0.23|
|Kamba|kam_Latn|936,468|0.14|2,136,615|0.4|
|Tigrinya|tir_Ethi|949,608|0.14|1,276,536|0.24|
|Swati|ssw_Latn|950,564|0.14|2,195,002|0.41|
|Malayalam|mal_Mlym|953,984|0.14|1,225,083|0.23|
|Nigerian Fulfulde|fuv_Latn|956,328|0.14|2,126,652|0.4|
|Umbundu|umb_Latn|974,104|0.14|2,264,553|0.43|
|Ganda|lug_Latn|975,780|0.14|2,273,481|0.43|
|Northern Sotho|nso_Latn|978,484|0.14|2,250,971|0.42|
|Khmer|khm_Khmr|984,756|0.14|1,227,825|0.23|
|Luo|luo_Latn|993,068|0.15|2,249,242|0.42|
|Standard Tibetan|bod_Tibt|993,732|0.15|1,223,481|0.23|
|Tswana|tsn_Latn|1,009,328|0.15|2,323,481|0.44|
|Kinyarwanda|kin_Latn|1,010,752|0.15|2,273,481|0.43|
|Sinhala|sin_Sinh|1,012,012|0.15|1,256,582|0.24|
|Xhosa|xho_Latn|1,019,804|0.15|2,323,481|0.44|
|Shona|sna_Latn|1,026,320|0.15|2,273,481|0.43|
|Esperanto|epo_Latn|1,029,444|0.15|2,612,083|0.49|
|Tsonga|tso_Latn|1,031,856|0.15|2,323,481|0.44|
|Dzongkha|dzo_Tibt|1,033,552|0.15|1,223,481|0.23|
|Zulu|zul_Latn|1,039,296|0.15|2,323,481|0.44|
|Serbian|srp_Cyrl|1,040,024|0.15|1,362,598|0.26|
|Nyanja|nya_Latn|1,061,780|0.16|2,323,481|0.44|
|Shan|shn_Mymr|1,074,940|0.16|1,223,481|0.23|
|Igbo|ibo_Latn|1,095,300|0.16|2,282,301|0.43|
|Hausa|hau_Latn|1,112,272|0.16|2,335,738|0.44|
|West Central Oromo|gaz_Latn|1,115,600|0.16|2,343,260|0.44|
|Nepali|npi_Deva|1,144,676|0.17|1,281,430|0.24|
|Yoruba|yor_Latn|1,164,540|0.17|2,334,801|0.44|
|Southern Pashto|pbt_Arab|1,170,840|0.17|1,365,533|0.26|
|Somali|som_Latn|1,198,320|0.18|2,482,437|0.47|
|Burmese|mya_Mymr|1,228,196|0.18|1,279,882|0.24|
|Amharic|amh_Ethi|1,261,128|0.19|1,980,215|0.37|
|Eastern Panjabi|pan_Guru|1,305,636|0.19|1,307,897|0.25|
|Gujarati|guj_Gujr|1,331,780|0.2|1,317,314|0.25|
|Marathi|mar_Deva|1,494,024|0.22|1,443,950|0.27|
|Bengali|ben_Beng|1,650,272|0.24|1,411,514|0.27|
|Chinese (Traditional)|zho_Hant|1,778,736|0.26|1,956,189|0.37|
|Tamil|tam_Taml|1,833,328|0.27|1,394,473|0.26|
|Swahili|swh_Latn|1,970,784|0.29|4,185,608|0.79|
|Telugu|tel_Telu|2,224,480|0.33|1,573,325|0.3|
|Ukrainian|ukr_Cyrl|2,227,616|0.33|2,216,119|0.42|
|Western Persian|pes_Arab|2,389,340|0.35|1,811,121|0.34|
|Turkish|tur_Latn|3,106,600|0.46|4,146,153|0.78|
|Urdu|urd_Arab|3,553,960|0.52|3,513,218|0.66|
|Korean|kor_Hang|4,642,468|0.68|3,415,920|0.64|
|Python|python|4,728,504|0.7|3,142,962|0.59|
|Japanese|jpn_Jpan|5,079,788|0.75|4,193,570|0.79|
|Thai|tha_Thai|6,860,704|1.01|4,666,299|0.88|
|Chinese (Simplified)|zho_Hans|8,063,684|1.19|7,355,509|1.38|
|Vietnamese|vie_Latn|8,398,824|1.24|6,194,925|1.16|
|Indonesian|ind_Latn|9,380,144|1.38|5,301,812|1.0|
|Hindi|hin_Deva|9,914,328|1.46|5,612,176|1.05|
|Croatian|hrv_Latn|10,028,028|1.48|5,583,975|1.05|
|Modern Standard Arabic|arb_Arab|11,051,064|1.63|7,232,551|1.36|
|Romanian|ron_Latn|11,441,636|1.68|5,594,927|1.05|
|Maltese|mlt_Latn|11,614,488|1.71|5,513,885|1.04|
|Slovenian|slv_Latn|12,014,912|1.77|5,533,689|1.04|
|Estonian|est_Latn|12,126,212|1.79|5,584,057|1.05|
|Lithuanian|lit_Latn|12,253,976|1.8|5,603,047|1.05|
|Slovak|slk_Latn|12,286,300|1.81|5,513,481|1.04|
|Standard Latvian|lvs_Latn|12,298,584|1.81|5,517,287|1.04|
|Polish|pol_Latn|12,409,684|1.83|5,868,631|1.1|
|Hungarian|hun_Latn|12,607,420|1.86|6,086,621|1.14|
|Russian|rus_Cyrl|13,110,908|1.93|8,798,927|1.65|
|Czech|ces_Latn|14,316,052|2.11|6,418,462|1.21|
|Bulgarian|bul_Cyrl|14,615,468|2.15|7,265,885|1.37|
|Swedish|swe_Latn|14,646,656|2.16|5,634,363|1.06|
|Finnish|fin_Latn|15,011,464|2.21|6,077,501|1.14|
|Danish|dan_Latn|16,136,612|2.38|5,831,109|1.1|
|Dutch|nld_Latn|22,387,020|3.3|8,992,864|1.69|
|Greek|ell_Grek|23,144,296|3.41|7,224,001|1.36|
|Italian|ita_Latn|23,952,824|3.53|9,967,738|1.87|
|Portuguese|por_Latn|27,297,252|4.02|11,242,808|2.11|
|German|deu_Latn|27,909,808|4.11|15,806,969|2.97|
|French|fra_Latn|28,428,608|4.18|16,365,984|3.08|
|Spanish|spa_Latn|30,969,580|4.56|16,315,928|3.07|
|English|eng_Latn|69,530,384|10.24|53,015,690|9.96|
|Total|-|679,318,704|100|532,107,156|100|
#### Language specifics
- `Japanese`: Data in `jpn_Hira`, `jpn_Kana`, `jpn_Hani` is guaranteed to contain Hiragana, Katakana or Kanji, respectively, in each sample. However, samples may still include other scripts: while every sample in `jpn_Kana` is guaranteed to contain Katakana, it may also contain Hiragana or Kanji.
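As a quick sanity check, here is a minimal sketch (plain Python; the helper names are ours, not part of the dataset tooling) for verifying which Japanese scripts a sample contains:
```python
def contains_katakana(text: str) -> bool:
    # Katakana block: U+30A0..U+30FF; halfwidth forms: U+FF66..U+FF9D
    return any("\u30a0" <= ch <= "\u30ff" or "\uff66" <= ch <= "\uff9d" for ch in text)

def contains_hiragana(text: str) -> bool:
    # Hiragana block: U+3040..U+309F
    return any("\u3040" <= ch <= "\u309f" for ch in text)

# Every sample in jpn_Kana should pass the first check; it may pass the second too.
sample = "カタカナ と ひらがな"
assert contains_katakana(sample) and contains_hiragana(sample)
```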
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- [MultiEURLEX](https://huggingface.co/datasets/multi_eurlex)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Dataset specifics
- Flores-200: There are three prompts for Flores: `continuation`, `question`, `command`, which represent three commonly used prompting styles, i.e. making a prompt seem like a natural continuation, turning it into a question or commanding the model to do something.
- tatoeba_mt: Contains duplicates. For example, it has data that is both classified as `jpn_Kana` and `jpn_Jpan`, so you may want to deduplicate.
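A minimal deduplication sketch, assuming examples have already been loaded as dictionaries with `inputs` and `targets` text fields (these field names are an assumption based on common xP3 conventions; adapt to the actual schema):
```python
def deduplicate(examples):
    """Drop exact duplicates keyed on the (inputs, targets) text pair."""
    seen, unique = set(), []
    for ex in examples:
        key = (ex["inputs"], ex["targets"])
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique

examples = [
    {"inputs": "Translate to Japanese: Hello", "targets": "こんにちは"},  # e.g. from jpn_Jpan
    {"inputs": "Translate to Japanese: Hello", "targets": "こんにちは"},  # same pair via jpn_Kana
]
print(len(deduplicate(examples)))  # -> 1
```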
## Additional Information
### Licensing Information
The dataset collection is released under Apache 2.0. Note that individual datasets may have different licenses.
### Citation Information
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
Thanks to the Aya team @[C4AI](https://cohere.for.ai/) 🧡
|
CohereForAI/xP3x
|
[
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ch",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:ku",
"language:kw",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nb",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:pl",
"language:pt",
"language:qu",
"language:rn",
"language:ro",
"language:ru",
"language:sh",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vo",
"language:yi",
"language:zh",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:bem",
"language:bho",
"language:bjn",
"language:bo",
"language:bug",
"language:ceb",
"language:cjk",
"language:ckb",
"language:crh",
"language:dik",
"language:dyu",
"language:dz",
"language:ee",
"language:fj",
"language:fon",
"language:fur",
"language:fuv",
"language:gaz",
"language:gu",
"language:ht",
"language:ha",
"language:hne",
"language:ig",
"language:ilo",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:kbp",
"language:kea",
"language:khk",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lmo",
"language:ltg",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:mar",
"language:min",
"language:mni",
"language:mos",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:prs",
"language:quy",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:sc",
"language:ss",
"language:su",
"language:swh",
"language:szl",
"language:taq",
"language:tg",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:tzm",
"language:umb",
"language:uzn",
"language:vec",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zsm",
"language:zu",
"license:apache-2.0",
"arxiv:2211.01786",
"region:us"
] |
2023-05-21T05:38:52+00:00
|
{"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["af", "ar", "az", "be", "bg", "bn", "br", "bs", "ca", "ch", "cs", "cv", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "fy", "ga", "gd", "gl", "gn", "he", "hi", "hr", "hu", "hy", "ia", "id", "ie", "io", "is", "it", "ja", "jv", "ka", "kk", "km", "ko", "ku", "kw", "la", "lb", "lt", "lv", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nb", "nl", "nn", "no", "oc", "pl", "pt", "qu", "rn", "ro", "ru", "sh", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "vo", "yi", "zh", "ace", "acm", "acq", "aeb", "af", "ajp", "ak", "als", "am", "apc", "ar", "ars", "ary", "arz", "as", "ast", "awa", "ayr", "azb", "azj", "ba", "bm", "ban", "be", "bem", "bn", "bho", "bjn", "bo", "bs", "bug", "bg", "ca", "ceb", "cs", "cjk", "ckb", "crh", "cy", "da", "de", "dik", "dyu", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fj", "fi", "fon", "fr", "fur", "fuv", "gaz", "gd", "ga", "gl", "gn", "gu", "ht", "ha", "he", "hi", "hne", "hr", "hu", "hy", "ig", "ilo", "id", "is", "it", "jv", "ja", "kab", "kac", "kam", "kn", "ks", "ka", "kk", "kbp", "kea", "khk", "km", "ki", "rw", "ky", "kmb", "kmr", "knc", "kg", "ko", "lo", "lij", "li", "ln", "lt", "lmo", "ltg", "lb", "lua", "lg", "luo", "lus", "lvs", "mag", "mai", "ml", "mar", "min", "mk", "mt", "mni", "mos", "mi", "my", "nl", "nn", "nb", "npi", "nso", "nus", "ny", "oc", "ory", "pag", "pa", "pap", "pbt", "pes", "plt", "pl", "pt", "prs", "quy", "ro", "rn", "ru", "sg", "sa", "sat", "scn", "shn", "si", "sk", "sl", "sm", "sn", "sd", "so", "st", "es", "sc", "sr", "ss", "su", "sv", "swh", "szl", "ta", "taq", "tt", "te", "tg", "tl", "th", "ti", "tpi", "tn", "ts", "tk", "tum", "tr", "tw", "tzm", "ug", "uk", "umb", "ur", "uzn", "vec", "vi", "war", "wo", "xh", "ydd", "yo", "yue", "zh", "zsm", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3x", "programming_language": ["Java", "Python", "Jupyter-Notebook"]}
|
2023-11-09T04:45:25+00:00
|
ea2b49b97395e45762bfdb32a521c13b1801aeb5
|
# Dataset Card for "realnewslike_with_title"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rossil/realnewslike_with_title
|
[
"language:en",
"region:us"
] |
2023-05-21T05:56:01+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[s]"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38733473854, "num_examples": 13813701}], "download_size": 24654646282, "dataset_size": 38733473854}}
|
2023-07-13T10:02:58+00:00
|
87589d12857e598ca8ad672e855c3d669146b417
|
# Dataset Card for "aims_segm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aleh/aims_segm
|
[
"region:us"
] |
2023-05-21T06:13:04+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1596319469.0, "num_examples": 25}], "download_size": 434727171, "dataset_size": 1596319469.0}}
|
2023-05-21T06:17:12+00:00
|
8b63e367d78c1cae56eb07bf7a5176dd881209d2
|
# Dataset Card for "chunk_149"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_149
|
[
"region:us"
] |
2023-05-21T06:23:21+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 932859492, "num_examples": 183201}], "download_size": 950256177, "dataset_size": 932859492}}
|
2023-05-21T06:23:51+00:00
|
33cf8a7363eeaec3724452794898fd9ebd5bb765
|
# Dataset Card for "chunk_175"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_175
|
[
"region:us"
] |
2023-05-21T06:30:57+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 971721636, "num_examples": 190833}], "download_size": 991426532, "dataset_size": 971721636}}
|
2023-05-21T06:31:26+00:00
|
585060e08f5e85a145edf871958c2ac59cb4dd6c
|
# Dataset Card for "chunk_150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_150
|
[
"region:us"
] |
2023-05-21T06:33:21+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 846611196, "num_examples": 166263}], "download_size": 862305045, "dataset_size": 846611196}}
|
2023-05-21T06:34:57+00:00
|
e18c90e3b71ebc70a4895fb490f96e5765e6de5b
|
# Dataset Card for "chunk_172"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_172
|
[
"region:us"
] |
2023-05-21T06:37:40+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1064044688, "num_examples": 208964}], "download_size": 1074878084, "dataset_size": 1064044688}}
|
2023-05-21T06:38:17+00:00
|
3c7efd3d122ed178f92b37e1a9e0ebc6cca8205f
|
# Dataset Card for "chunk_174"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_174
|
[
"region:us"
] |
2023-05-21T06:41:08+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1116807992, "num_examples": 219326}], "download_size": 1140160965, "dataset_size": 1116807992}}
|
2023-05-21T06:41:42+00:00
|
0c8210b0bca1a155fee439316aa225f0eb368b62
|
# Dataset Card for "chunk_171"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_171
|
[
"region:us"
] |
2023-05-21T06:45:22+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1185061160, "num_examples": 232730}], "download_size": 1199459094, "dataset_size": 1185061160}}
|
2023-05-21T06:46:18+00:00
|
cddd4d00314ee85468a616a3b798c4fa87fbdf2b
|
# Dataset Card for "chunk_160"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_160
|
[
"region:us"
] |
2023-05-21T06:45:30+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1179566892, "num_examples": 231651}], "download_size": 1203138651, "dataset_size": 1179566892}}
|
2023-05-21T06:46:09+00:00
|
c237d1bc9d43ae07c936a6ba87b52b449c897e60
|
# Dataset Card for "chunk_170"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_170
|
[
"region:us"
] |
2023-05-21T06:45:48+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1237712440, "num_examples": 243070}], "download_size": 1261211885, "dataset_size": 1237712440}}
|
2023-05-21T06:46:28+00:00
|
bc9e978d8db43247e2b30fc7adab22cf25899452
|
# Dataset Card for "chunk_173"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_173
|
[
"region:us"
] |
2023-05-21T06:52:49+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1018527300, "num_examples": 200025}], "download_size": 1038188159, "dataset_size": 1018527300}}
|
2023-05-21T06:54:39+00:00
|
5fb89dd3ebc4c57d4c1f3811b75eba65752e9cb6
|
# Dataset Card for "chunk_176"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_176
|
[
"region:us"
] |
2023-05-21T06:56:34+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1272898160, "num_examples": 249980}], "download_size": 1299136381, "dataset_size": 1272898160}}
|
2023-05-21T06:57:18+00:00
|
18b8313f5edb0d5e7397f786ac33c69b313203e4
|
# Dataset Card for "aims_segm_crop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aleh/aims_segm_crop
|
[
"region:us"
] |
2023-05-21T07:20:44+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 630264241.0, "num_examples": 25}], "download_size": 142370545, "dataset_size": 630264241.0}}
|
2023-05-21T07:21:30+00:00
|
38544162590fba81cd76f36a8828b43b61e947c4
|
# Dataset Card for "imda_dataset_clean"
HAS TWO EXTRA EXAMPLES CONTAINING '.' NEED TO FILTER
num_examples: 1408808
|
averageandyyy/imda_dataset_clean
|
[
"region:us"
] |
2023-05-21T07:33:21+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 215809231255.29318, "num_examples": 1408808}], "download_size": 210065803478, "dataset_size": 215809231255.29318}}
|
2023-05-25T11:08:03+00:00
|
72d69d9280ce8bbc1fafa9334ef59420fbed1b40
|
GitMylo/bark-semantic-training
|
[
"license:mit",
"region:us"
] |
2023-05-21T08:13:24+00:00
|
{"license": "mit"}
|
2023-05-21T08:19:58+00:00
|
|
40c587eb3e391635341529a6432e6a8afc33e83e
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is currently for private sharing only.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Ryan1122/reality_qa_290k
|
[
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:zh",
"license:cc-by-nc-4.0",
"QA",
"CN",
"self-instruct",
"region:us"
] |
2023-05-21T08:23:01+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "tags": ["QA", "CN", "self-instruct"]}
|
2023-05-21T08:35:58+00:00
|
52c612130c1e9d7faf24e8046a64459ced38ad28
|
# Dataset Card for "chunk_157"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_157
|
[
"region:us"
] |
2023-05-21T08:36:13+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 901803384, "num_examples": 177102}], "download_size": 918316765, "dataset_size": 901803384}}
|
2023-05-21T08:37:05+00:00
|
6813046c362305ccb4dcc97dd9a7153bb531b487
|
# Dataset Card for "chunk_152"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_152
|
[
"region:us"
] |
2023-05-21T08:39:35+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 895321268, "num_examples": 175829}], "download_size": 911830043, "dataset_size": 895321268}}
|
2023-05-21T08:40:06+00:00
|
f4731fbf806956dbf65b2363523b7215cfcf193e
|
<p align="center">
<img src="fairseq_logo.png" width="150">
<br />
<br />
<a href="https://github.com/pytorch/fairseq/blob/master/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
<a href="https://github.com/pytorch/fairseq/releases"><img alt="Latest Release" src="https://img.shields.io/github/release/pytorch/fairseq.svg" /></a>
<a href="https://github.com/pytorch/fairseq/actions?query=workflow:build"><img alt="Build Status" src="https://github.com/pytorch/fairseq/workflows/build/badge.svg" /></a>
<a href="https://fairseq.readthedocs.io/en/latest/?badge=latest"><img alt="Documentation Status" src="https://readthedocs.org/projects/fairseq/badge/?version=latest" /></a>
</p>
--------------------------------------------------------------------------------
Fairseq(-py) is a sequence modeling toolkit that allows researchers and
developers to train custom models for translation, summarization, language
modeling and other text generation tasks.
### What's New:
- April 2020: [Initial model parallel support and 11B parameters unidirectional LM released](examples/megatron_11b/README.md)
- March 2020: [Byte-level BPE code released](examples/byte_level_bpe/README.md)
- February 2020: [mBART model and code released](examples/mbart/README.md)
- February 2020: [Added tutorial for back-translation](https://github.com/pytorch/fairseq/tree/master/examples/backtranslation#training-your-own-model-wmt18-english-german)
- December 2019: [fairseq 0.9.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.9.0)
- November 2019: [VizSeq released (a visual analysis toolkit for evaluating fairseq models)](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example)
- November 2019: [CamemBERT model and code released](examples/camembert/README.md)
- November 2019: [BART model and code released](examples/bart/README.md)
- November 2019: [XLM-R models and code released](examples/xlmr/README.md)
- September 2019: [Nonautoregressive translation code released](examples/nonautoregressive_translation/README.md)
- August 2019: [WMT'19 models released](examples/wmt19/README.md)
- July 2019: fairseq relicensed under MIT license
- July 2019: [RoBERTa models and code released](examples/roberta/README.md)
- June 2019: [wav2vec models and code released](examples/wav2vec/README.md)
### Features:
Fairseq provides reference implementations of various sequence-to-sequence models, including:
- **Convolutional Neural Networks (CNN)**
- [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)
- [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
- [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
- [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
- [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- **LightConv and DynamicConv models**
- [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- **Long Short-Term Memory (LSTM) networks**
- Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015)
- **Transformer (self-attention) networks**
- Attention Is All You Need (Vaswani et al., 2017)
- [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
- [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
- [Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)](examples/language_model/transformer_lm/README.md)
- [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
- [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)
- [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md )
  - [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et al., 2020)](examples/mbart/README.md)
- [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md)
- **Non-autoregressive Transformers**
- Non-Autoregressive Neural Machine Translation (Gu et al., 2017)
- Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al. 2018)
- Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al. 2019)
- Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019)
- [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md)
**Additionally:**
- multi-GPU (distributed) training on one machine or across multiple machines
- fast generation on both CPU and GPU with multiple search algorithms implemented:
- beam search
- Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
- sampling (unconstrained, top-k and top-p/nucleus)
- large mini-batch training even on a single GPU via delayed updates
- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
We also provide [pre-trained models for translation and language modeling](#pre-trained-models-and-examples)
with a convenient `torch.hub` interface:
```python
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'
```
See the PyTorch Hub tutorials for [translation](https://pytorch.org/hub/pytorch_fairseq_translation/)
and [RoBERTa](https://pytorch.org/hub/pytorch_fairseq_roberta/) for more examples.

# Requirements and Installation
* [PyTorch](http://pytorch.org/) version >= 1.4.0
* Python version >= 3.6
* For training new models, you'll also need an NVIDIA GPU and [NCCL](https://github.com/NVIDIA/nccl)
* **For faster training** install NVIDIA's [apex](https://github.com/NVIDIA/apex) library:
```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--deprecated_fused_adam" --global-option="--xentropy" --global-option="--fast_multihead_attn" ./
```
To install fairseq:
```bash
pip install fairseq
```
On MacOS:
```bash
CFLAGS="-stdlib=libc++" pip install fairseq
```
If you use Docker make sure to increase the shared memory size either with
`--ipc=host` or `--shm-size` as command line options to `nvidia-docker run`.
**Installing from source**
To install fairseq from source and develop locally:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```
# Getting Started
The [full documentation](https://fairseq.readthedocs.io/) contains instructions
for getting started, training new models and extending fairseq with new model
types and tasks.
# Pre-trained models and examples
We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below,
as well as example training and evaluation commands.
- [Translation](examples/translation/README.md): convolutional and transformer models are available
- [Language Modeling](examples/language_model/README.md): convolutional and transformer models are available
- [wav2vec](examples/wav2vec/README.md): wav2vec large model is available
We also have more detailed READMEs to reproduce results from specific papers:
- [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md)
- [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md )
- [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md)
- [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
- [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
- [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
- [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
- [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
- [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
- [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
- [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)
# Join the fairseq community
* Facebook page: https://www.facebook.com/groups/fairseq.users
* Google group: https://groups.google.com/forum/#!forum/fairseq-users
# License
fairseq(-py) is MIT-licensed.
The license applies to the pre-trained models as well.
# Citation
Please cite as:
```bibtex
@inproceedings{ott2019fairseq,
title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
year = {2019},
}
```
|
powerpuffpomelo/mello_test
|
[
"arxiv:1610.02424",
"region:us"
] |
2023-05-21T08:56:40+00:00
|
{}
|
2023-05-21T08:59:00+00:00
|
8719629662b251f3e56d18eb2b802d1f304096e4
|
## Dataset Description
- **Homepage:** [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/)
- **Leaderboard:** [Leaderboard](https://www.zero.scrolls-benchmark.com/leaderboard)
- **Point of Contact:** [[email protected]]([email protected])
# Dataset Card for ZeroSCROLLS
## Overview
ZeroSCROLLS is a zero-shot benchmark for natural language understanding over long texts.
The validation sets contain only ~20 examples per task and are meant for eyeballing alone.
## Leaderboard
The ZeroSCROLLS benchmark leaderboard can be found [here](https://www.zero.scrolls-benchmark.com/leaderboard).
## Tasks
ZeroSCROLLS contains the following tasks:
#### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf))
GovReport is a summarization dataset of reports addressing various national policy issues published by the
Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.
The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets;
for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively.
#### SummScreenFD ([Chen et al., 2022](https://arxiv.org/pdf/2104.07091.pdf))
SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).
Given a transcript of a specific episode, the goal is to produce the episode's recap.
The original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts.
For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows,
making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows.
Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.
#### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf))
QMSum is a query-based summarization dataset, consisting of 232 meeting transcripts from multiple domains.
The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control,
and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.
Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions,
while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.
#### SQuALITY ([Wang et al., 2022](https://arxiv.org/pdf/2205.11465.pdf))
SQuALITY (Wang et al., 2022) is a question-focused summarization dataset, where given a story from Project Gutenberg,
the task is to produce a summary of the story or aspects of it based on a guiding question.
The questions and summaries are original and crowdsourced; experienced writers were guided to design questions that require reading significant parts of the story to answer correctly.
#### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf))
Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).
Questions were written by NLP practitioners after reading only the title and abstract of the papers,
while another set of NLP practitioners annotated the answers given the entire document.
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.
#### NarrativeQA ([Kočiský et al., 2018](https://arxiv.org/pdf/1712.07040.pdf))
NarrativeQA (Kočiský et al., 2018) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.
Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs,
resulting in about 30 questions and answers for each of the 1,567 books and scripts.
They were encouraged to use their own words rather than copying, and to avoid asking yes/no questions or ones about the cast.
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).
#### QuALITY ([Pang et al., 2022](https://arxiv.org/pdf/2112.08608.pdf))
QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg,
the Open American National Corpus, and more.
Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them,
human annotators must read large portions of the given document.
Reference answers were then calculated using the majority vote among the annotators' and writers' answers.
To measure the difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a short period of time to skim through the document.
As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.
#### MuSiQue ([Trivedi et al., 2022](https://arxiv.org/pdf/2108.00573.pdf))
MuSiQue is a multi-hop question answering dataset, where the inputs are 20 Wikipedia paragraphs and a question that requires multiple hops between different paragraphs.
In the original dataset, each question also has an unanswerable twin question, where the correct answer is not present in the paragraphs.
#### SpaceDigest (New)
SpaceDigest is a new sentiment aggregation task. Given 50 hotel reviews (without their ratings) from the Space dataset (Angelidis et al., 2021), the task is to determine the percentage of positive reviews.
#### BookSumSort (New)
BookSumSort is a new task based on the BookSum dataset (Kryściński et al., 2022), which contains summaries of chapters (or parts) of novels, plays, and long poems from various sources.
Given a shuffled list of chapter summaries, the task is to reorder them according to the original order of summaries in BookSum.
## Data Fields
Most datasets in the benchmark are in the same input-output format
- `input`: a `string` feature. The input document.
- `output`: this feature is always None, as ZeroSCROLLS contains only test sets.
- `id`: a `string` feature. Unique per input.
- `pid`: a `string` feature, identical to `id`. Facilitates evaluating tasks with multiple references per input.
- `document_start_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `document_end_index`: an `int32` feature. Character index that enables easy parsing of the context document.
- `query_start_index`: an `int32` feature. Character index that enables easy parsing of the query, if exists.
- `query_end_index`: an `int32` feature. Character index that enables easy parsing of the query, if exists.
- `truncation_seperator`: a `string` feature. The string appended to a trimmed context document, indicating that the context was trimmed.
Datasets containing multiple documents inside the `input` feature are MuSiQue, SpaceDigest, and BookSumSort. They also have the following feature:
- `inner_docs_start_indices`: a sequence of `int32` features. Character indexes that enable easy parsing of the inner documents, e.g. reviews or summaries.
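A minimal parsing sketch using these index fields, assuming an example `ex` has been loaded as a dict; treating equal start/end indices as "no query" and inner documents as contiguous are our assumptions, not guarantees of the format:
```python
def parse_example(ex: dict):
    """Slice the raw `input` string into its document and (optional) query parts."""
    document = ex["input"][ex["document_start_index"]:ex["document_end_index"]]
    query = None
    if ex["query_start_index"] < ex["query_end_index"]:  # assumption: no query otherwise
        query = ex["input"][ex["query_start_index"]:ex["query_end_index"]]
    return document, query

def split_inner_docs(ex: dict):
    """Split a multi-document input (MuSiQue, SpaceDigest, BookSumSort) into inner documents."""
    starts = list(ex["inner_docs_start_indices"])
    ends = starts[1:] + [ex["document_end_index"]]  # assumption: inner docs are contiguous
    return [ex["input"][s:e] for s, e in zip(starts, ends)]
```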
## Citation
If you use the ZeroSCROLLS data, **please make sure to cite all of the original dataset papers.** [[bibtex](https://zero-scrolls-tau.s3.us-east-2.amazonaws.com/zero_scrolls_datasets.bib)]
```
@inproceedings{shaham-etal-2023-zeroscrolls,
title = "{Z}ero{SCROLLS}: A Zero-Shot Benchmark for Long Text Understanding",
author = "Shaham, Uri and
Ivgi, Maor and
Efrat, Avia and
Berant, Jonathan and
Levy, Omer",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.536",
doi = "10.18653/v1/2023.findings-emnlp.536",
pages = "7977--7989"
}
```
|
tau/zero_scrolls
|
[
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:multiple-choice-qa",
"language:en",
"query-based-summarization",
"long-texts",
"arxiv:2104.02112",
"arxiv:2104.07091",
"arxiv:2104.05938",
"arxiv:2205.11465",
"arxiv:2105.03011",
"arxiv:1712.07040",
"arxiv:2112.08608",
"arxiv:2108.00573",
"region:us"
] |
2023-05-21T09:47:57+00:00
|
{"language": ["en"], "task_categories": ["question-answering", "summarization", "text-generation"], "task_ids": ["multiple-choice-qa"], "tags": ["query-based-summarization", "long-texts"]}
|
2024-01-12T12:31:16+00:00
|
c7030f942f4184016be3dfad99a62fc085100553
|
SIVANNIM/rvcmodels
|
[
"license:other",
"region:us"
] |
2023-05-21T10:09:19+00:00
|
{"license": "other"}
|
2023-05-21T10:11:40+00:00
|
|
4a71290ccb1872ceed05d5700e32b4d0575cbb46
|
# Dataset Card for "c6ba040f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/c6ba040f
|
[
"region:us"
] |
2023-05-21T10:20:30+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1341, "dataset_size": 186}}
|
2023-05-21T10:20:31+00:00
|
12e38c803b8556b8e8c6a14cb0fedb54f5d63fb5
|
# Dataset Card for "dataset_with_ocr1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Pratha1m/dataset_with_ocr1
|
[
"region:us"
] |
2023-05-21T10:38:40+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 147847251, "num_examples": 904}, {"name": "test", "num_bytes": 30521871, "num_examples": 190}], "download_size": 37387817, "dataset_size": 178369122}}
|
2023-05-21T10:38:58+00:00
|
d2164d0252fcdc375c4169fcffb23cffd41ff5b9
|
# Dataset Card for "dataset_with_ocr2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Pratha1m/dataset_with_ocr2
|
[
"region:us"
] |
2023-05-21T10:42:44+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 556079187, "num_examples": 904}, {"name": "test", "num_bytes": 116322831, "num_examples": 190}], "download_size": 37400722, "dataset_size": 672402018}}
|
2023-05-21T10:43:01+00:00
|
62030e0a7d387d2f279f093ba3d93e94bdc7dbb7
|
# Public Ground-Truth Dataset for Handwritten Circuit Diagrams (GTDB-HD)
This repository contains images of hand-drawn electrical circuit diagrams together with bounding box annotations for object detection and segmentation ground-truth files. The dataset is intended for training (e.g. neural network) models that extract electrical graphs from raster graphics.
## Structure
The folder structure is made up as follows:
```
gtdh-hd
│ README.md # This File
│ classes.json # Classes List
│ classes_color.json # Classes to Color Map
│ classes_discontinuous.json # Classes Morphology Info
│ classes_ports.json # Electrical Port Descriptions for Classes
│ consistency.py # Dataset Statistics and Consistency Check
| loader.py # Simple Dataset Loader and Storage Functions
│ segmentation.py # Multiclass Segmentation Generation
│ utils.py # Helper Functions
└───drafter_D
│ └───annotations # Bounding Box Annotations
│ │ │ CX_DY_PZ.xml
│ │ │ ...
│ │
│ └───images # Raw Images
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
│ │
│ └───instances # Instance Segmentation Polygons
│ │ │ CX_DY_PZ.json
│ │ │ ...
│ │
│ └───segmentation # Binary Segmentation Maps (Strokes vs. Background)
│ │ │ CX_DY_PZ.jpg
│ │ │ ...
...
```
Where:
- `D` is the (globally) running number of a drafter
- `X` is the (globally) running number of the circuit (12 Circuits per Drafter)
- `Y` is the Local Number of the Circuit's Drawings (2 Drawings per Circuit)
- `Z` is the Local Number of the Drawing's Image (4 Pictures per Drawing)
### Image Files
Every image is RGB-colored and either stored as `jpg`, `jpeg` or `png` (both uppercase and lowercase suffixes exist).
### Bounding Box Annotations
A complete list of class labels, including a suggested mapping table to integer numbers for training and prediction purposes, can be found in `classes.json`. The annotations contain **BB**s (Bounding Boxes) of **RoI**s (Regions of Interest) like electrical symbols or texts within the raw images and are stored in the [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format.
Please note: *For every Raw image in the dataset, there is an accompanying bounding box annotation file.*
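A minimal reading sketch using only the standard library, assuming the usual `object`/`bndbox` layout of the PASCAL VOC format:
```python
import xml.etree.ElementTree as ET

def read_voc_boxes(path: str):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) tuples from a VOC XML file."""
    root = ET.parse(path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(bb.findtext(k)))
                                  for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes

# e.g. read_voc_boxes("drafter_1/annotations/C1_D1_P1.xml")
```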
#### Known Labeled Issues
- C25_D1_P4 cuts off a text
- C27 cuts off some texts
- C29_D1_P1 has one additional text
- C31_D2_P4 is missing a text
- C33_D1_P4 is missing a text
- C46_D2_P2 cuts off a text
### Instance Segmentation
For every binary segmentation map, there is an accompanying polygonal annotation file for instance segmentation purposes, which is stored in the [labelme](https://github.com/wkentaro/labelme) format. Note that the contained polygons are quite coarse, intended to be used in conjunction with the binary segmentation maps for connection extraction and to tell individual instances with overlapping BBs apart.
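A minimal loading sketch, assuming the standard labelme JSON layout with a top-level `shapes` list:
```python
import json

def read_labelme_polygons(path: str):
    """Return a list of (label, [(x, y), ...]) polygon tuples from a labelme JSON file."""
    with open(path) as f:
        data = json.load(f)
    return [(shape["label"], [tuple(point) for point in shape["points"]])
            for shape in data["shapes"]]

# e.g. read_labelme_polygons("drafter_1/instances/C1_D1_P1.json")
```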
### Segmentation Maps
Binary segmentation images are available for some samples and have the same resolution as the respective image files. They contain only black and white pixels, indicating areas of drawing strokes and background, respectively.
### Netlists
For some images, there are also netlist files available, which are stored in the [ASC](http://ltwiki.org/LTspiceHelp/LTspiceHelp/Spice_Netlist.htm) format.
### Consistency and Statistics
This repository comes with a stand-alone script to:
- Obtain Statistics on
- Class Distribution
- BB Sizes
- Check the BB Consistency
- Classes with Regards to the `classes.json`
- Counts between Pictures of the same Drawing
- Ensure a uniform writing style of the Annotation Files (indent)
The respective script is called without arguments to operate on the **entire** dataset:
```
$ python3 consistency.py
```
Note that due to a complete re-write of the annotation data, the script takes several seconds to finish. A drafter can be specified as CLI argument to restrict the evaluation (for example drafter 15):
```
$ python3 consistency.py 15
```
### Multi-Class (Instance) Segmentation Processing
This dataset comes with a script to process both new and existing (instance) segmentation files. It is invoked as follows:
```
$ python3 segmentation.py <command> <drafter_id> <target> <source>
```
Where:
- `<command>` has to be one of:
- `transform`
- Converts existing BB Annotations to Polygon Annotations
- Default target folder: `instances`
- Existing polygon files will not be overridden in the default settings, hence this command has no effect in a completely populated dataset.
- Intended to be invoked after adding new binary segmentation maps
- **This step has to be performed before all other commands**
- `wire`
- Generates Wire Describing Polygons
- Default target folder: `wires`
- `keypoint`
- Generates Keypoints for Component Terminals
- Default target folder: `keypoints`
- `create`
- Generates Multi-Class segmentation Maps
- Default target folder: `segmentation_multi_class`
- `refine`
- Refines Coarse Polygon Annotations to precisely match the annotated objects
- Default target folder: `instances_refined`
- For instance segmentation purposes
- `pipeline`
- executes `wire`,`keypoint` and `refine` stacked, with one common `source` and `target` folder
- Default target folder: `instances_refined`
- `assign`
- Connector Point to Port Type Assignment by Geometric Transformation Matching
- `<drafter_id>` **optionally** restricts the process to one of the drafters
- `<target>` **optionally** specifies a divergent target folder for results to be placed in
- `<source>` **optionally** specifies a divergent source folder to read from
Please note that source and target folders are **always** subfolders inside the individual drafter folders. Specifying source and target folders allows stacking the results of individual processing steps. For example, to perform the entire pipeline for drafter 20 manually, use:
```
python3 segmentation.py wire 20 instances_processed instances
python3 segmentation.py keypoint 20 instances_processed instances_processed
python3 segmentation.py refine 20 instances_processed instances_processed
```
### Dataset Loader
This dataset is also shipped with a set of loader and writer functions, which are internally used by the segmentation and consistency scripts and can be used for training. The dataset loader is simple, framework-agnostic and has been prepared to be callable from any location in the file system. Basic usage:
```
from loader import read_dataset
db_bb = read_dataset() # Read all BB Annotations
db_seg = read_dataset(segmentation=True) # Read all Polygon Annotations
db_bb_val = read_dataset(drafter=12) # Read Drafter 12 BB Annotations
len(db_bb) # Get The Amount of Samples
db_bb[5] # Get an Arbitrary Sample
db = read_images(drafter=12) # Returns a list of (Image, Annotation) pairs
db = read_snippets(drafter=12) # Returns a list of (Image, Annotation) pairs
```
## Citation
If you use this dataset for scientific publications, please consider citing us as follows:
```
@inproceedings{thoma2021public,
title={A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images},
author={Thoma, Felix and Bayer, Johannes and Li, Yakun and Dengel, Andreas},
booktitle={International Conference on Document Analysis and Recognition},
pages={20--27},
year={2021},
organization={Springer}
}
```
## How to Contribute
If you want to contribute to the dataset as a drafter or in case of any further questions, please send an email to: <[email protected]> (corresponding author), <[email protected]>, <[email protected]>
## Guidelines
These guidelines are used throughout the generation of the dataset. They can be used as an instruction for participants and data providers.
### Drafter Guidelines
- 12 Circuits should be drawn, each of them twice (24 drawings in total)
- Most important: The drawing should be as natural to the drafter as possible
- Free-hand sketches are preferred; rulers and drawing template stencils should be avoided unless drawing without them feels unnatural to the drafter
- Different types of pens/pencils should be used for different drawings
- Different kinds of (colored, structured, ruled, lined) paper should be used
- One symbol set (European/American) should be used throughout one drawing (consistency)
- It is recommended to use the symbol set that the drafter is most familiar with
- It is **strongly** recommended to share the first one or two circuits for review by the dataset organizers before drawing the rest to avoid problems (complete redrawing in worst case)
### Image Capturing Guidelines
- For each drawing, 4 images should be taken (96 images in total per drafter)
- Angle should vary
- Lighting should vary
- Moderate (e.g. motion) blur is allowed
- All circuit-related aspects of the drawing must be _human-recognizable_
- The drawing should be the main part of the image, but _naturally_ occurring objects from the environment are welcome
- The first image should be _clean_, i.e. ideal capturing conditions
- Kinks and Buckling can be applied to the drawing between individual image capturing
- Try to use the file name convention (`CX_DY_PZ.jpg`) as early as possible
- The circuit range `X` will be given to you
- `Y` should be `1` or `2` for the drawing
- `Z` should be `1`,`2`,`3` or `4` for the picture
### Object Annotation Guidelines
- General Placement
- A **RoI** must be **completely** surrounded by its **BB**
- A **BB** should be as tight as possible to the **RoI**
- In case of connecting lines not completely touching the symbol, the BB should be extended (only by a small margin) to enclose those gaps (especially considering junctions)
- Characters that are part of the **essential symbol definition** should be included in the BB (e.g. the `+` of a polarized capacitor should be included in its BB)
- **Junction** annotations
- Used for actual junction points (Connection of three or more wire segments with a small solid circle)
- Used for connections of three or more straight-line wire segments where a physical connection can be inferred by context (i.e. can be distinguished from **crossover**)
- Used for wire line corners
- Redundant Junction Points should **not** be annotated (small solid circle in the middle of a straight line segment)
- Should not be used for corners or junctions that are part of the symbol definition (e.g. Transistors)
- **Crossover** Annotations
- For dashed/dotted lines, the BB should cover the two nearest dots/dashes
- **Text** annotations
- Individual text lines should be annotated individually
- Text blocks should only be annotated if related to the circuit or its components
- Semantically meaningful chunks of information should be annotated individually
- Component characteristics should be enclosed in a single annotation (e.g. __100Ohms__, __10%__ tolerance, __5V__ max voltage)
- Component Names and Types (e.g. __C1__, __R5__, __ATTINY2313__)
- Custom Component Terminal Labels (i.e. __Integrated Circuit__ Pins)
- Circuit Descriptor (e.g. "Radio Amplifier")
- Texts not related to the Circuit should be ignored
- e.g. letterhead paper, company logos
- Drafters' auxiliary markings for internal organization like "D12"
- Texts on Surrounding or Background Papers
- Characters which are part of the essential symbol definition should __not__ be annotated as Text dedicatedly
- e.g. Schmitt trigger __S__, AND gate __&__, motor __M__, polarized capacitor __+__
- Only add a terminal text annotation if the terminal is not part of the essential symbol definition
- **Table** cells should be annotated independently
- **Operational Amplifiers**
- Both the triangular US symbols and the European IC-like symbols for OpAmps should be labeled `operational_amplifier`
- The `+` and `-` signs at the OpAmp's input terminals are considered essential and should therefore not be annotated as texts
- **Complex Components**
- Both the entire component and its sub-components and internal connections should be annotated:
| Complex Component | Annotation |
| ----------------- | ------------------------------------------------------ |
| Optocoupler | 0. `optocoupler` as Overall Annotation |
| | 1. `diode.light_emitting` |
| | 2. `transistor.photo` (or `resistor.photo`) |
| | 3. `optical` if LED and Photo-Sensor arrows are shared |
| | Then the arrows' area should be included in all |
| Relay | 0. `relay` as Overall Annotation |
| (also for | 1. `inductor` |
| coupled switches) | 2. `switch` |
| | 3. `mechanical` for the dashed line between them |
| Transformer | 0. `transformer` as Overall Annotation |
| | 1. `inductor` or `inductor.coupled` (watch the dot) |
| | 2. `magnetic` for the core |
#### Rotation Annotations
The rotation (an integer in degrees) should capture the overall rotation of the symbol shape. However, the position of the terminals should also be taken into consideration. Under idealized circumstances (no perspective distortion and symbols drawn accurately according to the symbol library), these two requirements coincide. For pathological cases, however, in which the shape and the set of terminals (or even individual terminals) are conflicting, the rotation should compromise between all factors.
Rotation annotations are currently work in progress. They should be provided for at least the following classes:
- "voltage.dc"
- "resistor"
- "capacitor.unpolarized"
- "diode"
- "transistor.bjt"
#### Text Annotations
- The character sequence in a text label annotation should describe the actual characters depicted in the respective bounding box as precisely as possible
- Bounding box annotations of class `text`
- Bear an additional `<text>` tag in which their content is given as a string
- The `Omega` and `Mikro` symbols are escaped accordingly
- Currently work in progress
- The utils script allows for migrating text annotations from one annotation file to another: `python3 utils.py source target`
### Segmentation Map Guidelines
- Areas of __intended__ drawing strokes (ink and pencil abrasion, respectively) should be marked black; all other pixels (background) should be white
- Strokes shining through the paper (from the rear side or other sheets) should be considered background
### Polygon Annotation Guidelines
0. Before starting, make sure the respective files exist for the image sample to be polygon-annotated:
- BB Annotations (Pascal VOC XML File)
- (Binary) Segmentation Map
1. Transform the BB annotations into raw polygons
- Use: `python3 segmentation.py transform`
2. Refine the Polygons
- **To Avoid Embedding Image Data into the resulting JSON**, use: `labelme --nodata`
- Just make sure there are no overlaps between instances
- Especially take care about overlaps with structural elements like junctions and crossovers
3. Generate Multi-Class Segmentation Maps from the refined polygons
- Use: `python3 segmentation.py create`
- Use the generated images for a visual inspection
- After spotting problems, continue with Step 2
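Taken together, one refinement pass might look like the following sketch (the directory argument to `labelme` is an assumption; adapt it to your local layout):
```
python3 segmentation.py transform    # 1. turn BB annotations into raw polygons
labelme --nodata drafter_12/         # 2. refine the polygons by hand
python3 segmentation.py create       # 3. render multi-class segmentation maps
```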
### Terminal Annotation Guidelines
```
labelme --labels "connector" --config "{shift_auto_shape_color: 1}" --nodata
```
|
lowercaseonly/cghd
|
[
"task_categories:object-detection",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:en",
"language:de",
"license:cc-by-3.0",
"region:us"
] |
2023-05-21T11:20:21+00:00
|
{"language": ["en", "de"], "license": "cc-by-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection", "image-segmentation"], "pretty_name": "A Public Ground-Truth Dataset for Handwritten Circuit Diagram Images"}
|
2023-05-22T07:57:28+00:00
|
727a8d2ed66e357f868fbbee249f6d50a93c4522
|
The dataset covers Ukrainian reviews in three different domains:
1) Hotels.
2) Restaurants.
3) Products.
The dataset comprises several .csv files, which one may find useful (a loading sketch follows this list):
1) processed_data.csv - the processed dataset itself.
2) train_val_test_indices.csv - csv file with train/val/test indices. The split was stratified w.r.t. dataset name (hotels, restaurants, products) and rating.
3) bad_ids.csv - csv file with ids of bad samples marked using a model-filtering approach; only ids of those samples for which the difference between actual and predicted rating is bigger than 2 points are kept in this file.
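A minimal loading sketch (the column names `id` and `split` are assumptions, not documented fields):
```python
import pandas as pd

data = pd.read_csv("processed_data.csv")
splits = pd.read_csv("train_val_test_indices.csv")
bad_ids = set(pd.read_csv("bad_ids.csv")["id"])

# Drop the model-filtered samples, then select the training portion.
clean = data[~data["id"].isin(bad_ids)]
train_ids = splits.loc[splits["split"] == "train", "id"]
train = clean[clean["id"].isin(train_ids)]
```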
The data is scraped from Tripadvisor (https://www.tripadvisor.com/) and Rozetka (https://rozetka.com.ua/).
The dataset was initially used for extraction of key-phrases relevant to one of the rating categories, based on a trained machine learning model (future article link will be here).
The dataset is processed to include two additional columns: one with lemmatized tokens and another with POS tags. Both lemmatization and POS tagging are done using the pymorphy2 (https://pymorphy2.readthedocs.io/en/stable/) library.
The words are tokenized using a specific regex tokenizer to account for the usage of the apostrophe; an illustrative pattern is sketched below.
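A minimal illustrative pattern (an assumption, not the project's actual tokenizer) that keeps the apostrophe inside word tokens:
```python
import re

# Both the ASCII apostrophe and the Ukrainian apostrophe (U+02BC) occur in practice.
TOKEN_RE = re.compile(r"[\w'ʼ]+")

TOKEN_RE.findall("м'ята та кавʼярня")  # -> ["м'ята", "та", "кавʼярня"]
```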
Reviews that weren't in Ukrainian were translated to Ukrainian using Microsoft Translator and re-checked manually afterwards.
|
vkovenko/cross_domain_uk_reviews
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:uk",
"license:cc",
"region:us"
] |
2023-05-21T11:42:26+00:00
|
{"language": ["uk"], "license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]}
|
2023-05-21T13:49:09+00:00
|
cf0c7f230fe855cd7485e454dbc84a4d42c6bd3b
|
juege/ssss
|
[
"license:openrail",
"region:us"
] |
2023-05-21T12:01:52+00:00
|
{"license": "openrail"}
|
2023-05-21T12:50:22+00:00
|
|
0b11190693a3140d2fa4005555aefdabb6009b3b
|
# Dataset Card for "DOA_dataset_6_classes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FidelOdok/DOA_dataset_6_classes
|
[
"region:us"
] |
2023-05-21T12:32:21+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6"}}}}], "splits": [{"name": "train", "num_bytes": 48226567777.932, "num_examples": 125738}], "download_size": 4496848646, "dataset_size": 48226567777.932}}
|
2023-05-21T14:23:05+00:00
|
72e7d1a86c05c9ee6242b00c351cd9a6104d50fc
|
# Dataset Card for "flores200_devtest_mt5-1b-flores200-packed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hlillemark/flores200_devtest_mt5-1b-flores200-packed
|
[
"region:us"
] |
2023-05-21T12:33:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "source_lang", "dtype": "string"}, {"name": "target_lang", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "chrf_unreduced", "dtype": "string"}], "splits": [{"name": "devtest", "num_bytes": 375897794, "num_examples": 500000}], "download_size": 258138418, "dataset_size": 375897794}}
|
2023-05-21T12:34:01+00:00
|
211c48d57b05be3319b311b240ebdcae0e78185d
|
# Dataset Card for "chunk_151"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_151
|
[
"region:us"
] |
2023-05-21T12:47:31+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 973488560, "num_examples": 191180}], "download_size": 991743849, "dataset_size": 973488560}}
|
2023-05-21T12:48:13+00:00
|
1771c1cd2ad7c4c58c90bdba78e90d5792d70a3d
|
# Dataset Card for "chunk_158"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_158
|
[
"region:us"
] |
2023-05-21T12:56:13+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1016276636, "num_examples": 199583}], "download_size": 1035715202, "dataset_size": 1016276636}}
|
2023-05-21T12:56:45+00:00
|
f314c2d15c0adf51ca46e47006f2a42c18630925
|
# Dataset Card for "chunk_156"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_156
|
[
"region:us"
] |
2023-05-21T13:07:47+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1179022048, "num_examples": 231544}], "download_size": 1200963148, "dataset_size": 1179022048}}
|
2023-05-21T13:08:26+00:00
|
26a997e799f066038ec2afcd396d42d0eea83c2a
|
# Dataset Card for "chunk_163"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_163
|
[
"region:us"
] |
2023-05-21T13:10:01+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1128570512, "num_examples": 221636}], "download_size": 1150087634, "dataset_size": 1128570512}}
|
2023-05-21T13:11:02+00:00
|
aa3ebe3acada32683ae47c5090dba3611249abe9
|
# Dataset Card for "chunk_162"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_162
|
[
"region:us"
] |
2023-05-21T13:11:25+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1145537056, "num_examples": 224968}], "download_size": 1168999281, "dataset_size": 1145537056}}
|
2023-05-21T13:12:25+00:00
|
5b8ea7c1ec4d59e1daa4785eb7ea7db4b2e24483
|
# Dataset Card for "chunk_155"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_155
|
[
"region:us"
] |
2023-05-21T13:12:40+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1319571432, "num_examples": 259146}], "download_size": 1344733360, "dataset_size": 1319571432}}
|
2023-05-21T13:15:02+00:00
|
0cef48a99d5c81ac05fd852759ff946fe39fe1c5
|
# Dataset Card for "chunk_154"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_154
|
[
"region:us"
] |
2023-05-21T13:15:11+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1163379424, "num_examples": 228472}], "download_size": 1182358065, "dataset_size": 1163379424}}
|
2023-05-21T13:16:13+00:00
|
5312461f888c8ab233ded22403dd7fec992f542e
|
# Dataset Card for "chunk_153"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_153
|
[
"region:us"
] |
2023-05-21T13:34:18+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1123768756, "num_examples": 220693}], "download_size": 1147222908, "dataset_size": 1123768756}}
|
2023-05-21T13:36:21+00:00
|
ee90fed2c13d24cf6e5c52392d8659e8bce583bf
|
# Dataset Card for "chunk_161"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_161
|
[
"region:us"
] |
2023-05-21T13:35:04+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1150710528, "num_examples": 225984}], "download_size": 1173677880, "dataset_size": 1150710528}}
|
2023-05-21T13:37:10+00:00
|
26aaa0bbaefcb82ed0dbd5e1cf72f1684db47274
|
This dataset contains raw prompts from MidJourney v5.
Total records: 4,245,117
Sample data:
| AuthorID | Author | Date | Content | Attachments | Reactions |
| --- | --- | --- | --- | --- | --- |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | benjamin frankling with rayban sunglasses reflecting a usa flag walking on a side of penguin, whit... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276830525538494/vanDyke_benjamin_frank...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | Street vendor robot in 80's Poland, meat market, fruit stall, communist style, real photo, real ph... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276841426526290/alepasztet_Street_vend...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | one of the guys is looking at another man , in the style of kris knight, realistic, detailed rende... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276845394333818/iflwlou_one_of_the_guy...) | |
You can clean the data with the help of the data-cleaning notebook provided with the dataset; a minimal sketch follows below.
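A rough, hypothetical starting point (the CSV file name and cleaning steps are assumptions; see the provided notebook for the full procedure):
```python
import pandas as pd

# Load the raw export and keep only non-empty prompt texts.
df = pd.read_csv("midjourney_v5_prompts.csv")
prompts = df["Content"].dropna().str.strip()
prompts = prompts[prompts != ""]
print(len(prompts), "prompts")
```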
|
tarungupta83/MidJourney_v5_Prompt_dataset
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-21T13:37:31+00:00
|
{"license": "apache-2.0"}
|
2023-05-21T13:46:19+00:00
|
3077aac85c744c585bb3c83739473f957c9f3274
|
HaiderSultanArc/Unani-Dataset
|
[
"license:mit",
"region:us"
] |
2023-05-21T13:37:31+00:00
|
{"license": "mit"}
|
2023-05-21T13:37:31+00:00
|
|
9780033795c4143da6a3a4f387c9c9d6860bc5c3
|
# Dataset Card for "chunk_159"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_159
|
[
"region:us"
] |
2023-05-21T13:42:08+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1233577736, "num_examples": 242258}], "download_size": 1257136487, "dataset_size": 1233577736}}
|
2023-05-21T13:44:21+00:00
|
d13921ba8de09aa4c13c6808a50d675baf50d4b8
|
## Homepage
Exploring and Verbalizing Academic Ideas by Concept Co-occurrence
[https://github.com/xyjigsaw/Kiscovery](https://github.com/xyjigsaw/Kiscovery)
## Evolving Concept Co-occurrence Graph
This is the official **Evolving Concept Co-occurrence Graph** dataset of the paper *Exploring and Verbalizing Academic Ideas by Concept Co-occurrence*.
To train our model for temporal link prediction, we first collect 240 essential and common queries from 19 disciplines and one special topic (COVID-19). Then, we enter these queries into the paper database to fetch the most relevant papers between 2000 and 2021 with Elasticsearch, a modern text retrieval engine that stores and retrieves papers. Afterward, we use information extraction tools including [AutoPhrase](https://github.com/shangjingbo1226/AutoPhrase) to identify concepts. Only high-quality concepts that appear in our database will be preserved. Finally, we construct 240 evolving concept co-occurrence graphs, each containing 22 snapshots according to the co-occurrence relationship. The statistics of the concept co-occurrence graphs are provided in Appendix I.
Download with git; note that you should install git-lfs first:
```bash
sudo apt-get install git-lfs
# OR
brew install git-lfs
git lfs install
git clone https://huggingface.co/datasets/Reacubeth/ConceptGraph
```
## Citation
If you use our work in your research or publication, please cite us as follows:
```
@inproceedings{xu2023exploring,
title={Exploring and Verbalizing Academic Ideas by Concept Co-occurrence},
author={Xu, Yi and Sheng, Shuqian and Xue, Bo and Fu, Luoyi and Wang, Xinbing and Zhou, Chenghu},
booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
year={2023}
}
```
Please let us know if you have any questions or feedback. Thank you for your interest in our work!
|
Reacubeth/ConceptGraph
|
[
"license:gpl-3.0",
"region:us"
] |
2023-05-21T14:38:05+00:00
|
{"license": "gpl-3.0"}
|
2023-05-22T06:48:29+00:00
|
c493721e19e296eb615420036f2a2eed08412bb4
|
# ImageRewardDB
## Dataset Description
- **Homepage: https://huggingface.co/datasets/wuyuchen/ImageRewardDB**
- **Repository: https://github.com/THUDM/ImageReward**
- **Paper: https://arxiv.org/abs/2304.05977**
### Dataset Summary
ImageRewardDB is a comprehensive text-to-image comparison dataset, focusing on text-to-image human preference.
It consists of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB.
To build ImageRewardDB, we design a pipeline tailored for it, establishing criteria for quantitative assessment and
annotator training, optimizing the labeling experience, and ensuring quality validation. ImageRewardDB is now publicly available at
[🤗 Hugging Face Dataset](https://huggingface.co/datasets/wuyuchen/ImageRewardDB).
Notice: All images in ImageRewardDB are collected from DiffusionDB, and in addition, we gathered together images corresponding to the same prompt.
### Languages
The text in the dataset is all in English.
### Four Subsets
Considering that the ImageRewardDB contains a large number of images, we provide four subsets in different scales to support different needs.
For all subsets, the validation and test splits remain the same. The validation split (1.10GB) contains 412 prompts and 2.6K images (7.32K pairs), and
the test split (1.16GB) contains 466 prompts and 2.7K images (7.23K pairs). The information on the train split at different scales is as follows:
|Subset|Num of Pairs|Num of Images|Num of Prompts|Size|
|:--|--:|--:|--:|--:|
|ImageRewardDB 1K|17.6K|6.2K|1K|2.7GB|
|ImageRewardDB 2K|35.5K|12.5K|2K|5.5GB|
|ImageRewardDB 4K|71.0K|25.1K|4K|10.8GB|
|ImageRewardDB 8K|141.1K|49.9K|8K|20.9GB|
## Dataset Structure
All the data in this repository is stored in a well-organized way. The 62.6K images in ImageRewardDB are split into several folders,
stored in corresponding directories under "./images" according to their split. Each folder contains around 500 prompts, their corresponding
images, and a JSON file. The JSON file links the image with its corresponding prompt and annotation.
The file structure is as follows:
```
# ImageRewardDB
./
├── images
│ ├── train
│ │ ├── train_1
│ │ │ ├── 0a1ed3a5-04f6-4a1b-aee6-d584e7c8ed9c.webp
│ │ │ ├── 0a58cfa8-ff61-4d31-9757-27322aec3aaf.webp
│ │ │ ├── [...]
│ │ │ └── train_1.json
│ │ ├── train_2
│ │ ├── train_3
│ │ ├── [...]
│ │ └── train_32
│ ├── validation
│ │ └── [...]
│ └── test
│ └── [...]
├── metadata-train.parquet
├── metadata-validation.parquet
└── metadata-test.parquet
```
The sub-folders have the name of {split_name}_{part_id}, and the JSON file has the same name as the sub-folder.
Each image is a lossless WebP file and has a unique name generated by [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier).
### Data Instances
For instance, below is the information of an example image (`0280642d-f69f-41d1-8598-5a44e296aa8b.webp`) as recorded in train_1.json.
```json
{
"image_path": "images/train/train_1/0280642d-f69f-41d1-8598-5a44e296aa8b.webp",
"prompt_id": "000864-0061",
"prompt": "painting of a holy woman, decorated, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k ",
"classification": "People",
"image_amount_in_total": 9,
"rank": 5,
"overall_rating": 4,
"image_text_alignment_rating": 3,
"fidelity_rating": 4
}
```
### Data Fields
* image: The image object
* prompt_id: The id of the corresponding prompt
* prompt: The text of the corresponding prompt
* classification: The classification of the corresponding prompt
* image_amount_in_total: Total amount of images related to the prompt
* rank: The relative rank of the image in all related images
* overall_rating: The overall score of this image
* image_text_alignment_rating: The score of how well the generated image matches the given text
* fidelity_rating: The score of whether the output image is true to the shape and characteristics that the object should have
### Data Splits
As we mentioned above, all the subsets we provide, regardless of scale, have three splits: "train", "validation", and "test".
And all the subsets share the same validation and test splits.
### Dataset Metadata
We also include three metadata tables `metadata-train.parquet`, `metadata-validation.parquet`, and `metadata-test.parquet` to
help you access and comprehend ImageRewardDB without downloading the Zip files.
All the tables share the same schema, and each row refers to an image. The schema is shown below,
and actually, the JSON files we mentioned above share the same schema:
|Column|Type|Description|
|:---|:---|:---|
|`image_path`|`string`|The relative path of the image in the repository.|
|`prompt_id`|`string`|The id of the corresponding prompt.|
|`prompt`|`string`|The text of the corresponding prompt.|
|`classification`|`string`| The classification of the corresponding prompt.|
|`image_amount_in_total`|`int`| Total amount of images related to the prompt.|
|`rank`|`int`| The relative rank of the image in all related images.|
|`overall_rating`|`int`|The overall score of this image.|
|`image_text_alignment_rating`|`int`|The score of how well the generated image matches the given text.|
|`fidelity_rating`|`int`|The score of whether the output image is true to the shape and characteristics that the object should have.|
Below is an example row from metadata-train.parquet.
|image_path|prompt_id|prompt|classification|image_amount_in_total|rank|overall_rating|image_text_alignment_rating|fidelity_rating|
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
|images/train/train_1/1b4b2d61-89c2-4091-a1c0-f547ad5065cb.webp|001324-0093|a magical forest that separates the good world from the dark world, ...|Outdoor Scenes|8|3|6|6|6|
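To browse the metadata without touching the image archives, a short illustrative sketch (assumes the parquet file has already been downloaded locally):
```python
import pandas as pd

meta = pd.read_parquet("metadata-train.parquet")
print(meta[["prompt_id", "rank", "overall_rating"]].head())
```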
## Loading ImageRewardDB
You can use the Hugging Face [Datasets](https://huggingface.co/docs/datasets/quickstart) library to easily load the ImageRewardDB.
As we mentioned before, we provide four subsets at the scales of 1k, 2k, 4k, and 8k. You can load them as follows:
```python
from datasets import load_dataset
# Load the 1K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "1k")
# Load the 2K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "2k")
# Load the 4K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "4k")
# Load the 8K-scale dataset
dataset = load_dataset("THUDM/ImageRewardDB", "8k")
```
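Once loaded, a quick illustrative peek at one training example (field names taken from the schema above):
```python
example = dataset["train"][0]
print(example["prompt"])
print("rank:", example["rank"], "overall:", example["overall_rating"])
```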
## Additional Information
### Licensing Information
The ImageRewardDB dataset is available under the [Apache license 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```
@misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
THUDM/ImageRewardDB
|
[
"task_categories:text-to-image",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"arxiv:2304.05977",
"region:us"
] |
2023-05-21T14:39:22+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-to-image"], "pretty_name": "ImageReward Dataset"}
|
2023-06-21T05:36:29+00:00
|
26cc5fcf40e91a111c5797dd64f4cdf9e9b823e5
|
# Dataset Card for "chunk_166"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mask-distilled-one-sec-cv12/chunk_166
|
[
"region:us"
] |
2023-05-21T14:45:37+00:00
|
{"dataset_info": {"features": [{"name": "logits", "sequence": "float32"}, {"name": "mfcc", "sequence": {"sequence": "float64"}}], "splits": [{"name": "train", "num_bytes": 1164112672, "num_examples": 228616}], "download_size": 1189684678, "dataset_size": 1164112672}}
|
2023-05-21T14:46:30+00:00
|