sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
cc38cbe53b8986643d88530a449ed7467f99b56f
|
# Dataset Card for "mmlu-high_school_physics-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-high_school_physics-neg-answer
|
[
"region:us"
] |
2023-05-09T06:42:44+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 66580, "num_examples": 151}], "download_size": 37589, "dataset_size": 66580}}
|
2023-05-15T04:52:38+00:00
|
c6e3a6db0b6817e37259ae5bf98e6e66a646fc21
|
# Dataset Card for "mmlu-high_school_psychology-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-high_school_psychology-neg-answer
|
[
"region:us"
] |
2023-05-09T06:44:33+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 181482, "num_examples": 545}], "download_size": 108462, "dataset_size": 181482}}
|
2023-05-15T04:56:18+00:00
|
707b4340b7f20b77a05a1d17fc8b7f7fe7a208df
|
# Dataset Card for "mmlu-high_school_statistics-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-high_school_statistics-neg-answer
|
[
"region:us"
] |
2023-05-09T06:45:09+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 123885, "num_examples": 216}], "download_size": 66388, "dataset_size": 123885}}
|
2023-05-15T04:57:17+00:00
|
ae1f00fe9d4551c5e0e7351850f28242d13166dc
|
# Dataset Card for "mmlu-high_school_us_history-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-high_school_us_history-neg-answer
|
[
"region:us"
] |
2023-05-09T06:45:59+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 309230, "num_examples": 204}], "download_size": 163790, "dataset_size": 309230}}
|
2023-05-15T04:58:41+00:00
|
fc6b46bd859762968512c3c49574444c198edbcf
|
# Dataset Card for "mmlu-high_school_world_history-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-high_school_world_history-neg-answer
|
[
"region:us"
] |
2023-05-09T06:46:51+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 393405, "num_examples": 237}], "download_size": 212102, "dataset_size": 393405}}
|
2023-05-15T05:00:06+00:00
|
a5b1fb0d98f31111bb0a3599b8d806465749e383
|
# Dataset Card for "mmlu-human_aging-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-human_aging-neg-answer
|
[
"region:us"
] |
2023-05-09T06:47:38+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 53595, "num_examples": 223}], "download_size": 37070, "dataset_size": 53595}}
|
2023-05-15T05:01:29+00:00
|
9895516732cf9184acc203a965c6829a4597eaff
|
# Dataset Card for "mmlu-human_sexuality-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-human_sexuality-neg-answer
|
[
"region:us"
] |
2023-05-09T06:48:09+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 36907, "num_examples": 131}], "download_size": 26930, "dataset_size": 36907}}
|
2023-05-15T05:02:26+00:00
|
3a3a789da34865ef766240628c34ef9434564fdc
|
# Dataset Card for "mmlu-international_law-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-international_law-neg-answer
|
[
"region:us"
] |
2023-05-09T06:48:22+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 67365, "num_examples": 121}], "download_size": 38742, "dataset_size": 67365}}
|
2023-05-15T05:02:44+00:00
|
8776567009458292a4212c08770da13b3d05990e
|
# Dataset Card for "mmlu-jurisprudence-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-jurisprudence-neg-answer
|
[
"region:us"
] |
2023-05-09T06:48:39+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 40272, "num_examples": 108}], "download_size": 27824, "dataset_size": 40272}}
|
2023-05-15T05:03:17+00:00
|
0ab2939c58a743bdfb098732614a89a6f36e12fe
|
# Dataset Card for "mmlu-logical_fallacies-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-logical_fallacies-neg-answer
|
[
"region:us"
] |
2023-05-09T06:49:30+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 58621, "num_examples": 163}], "download_size": 28609, "dataset_size": 58621}}
|
2023-05-15T05:04:52+00:00
|
2a0ea2329c56245baa75e214a818e6918f13f871
|
# Dataset Card for "mmlu-machine_learning-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-machine_learning-neg-answer
|
[
"region:us"
] |
2023-05-09T06:49:59+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 36792, "num_examples": 112}], "download_size": 21874, "dataset_size": 36792}}
|
2023-05-15T05:05:39+00:00
|
59f3c9559afd47088baefcce71da6a6586edacf2
|
# Dataset Card for "mmlu-management-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-management-neg-answer
|
[
"region:us"
] |
2023-05-09T06:50:26+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 23489, "num_examples": 103}], "download_size": 17634, "dataset_size": 23489}}
|
2023-05-15T05:06:27+00:00
|
a2c2d28d4728832a1921d9980c778ab82aeb174f
|
# Dataset Card for "mmlu-marketing-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-marketing-neg-answer
|
[
"region:us"
] |
2023-05-09T06:51:19+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 70803, "num_examples": 234}], "download_size": 43225, "dataset_size": 70803}}
|
2023-05-15T05:08:06+00:00
|
96a041af157e2201ef5502f499b1f248c859500e
|
# Dataset Card for "mmlu-medical_genetics-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-medical_genetics-neg-answer
|
[
"region:us"
] |
2023-05-09T06:51:47+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 24607, "num_examples": 100}], "download_size": 19647, "dataset_size": 24607}}
|
2023-05-15T05:08:57+00:00
|
44312c82edd199f7bffdc0c26c4ae2f0b0cbe918
|
# Dataset Card for "mmlu-miscellaneous-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-miscellaneous-neg-answer
|
[
"region:us"
] |
2023-05-09T06:54:29+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 170602, "num_examples": 783}], "download_size": 117116, "dataset_size": 170602}}
|
2023-05-15T05:14:24+00:00
|
7c49cbe59a8c70e907a97d1374eae358ab9adfee
|
# Dataset Card for "mmlu-moral_disputes-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-moral_disputes-neg-answer
|
[
"region:us"
] |
2023-05-09T06:55:26+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 126761, "num_examples": 346}], "download_size": 73650, "dataset_size": 126761}}
|
2023-05-15T05:16:07+00:00
|
3dd9f142bf15e0a5ce6425df4b8a0e5ddffde745
|
# Dataset Card for "mmlu-moral_scenarios-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-moral_scenarios-neg-answer
|
[
"region:us"
] |
2023-05-09T06:58:45+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 392005, "num_examples": 895}], "download_size": 94736, "dataset_size": 392005}}
|
2023-05-15T05:22:15+00:00
|
0bcdc88fea8fa9c38eb388149aa1febdc125d9ae
|
# Dataset Card for "mmlu-nutrition-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-nutrition-neg-answer
|
[
"region:us"
] |
2023-05-09T06:59:53+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 110092, "num_examples": 306}], "download_size": 67198, "dataset_size": 110092}}
|
2023-05-15T05:24:14+00:00
|
fd1284ac5ba77a238ed2b96d304bdc814a22c5b4
|
# Dataset Card for "mmlu-philosophy-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-philosophy-neg-answer
|
[
"region:us"
] |
2023-05-09T07:00:48+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 93843, "num_examples": 311}], "download_size": 58150, "dataset_size": 93843}}
|
2023-05-15T05:25:48+00:00
|
cfc93759f1d91bc559eecad0c2a092ba9d1e2102
|
# Dataset Card for "mmlu-prehistory-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-prehistory-neg-answer
|
[
"region:us"
] |
2023-05-09T07:02:09+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 103985, "num_examples": 324}], "download_size": 65025, "dataset_size": 103985}}
|
2023-05-15T05:28:20+00:00
|
76d0d6e4a14952a74271b1a1beb480f6387ad48f
|
# Dataset Card for "mmlu-professional_accounting-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-professional_accounting-neg-answer
|
[
"region:us"
] |
2023-05-09T07:03:04+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 138388, "num_examples": 282}], "download_size": 79040, "dataset_size": 138388}}
|
2023-05-15T05:29:59+00:00
|
065c70aa0f8773bf97117462764fd7bf2e05c6a1
|
# Dataset Card for "mmlu-professional_law-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-professional_law-neg-answer
|
[
"region:us"
] |
2023-05-09T07:05:48+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2047921, "num_examples": 1534}], "download_size": 1128003, "dataset_size": 2047921}}
|
2023-05-15T05:34:17+00:00
|
0ad461c3497e2692cbbc58886ef625a643ef8c82
|
# Dataset Card for "mmlu-professional_medicine-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-professional_medicine-neg-answer
|
[
"region:us"
] |
2023-05-09T07:07:01+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 227505, "num_examples": 272}], "download_size": 132698, "dataset_size": 227505}}
|
2023-05-15T05:36:35+00:00
|
c4eb5237081623228f72587ec3b8ad15d82e3c4b
|
san5167/new-user-data
|
[
"language:aa",
"license:bigcode-openrail-m",
"region:us"
] |
2023-05-09T07:08:42+00:00
|
{"language": ["aa"], "license": "bigcode-openrail-m"}
|
2023-08-03T11:06:45+00:00
|
|
e725cbbae4705d7bf4ad5bf0830a5e9bfb4c52d7
|
# Dataset Card for "mmlu-professional_psychology-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-professional_psychology-neg-answer
|
[
"region:us"
] |
2023-05-09T07:08:43+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 259865, "num_examples": 612}], "download_size": 155435, "dataset_size": 259865}}
|
2023-05-15T05:39:32+00:00
|
ee286b04ced0c78d30610a46472f7bd5cc07d8d6
|
# Dataset Card for "mmlu-public_relations-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-public_relations-neg-answer
|
[
"region:us"
] |
2023-05-09T07:09:08+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 32498, "num_examples": 110}], "download_size": 23737, "dataset_size": 32498}}
|
2023-05-15T05:40:19+00:00
|
7dd3969085d398e9afc1a4fcf0d6aee41575f871
|
# Dataset Card for "mmlu-security_studies-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-security_studies-neg-answer
|
[
"region:us"
] |
2023-05-09T07:09:42+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 248109, "num_examples": 245}], "download_size": 139044, "dataset_size": 248109}}
|
2023-05-15T05:41:12+00:00
|
8af7b3dcb77ec96058dcc323eacc5b5108ccae7a
|
# Dataset Card for "mmlu-sociology-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-sociology-neg-answer
|
[
"region:us"
] |
2023-05-09T07:10:27+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 78697, "num_examples": 201}], "download_size": 52898, "dataset_size": 78697}}
|
2023-05-15T05:42:35+00:00
|
313d40bec69cf046c0312693b05ba0f31844b2f2
|
# Dataset Card for "mmlu-us_foreign_policy-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-us_foreign_policy-neg-answer
|
[
"region:us"
] |
2023-05-09T07:10:49+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 33831, "num_examples": 100}], "download_size": 23499, "dataset_size": 33831}}
|
2023-05-15T05:43:12+00:00
|
995e543f0271893880d9d8db56ec0979ac00d3f2
|
# Dataset Card for "mmlu-virology-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-virology-neg-answer
|
[
"region:us"
] |
2023-05-09T07:11:34+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 44531, "num_examples": 166}], "download_size": 31956, "dataset_size": 44531}}
|
2023-05-15T05:44:38+00:00
|
a3a0686929c00777a1771cbbe5790d10a34439f6
|
# Dataset Card for "mmlu-world_religions-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
joey234/mmlu-world_religions-neg-answer
|
[
"region:us"
] |
2023-05-09T07:12:15+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "neg_answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 29072, "num_examples": 171}], "download_size": 22307, "dataset_size": 29072}}
|
2023-05-15T05:45:54+00:00
|
f52417ca77bd71e9888ddc29f92587660725d2b4
|
# Dataset Card for MGSM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language Models are Multilingual Chain-of-Thought Reasoners](http://arxiv.org/abs/2210.03057).
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) were each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
GSM8K (Grade School Math 8K) is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
You can find the inputs and targets for each of the ten languages (and English) as `.tsv` files.
We also include few-shot exemplars, manually translated into each language, in `exemplars.py`.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) were each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu
## Dataset Structure
### Data Instances
Each instance in the train split contains:
- a string for the grade-school-level math question
- a string for the corresponding answer with chain-of-thought steps
- the numeric solution to the question
- the equation solution to the question
```python
{'question': 'Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?',
'answer': 'Step-by-Step Answer: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.',
'answer_number': 11,
'equation_solution': '5 + 6 = 11.'}
```
Each instance in the test split contains:
- a string for the grade-school-level math question
- the numeric solution to the question
```python
{'question': "Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
'answer': None,
'answer_number': 18,
'equation_solution': None}
```
### Data Fields
The data fields are the same across the `train` and `test` splits.
- question: The question string to a grade school math problem.
- answer: The full solution string to the `question`. It contains multiple steps of reasoning and the final numeric solution (`None` in the test split, as shown above).
- answer_number: The numeric solution to the `question`.
- equation_solution: The equation solution to the `question` (`None` in the test split).
### Data Splits
- The train split includes the 8 few-shot exemplars, manually translated into each language.
- The test split includes the same 250 problems from GSM8K, translated by human annotators into 10 languages.
| name |train|test |
|--------|----:|---------:|
|en | 8 | 250 |
|es | 8 | 250 |
|fr | 8 | 250 |
|de | 8 | 250 |
|ru | 8 | 250 |
|zh | 8 | 250 |
|ja | 8 | 250 |
|th | 8 | 250 |
|sw | 8 | 250 |
|bn | 8 | 250 |
|te | 8 | 250 |
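Each language is exposed as its own config, so a single language can be loaded directly with 🤗 `datasets`. A minimal sketch (the config names are assumed to follow the language codes in the table above):
```python
from datasets import load_dataset

# Load the Spanish config; config names are assumed to match the codes above.
mgsm_es = load_dataset("juletxara/mgsm", "es")

print(len(mgsm_es["train"]))  # 8 few-shot exemplars
print(len(mgsm_es["test"]))   # 250 translated GSM8K problems
print(mgsm_es["test"][0]["question"])
```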
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Surge AI (surgehq.ai)
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The GSM8K dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
@misc{shi2022language,
title={Language Models are Multilingual Chain-of-Thought Reasoners},
author={Freda Shi and Mirac Suzgun and Markus Freitag and Xuezhi Wang and Suraj Srivats and Soroush Vosoughi and Hyung Won Chung and Yi Tay and Sebastian Ruder and Denny Zhou and Dipanjan Das and Jason Wei},
year={2022},
eprint={2210.03057},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
|
juletxara/mgsm
|
[
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|gsm8k",
"language:en",
"language:es",
"language:fr",
"language:de",
"language:ru",
"language:zh",
"language:ja",
"language:th",
"language:sw",
"language:bn",
"license:cc-by-sa-4.0",
"math-word-problems",
"arxiv:2110.14168",
"arxiv:2210.03057",
"region:us"
] |
2023-05-09T07:20:29+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found", "expert-generated"], "language": ["en", "es", "fr", "de", "ru", "zh", "ja", "th", "sw", "bn"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|gsm8k"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "multi-task-language-understanding-on-mgsm", "pretty_name": "Multilingual Grade School Math Benchmark (MGSM)", "tags": ["math-word-problems"], "dataset_info": [{"config_name": "en", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_number", "dtype": "int32"}, {"name": "equation_solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 8}, {"name": "test", "num_bytes": 713732, "num_examples": 250}], "download_size": 4915944, "dataset_size": 4676934}, {"config_name": "es", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_number", "dtype": "int32"}, {"name": "equation_solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3963202, "num_examples": 8}, {"name": "test", "num_bytes": 713732, "num_examples": 250}], "download_size": 4915944, "dataset_size": 4676934}]}
|
2023-05-09T15:46:31+00:00
|
99f4231279f163519499a7d278a602680ef7d7af
|
# Carl Johnson Voice Pack/Dataset
## Dataset Description
- Source: in-game audio from Audio/SFX and Audio/Streams, 5,657 clips in total
- Collector: Katock (蝈总 on Bilibili; same handle on other platforms)
- Purpose: AI training
- No reasonably complete dataset could be found anywhere online, so this one was collected by hand
|
Katock/carl_johnson_voice_pack
|
[
"license:bsd-3-clause-clear",
"music",
"region:us"
] |
2023-05-09T07:23:23+00:00
|
{"license": "bsd-3-clause-clear", "tags": ["music"]}
|
2023-05-13T14:07:45+00:00
|
27ad86b20b06fc060b4fc841cc6183ead8def0e5
|
# demo-data
This is a test dataset, used to get familiar with the main features of Hugging Face.
## Usage
@Copyright 2023 [email protected]
|
jianboy/demo-data
|
[
"license:cc0-1.0",
"region:us"
] |
2023-05-09T07:24:16+00:00
|
{"license": "cc0-1.0"}
|
2023-05-09T07:32:28+00:00
|
0b5e742b66768e46ebc39a6ece22c70c63d7d84e
|
spitfire4794/BanglaNMT
|
[
"task_categories:translation",
"size_categories:1M<n<10M",
"region:us"
] |
2023-05-09T07:28:58+00:00
|
{"size_categories": ["1M<n<10M"], "task_categories": ["translation"]}
|
2023-05-09T07:45:55+00:00
|
|
e6614e7efb8c4d8a978105d8b8b915cb1aff0e19
|
zhangfei2023/cccc
|
[
"license:openrail",
"region:us"
] |
2023-05-09T07:33:17+00:00
|
{"license": "openrail"}
|
2023-05-09T07:36:17+00:00
|
|
54a4bf1f06dd2669ac24815a139df1d75a617ee9
|
# Dataset Card for Food-101-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=food101-enriched)
- **GitHub:** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage:** [data.vision.ee.ethz.ch](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/)
- **Paper:** [Food-101 – Mining Discriminative Components with Random Forests](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf)
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=food101-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [Food101 Data Set](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
### Explore the Dataset

The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets
```
Load the dataset from huggingface in your notebook:
```python
import datasets
dataset = datasets.load_dataset("renumics/food101-enriched", split="train")
```
Start exploring with a simple view:
```python
from renumics import spotlight
df_show = dataset.to_pandas()
spotlight.show(df_show, port=8000, dtype={"image": spotlight.Image})
```
You can use the UI to interactively configure the view on the data. Depending on the concrete tasks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata.
### Food101 Dataset
This dataset contains 101,000 images from 101 food categories.
For each class, 250 manually reviewed test images are provided, as well as 750 training images.
The training images were deliberately not cleaned and thus still contain some amount of noise.
This comes mostly in the form of intense colors and sometimes wrong labels.
All images were rescaled to have a maximum side length of 512 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a dish into one of 101 classes. The leaderboard is available [here](https://paperswithcode.com/sota/fine-grained-image-classification-on-food-101).
### Languages
English class labels.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
"image": "/huggingface/datasets/downloads/extracted/49750366cbaf225ce1b5a5c033fa85ceddeee2e82f1d6e0365e8287859b4c7c8/0/0.jpg",
"label": 6,
"label_str": "beignets",
"split": "train"
}
```
<details>
<summary>Class Label Mappings</summary>
```json
{
"apple_pie": 0,
"baby_back_ribs": 1,
"baklava": 2,
"beef_carpaccio": 3,
"beef_tartare": 4,
"beet_salad": 5,
"beignets": 6,
"bibimbap": 7,
"bread_pudding": 8,
"breakfast_burrito": 9,
"bruschetta": 10,
"caesar_salad": 11,
"cannoli": 12,
"caprese_salad": 13,
"carrot_cake": 14,
"ceviche": 15,
"cheesecake": 16,
"cheese_plate": 17,
"chicken_curry": 18,
"chicken_quesadilla": 19,
"chicken_wings": 20,
"chocolate_cake": 21,
"chocolate_mousse": 22,
"churros": 23,
"clam_chowder": 24,
"club_sandwich": 25,
"crab_cakes": 26,
"creme_brulee": 27,
"croque_madame": 28,
"cup_cakes": 29,
"deviled_eggs": 30,
"donuts": 31,
"dumplings": 32,
"edamame": 33,
"eggs_benedict": 34,
"escargots": 35,
"falafel": 36,
"filet_mignon": 37,
"fish_and_chips": 38,
"foie_gras": 39,
"french_fries": 40,
"french_onion_soup": 41,
"french_toast": 42,
"fried_calamari": 43,
"fried_rice": 44,
"frozen_yogurt": 45,
"garlic_bread": 46,
"gnocchi": 47,
"greek_salad": 48,
"grilled_cheese_sandwich": 49,
"grilled_salmon": 50,
"guacamole": 51,
"gyoza": 52,
"hamburger": 53,
"hot_and_sour_soup": 54,
"hot_dog": 55,
"huevos_rancheros": 56,
"hummus": 57,
"ice_cream": 58,
"lasagna": 59,
"lobster_bisque": 60,
"lobster_roll_sandwich": 61,
"macaroni_and_cheese": 62,
"macarons": 63,
"miso_soup": 64,
"mussels": 65,
"nachos": 66,
"omelette": 67,
"onion_rings": 68,
"oysters": 69,
"pad_thai": 70,
"paella": 71,
"pancakes": 72,
"panna_cotta": 73,
"peking_duck": 74,
"pho": 75,
"pizza": 76,
"pork_chop": 77,
"poutine": 78,
"prime_rib": 79,
"pulled_pork_sandwich": 80,
"ramen": 81,
"ravioli": 82,
"red_velvet_cake": 83,
"risotto": 84,
"samosa": 85,
"sashimi": 86,
"scallops": 87,
"seaweed_salad": 88,
"shrimp_and_grits": 89,
"spaghetti_bolognese": 90,
"spaghetti_carbonara": 91,
"spring_rolls": 92,
"steak": 93,
"strawberry_shortcake": 94,
"sushi": 95,
"tacos": 96,
"takoyaki": 97,
"tiramisu": 98,
"tuna_tartare": 99,
"waffles": 100
}
```
</details>
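The integer `label` can be mapped to and from these class names via the dataset's `ClassLabel` feature. A minimal sketch, reusing the `dataset` object loaded in the Spotlight example above:
```python
# Translate between integer labels and class names with the ClassLabel feature
# (reuses `dataset` from the "Explore the Dataset" section above).
label_feature = dataset.features["label"]

print(label_feature.int2str(6))        # "beignets", matching the sample above
print(label_feature.str2int("pizza"))  # 76
```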
### Data Fields
| Feature | Data Type |
|---------------------------------|-----------------------------------------------|
| image | Image(decode=True, id=None) |
| split | Value(dtype='string', id=None) |
| label | ClassLabel(names=[...], id=None) |
| label_str | Value(dtype='string', id=None) |
### Data Splits
| Dataset Split | Number of Images in Split |
| ------------- |---------------------------|
| Train | 75750 |
| Test | 25250 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The Food-101 data set consists of images from Foodspotting [1] which are not the property of the Federal Institute of Technology Zurich (ETHZ). Any use beyond scientific fair use must be negotiated with the respective picture owners according to the Foodspotting terms of use [2].
[1] [http://www.foodspotting.com/](http://www.foodspotting.com/)
[2] [http://www.foodspotting.com/terms/](http://www.foodspotting.com/terms/)
### Citation Information
If you use this dataset, please cite the following paper:
```
@inproceedings{bossard14,
title = {Food-101 -- Mining Discriminative Components with Random Forests},
author = {Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc},
booktitle = {European Conference on Computer Vision},
year = {2014}
}
```
### Contributions
Lukas Bossard, Matthieu Guillaumin, Luc Van Gool, and Renumics GmbH.
|
renumics/food101-enriched
|
[
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"size_categories:100K<n<1M",
"source_datasets:extended|other-foodspotting",
"source_datasets:extended|food101",
"language:en",
"license:unknown",
"image classification",
"food-101",
"food-101-enriched",
"embeddings",
"enhanced",
"spotlight",
"region:us"
] |
2023-05-09T07:41:13+00:00
|
{"language": ["en"], "license": "unknown", "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other-foodspotting", "extended|food101"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "food-101", "pretty_name": "Food-101 Data Set", "tags": ["image classification", "food-101", "food-101-enriched", "embeddings", "enhanced", "spotlight"]}
|
2023-06-06T07:15:28+00:00
|
ad79fbfb71fa8d4f43eb564f3dfbdf60e3f30a26
|
# Dataset Card for "yaxis_chart_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aravind-selvam/yaxis_chart_data
|
[
"region:us"
] |
2023-05-09T07:46:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43306041.0, "num_examples": 4000}, {"name": "validation", "num_bytes": 9046607.0, "num_examples": 1000}], "download_size": 51862633, "dataset_size": 52352648.0}}
|
2023-05-09T07:46:19+00:00
|
f6ae2fb62990a0025ec5261c85aadb4253c15245
|
# Dataset Card for "empathetic_dialogues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lighteval/empathetic_dialogues
|
[
"region:us"
] |
2023-05-09T07:58:55+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "references", "sequence": "string"}, {"name": "subsplit", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7490124, "num_examples": 17647}, {"name": "validation", "num_bytes": 1256124, "num_examples": 2748}, {"name": "test", "num_bytes": 1241167, "num_examples": 2538}], "download_size": 5009784, "dataset_size": 9987415}}
|
2023-05-09T08:19:47+00:00
|
45e648c2fc051d31ea79c3b78bcf99158c9adb29
|
rakesh-ai/Medical_Transcription
|
[
"license:other",
"region:us"
] |
2023-05-09T08:11:13+00:00
|
{"license": "other"}
|
2023-05-09T09:08:30+00:00
|
|
de3dfde669d3c4adc1fe35223aed8b4e1f06a177
|
# Dataset Card for "oasst1_pairwise_rlhf_reward"
[OASST1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) preprocessed for reward modeling:
```python
import pandas as pd
from datasets import load_dataset, concatenate_datasets, Dataset, DatasetDict
import numpy as np

dataset = load_dataset("OpenAssistant/oasst1")
df = concatenate_datasets(list(dataset.values())).to_pandas()

# Lookup tables: message id -> text / role / parent id
m2t = df.set_index("message_id")["text"].to_dict()
m2r = df.set_index("message_id")["role"].to_dict()
m2p = df.set_index("message_id")["parent_id"].to_dict()

# message id -> unrolled conversation history (oldest message first)
m2history = dict()
for k in m2p:
    history = [k]
    while history[-1] in m2p:
        history += [m2p[history[-1]]]
    m2history[k] = "\n".join(f"{m2r[m]}: {m2t[m]}" for m in history[::-1] if m)

d = dict()
for split in "train", "validation":
    df = dataset[split].to_pandas()
    df["prompt"] = df.parent_id.map(lambda x: m2history.get(x, ""))
    df = df[~df["rank"].isna()]

    def agg(x):
        # keep two candidate replies per prompt: the first and last in group order
        x = list(x)
        return [x[0], x[-1]]

    df = df.groupby(["prompt", "parent_id", "lang"])[["text", "rank"]].agg(agg).reset_index()
    df = df[df["rank"].map(lambda x: len(set(x)) > 1)]  # drop pairs with tied ranks
    df["chosen"] = df.apply(lambda x: x["text"][np.argmin(x["rank"])], axis=1)
    df["rejected"] = df.apply(lambda x: x["text"][np.argmax(x["rank"])], axis=1)
    d[split] = Dataset.from_pandas(
        df[["lang", "parent_id", "prompt", "chosen", "rejected"]], preserve_index=False
    )
DatasetDict(d).push_to_hub("tasksource/oasst1_pairwise_rlhf_reward")
```
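Once pushed, the pairwise data can be pulled back directly; a minimal sketch:
```python
from datasets import load_dataset

# Load the preprocessed pairwise comparisons for reward-model training.
pairs = load_dataset("tasksource/oasst1_pairwise_rlhf_reward")
example = pairs["train"][0]
print(example["prompt"])
print("chosen:", example["chosen"][:80])
print("rejected:", example["rejected"][:80])
```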
|
tasksource/oasst1_pairwise_rlhf_reward
|
[
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",
"language:nl",
"language:hu",
"language:eu",
"language:zh",
"language:eo",
"language:ja",
"language:ca",
"language:cs",
"language:bg",
"language:fi",
"language:pt",
"language:tr",
"language:ro",
"language:ar",
"language:uk",
"language:gl",
"language:fr",
"language:ko",
"region:us"
] |
2023-05-09T08:16:01+00:00
|
{"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "dataset_info": {"features": [{"name": "lang", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40736437, "num_examples": 17966}, {"name": "validation", "num_bytes": 2152443, "num_examples": 952}], "download_size": 22371458, "dataset_size": 42888880}}
|
2023-07-04T16:47:46+00:00
|
004b0d3d198c0f69e7ebfc7c499e72b6ff60dd8d
|
cetacean/Thchs30
|
[
"license:openrail",
"region:us"
] |
2023-05-09T08:42:37+00:00
|
{"license": "openrail"}
|
2023-05-09T08:42:37+00:00
|
|
7f02dc1e5e72696ff671a0d39aee7f599db5c599
|
cetacean/tt
|
[
"license:openrail",
"region:us"
] |
2023-05-09T08:43:18+00:00
|
{"license": "openrail"}
|
2023-05-09T08:43:18+00:00
|
|
e6edd42e300a20e888b09f4fe67f55b98d32452c
|
cetacean/ttt
|
[
"license:unknown",
"region:us"
] |
2023-05-09T08:43:48+00:00
|
{"license": "unknown"}
|
2023-05-09T08:43:48+00:00
|
|
c5db9d7e2206a91763a6f6c2df7a719ec0efbb51
|
## Source
This repository contains 3 datasets created within the POPP project ([Project for the OCRisation of the Paris Population census](https://popp.hypotheses.org/#ancre2)) for the task of handwritten text recognition. These datasets have been published in [Recognition and information extraction in historical handwritten tables: toward understanding early 20th century Paris census at DAS 2022](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_10).
The 3 datasets are called “Generic dataset”, “Belleville”, and “Chaussée d’Antin” and contain lines made from the extracted rows of census tables from 1926. Each table in the Paris census contains 30 rows, so each page in these datasets corresponds to 30 lines.
We publish here only the lines; if you want the pages, go [here](https://zenodo.org/record/6581158). This dataset is made of 4,800 annotated lines extracted from 80 double pages of the 1926 Paris census.
## Data Info
Since the lines are extracted from table rows, we defined 4 special characters to describe the structure of the text:
- ¤ : indicates an empty cell
- / : indicates the separation into columns
- ? : indicates that the content of the cell following this symbol is written above the regular baseline
- ! : indicates that the content of the cell following this symbol is written below the regular baseline
There are three splits: train, valid and test.
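As an illustration of these conventions, a sketch of splitting a transcription into its cells (the example row is hypothetical, and this helper is not part of the dataset itself):
```python
def parse_row(text: str) -> list:
    """Split a transcription into table cells on '/' and map the empty-cell
    marker '¤' to None; the '?'/'!' baseline markers are left in place."""
    cells = [cell.strip() for cell in text.split("/")]
    return [None if cell == "¤" else cell for cell in cells]

# Hypothetical row with an empty second column:
print(parse_row("Martin / ¤ / 1898 / Paris"))
# -> ['Martin', None, '1898', 'Paris']
```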
## How to use it
```python
from datasets import load_dataset
import numpy as np
dataset = load_dataset("agomberto/FrenchCensus-handwritten-texts")
i = np.random.randint(len(dataset['train']))
img = dataset['train']['image'][i]
text = dataset['train']['text'][i]
print(text)
img
```
## BibTeX entry and citation info
```bibtex
@InProceedings{10.1007/978-3-031-06555-2_10,
author="Constum, Thomas
and Kempf, Nicolas
and Paquet, Thierry
and Tranouez, Pierrick
and Chatelain, Cl{\'e}ment
and Br{\'e}e, Sandra
and Merveille, Fran{\c{c}}ois",
editor="Uchida, Seiichi
and Barney, Elisa
and Eglin, V{\'e}ronique",
title="Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early {\$}{\$}20^{\{}th{\}}{\$}{\$}Century Paris Census",
booktitle="Document Analysis Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="143--157",
abstract="We aim to build a vast database (up to 9 million individuals) from the handwritten tabular nominal census of Paris of 1926, 1931 and 1936, each composed of about 100,000 handwritten simple pages in a tabular format. We created a complete pipeline that goes from the scan of double pages to text prediction while minimizing the need for segmentation labels. We describe how weighted finite state transducers, writer specialization and self-training further improved our results. We also introduce through this communication two annotated datasets for handwriting recognition that are now publicly available, and an open-source toolkit to apply WFST on CTC lattices.",
isbn="978-3-031-06555-2"
}
```
|
agomberto/FrenchCensus-handwritten-texts
|
[
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:fr",
"license:mit",
"imate-to-text",
"trocr",
"region:us"
] |
2023-05-09T10:21:00+00:00
|
{"language": ["fr"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-text"], "tags": ["imate-to-text", "trocr"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 501750699.816, "num_examples": 5601}, {"name": "validation", "num_bytes": 45084242.0, "num_examples": 707}, {"name": "test", "num_bytes": 49133043.0, "num_examples": 734}], "download_size": 459795745, "dataset_size": 595967984.816}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-11-28T17:35:18+00:00
|
df407f87f0b0b9e94efc499f74aa09e453a8166d
|
# Dataset Card for "dft23-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bio-datasets/dft23-full
|
[
"region:us"
] |
2023-05-09T11:29:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_a", "dtype": "string"}, {"name": "answer_b", "dtype": "string"}, {"name": "answer_c", "dtype": "string"}, {"name": "answer_d", "dtype": "string"}, {"name": "answer_e", "dtype": "string"}, {"name": "correct_answers", "sequence": {"class_label": {"names": {"0": "a", "1": "b", "2": "c", "3": "d", "4": "e"}}}}, {"name": "subject_name", "dtype": "string"}, {"name": "number_correct_answers", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "splits": [{"name": "train", "num_bytes": 1004721, "num_examples": 2171}, {"name": "validation", "num_bytes": 136786, "num_examples": 312}, {"name": "test", "num_bytes": 284765, "num_examples": 622}], "download_size": 894075, "dataset_size": 1426272}}
|
2023-05-09T14:38:16+00:00
|
3050b3549eb47b82dbc9fdb4dff76954f51e2b34
|
# Dataset Card for "disinformation_wedging"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lighteval/disinformation_wedging
|
[
"region:us"
] |
2023-05-09T11:41:58+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "references", "sequence": "null"}, {"name": "none", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 7406, "num_examples": 11}], "download_size": 9583, "dataset_size": 7406}}
|
2023-05-09T11:42:01+00:00
|
fd3ca8273e6afabb7541cc968f572c2b0245d0e0
|
TrainThenObtain-ai/Jarvis-tiny
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-05-09T11:59:01+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-05-09T11:59:01+00:00
|
|
7886d2bcd79ad47aebb6ef39ac71e1fbacf1fbdb
|
davanstrien/amazonian_fish_classifier_data
|
[
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"biology",
"lam",
"region:us"
] |
2023-05-09T11:59:24+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "cc", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Ancistrus", "1": "Apistogramma", "2": "Astyanax", "3": "Bario", "4": "Bryconops", "5": "Bujurquina", "6": "Bunocephalus", "7": "Characidium", "8": "Charax", "9": "Copella", "10": "Corydoras", "11": "Creagrutus", "12": "Curimata", "13": "Doras", "14": "Erythrinus", "15": "Gasteropelecus", "16": "Gymnotus", "17": "Hemigrammus", "18": "Hyphessobrycon", "19": "Knodus", "20": "Moenkhausia", "21": "Otocinclus", "22": "Oxyropsis", "23": "Phenacogaster", "24": "Pimelodella", "25": "Prochilodus", "26": "Pygocentrus", "27": "Pyrrhulina", "28": "Rineloricaria", "29": "Sorubim", "30": "Tatia", "31": "Tetragonopterus", "32": "Tyttocharax"}}}}], "splits": [{"name": "train", "num_bytes": 1068363405, "num_examples": 3068}], "download_size": 330399200, "dataset_size": 1068363405}, "tags": ["biology", "lam"]}
|
2023-05-09T13:56:52+00:00
|
|
63953169afb3da96856e0e30563c17a1da65a5bf
|
TrainThenObtain-ai/jarvis
|
[
"license:openrail",
"region:us"
] |
2023-05-09T12:00:02+00:00
|
{"license": "openrail"}
|
2023-05-09T12:00:02+00:00
|
|
b0e2619c648b00660835f3fa810413ede27df2c5
|
yao123/test
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-09T12:04:45+00:00
|
{"license": "apache-2.0"}
|
2023-05-09T12:04:45+00:00
|
|
8a6f9579fb0bf86f1f27a1993473b0315fcd273b
|
yao123/cloth_for_self333
|
[
"license:other",
"region:us"
] |
2023-05-09T12:05:59+00:00
|
{"license": "other"}
|
2023-05-09T13:32:45+00:00
|
|
c868e2985f487aba6b985fd1567dd66648d5b42a
|
fraug-library/thesaurus
|
[
"region:us"
] |
2023-05-09T12:13:35+00:00
|
{"configs": [{"config_name": "ara", "data_files": "thesaurus_ara.csv"}, {"config_name": "cat", "data_files": "thesaurus_cat.csv"}, {"config_name": "ces", "data_files": "thesaurus_ces.csv"}, {"config_name": "dan", "data_files": "thesaurus_dan.csv"}, {"config_name": "deu", "data_files": "thesaurus_deu.csv"}, {"config_name": "ell", "data_files": "thesaurus_ell.csv"}, {"config_name": "eng_AU", "data_files": "thesaurus_eng_AU.csv"}, {"config_name": "eng_GB", "data_files": "thesaurus_eng_GB.csv"}, {"config_name": "eng_US", "data_files": "thesaurus_eng_US.csv"}, {"config_name": "fra", "data_files": "thesaurus_fra.csv"}, {"config_name": "gle", "data_files": "thesaurus_gle.csv"}, {"config_name": "glg", "data_files": "thesaurus_glg.csv"}, {"config_name": "gsw", "data_files": "thesaurus_gsw.csv"}, {"config_name": "hun", "data_files": "thesaurus_hun.csv"}, {"config_name": "isl", "data_files": "thesaurus_isl.csv"}, {"config_name": "ita", "data_files": "thesaurus_ita.csv"}, {"config_name": "nno", "data_files": "thesaurus_nno.csv"}, {"config_name": "nob", "data_files": "thesaurus_nob.csv"}, {"config_name": "pol", "data_files": "thesaurus_pol.csv"}, {"config_name": "por", "data_files": "thesaurus_por.csv"}, {"config_name": "ron", "data_files": "thesaurus_ron.csv"}, {"config_name": "rus", "data_files": "thesaurus_rus.csv"}, {"config_name": "sin", "data_files": "thesaurus_sin.csv"}, {"config_name": "slk", "data_files": "thesaurus_slk.csv"}, {"config_name": "spa", "data_files": "thesaurus_spa.csv"}, {"config_name": "swe", "data_files": "thesaurus_swe.csv"}, {"config_name": "ukr", "data_files": "thesaurus_ukr.csv"}]}
|
2023-11-12T21:15:40+00:00
|
|
e105a46dd74c192a7ead6f439e08e0d1a2a629de
|
fraug-library/keyboards
|
[
"region:us"
] |
2023-05-09T12:14:00+00:00
|
{"configs": [{"config_name": "deu", "data_files": "keyboard_deu.txt"}, {"config_name": "eng", "data_files": "keyboard_eng.txt"}, {"config_name": "fra", "data_files": "keyboard_fra.txt"}, {"config_name": "heb", "data_files": "keyboard_heb.txt"}, {"config_name": "ita", "data_files": "keyboard_ita.txt"}, {"config_name": "nld", "data_files": "keyboard_nld.txt"}, {"config_name": "pol", "data_files": "keyboard_pol.txt"}, {"config_name": "spa", "data_files": "keyboard_spa.txt"}, {"config_name": "tha", "data_files": "keyboard_tha.txt"}, {"config_name": "tur", "data_files": "keyboard_tur.txt"}, {"config_name": "ukr", "data_files": "keyboard_ukr.txt"}]}
|
2023-11-12T20:58:52+00:00
|
|
c3e0c0f1b84ef2ad8b89073b977f3b340de00c18
|
ESLO audio dataset
configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw
License: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125,
0.02206421, 0.02304077]),
'path': None,
'sampling_rate': 16000},
'end_timestamp': 8.939,
'file': 'ESLO1_INTPERS_437',
'overlap': False,
'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
'entreprise ici ? exactement',
'speaker': 'spk1',
'start_timestamp': 0.954}
```
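A minimal loading sketch (the config and split names here are assumed from the list above; adjust to what the repository actually exposes):
```python
from datasets import load_dataset

# Assumed config ("no_overlap") and split ("train") names; audio decoding
# requires the ffmpeg dependencies listed above.
eslo = load_dataset("illuin/ESLO", "no_overlap")
sample = eslo["train"][0]
print(sample["speaker"], sample["sentence"])
```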
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I. (2012), Un grand corpus oral « disponible » : le corpus d'Orléans 1968-2012, in Ressources linguistiques libres, TAL, Volume 52, n° 3/2011, 17-46
Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1.
|
illuin/ESLO
|
[
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-05-09T13:02:52+00:00
|
{"language": ["fr"], "license": "cc-by-nc-4.0", "task_categories": ["automatic-speech-recognition"]}
|
2023-05-15T14:21:41+00:00
|
18901c8e937d3f88674efdf398375dc2b919baac
|
# Dataset Card for dynamically generated hate speech dataset
## Dataset Description
- **Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
This is a copy of the Dynamically-Generated-Hate-Speech-Dataset, presented in [this paper](https://arxiv.org/abs/2012.15761) by
- **Bertie Vidgen**, **Tristan Thrush**, **Zeerak Waseem** and **Douwe Kiela**
## Original README from [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset/blob/main/README.md)
## Dynamically-Generated-Hate-Speech-Dataset
ReadMe for v0.2 of the Dynamically Generated Hate Speech Dataset from Vidgen et al. (2021). If you use the dataset, please cite our paper, published in the Proceedings of ACL 2021 and available on [Arxiv](https://arxiv.org/abs/2012.15761).
Contact Dr. Bertie Vidgen if you have feedback or queries: [email protected].
The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research). This paper is an output of the Dynabench project: https://dynabench.org/tasks/5#overall
### Dataset descriptions
v0.2.2.csv is the full dataset used in our ACL paper.
v0.2.3.csv removes duplicate entries, all of which occurred in round 1. Duplicates come from two sources: (1) annotators entering the same content multiple times and (2) different annotators entering the same content. The duplicates are interesting for understanding the annotation process, and the challenges of dynamically generating datasets. However, they are likely to be less useful for training classifiers and so are removed in v0.2.3. We did not lower case the text before removing duplicates as capitalisations contain potentially useful signals.
### Overview
The Dynamically Generated Hate Speech Dataset is provided in one table.
'acl.id' is the unique ID of the entry.
'Text' is the content which has been entered. All content is synthetic.
'Label' is a binary variable, indicating whether or not the content has been identified as hateful. It takes two values: hate, nothate.
'Type' is a categorical variable, providing a secondary label for hateful content. For hate it can take five values: Animosity, Derogation, Dehumanization, Threatening and Support for Hateful Entities. Please see the paper for more detail. For nothate the 'type' is 'none'. In round 1 the 'type' was not given and is marked as 'notgiven'.
'Target' is a categorical variable, providing the group that is attacked by the hate. It can include intersectional characteristics and multiple groups can be identified. For nothate the type is 'none'. Note that in round 1 the 'target' was not given and is marked as 'notgiven'.
'Level' reports whether the entry is original content or a perturbation.
'Round' is a categorical variable. It gives the round of data entry (1, 2, 3 or 4) with a letter for whether the entry is original content ('a') or a perturbation ('b'). Perturbations were not made for round 1.
'Round.base' is a categorical variable. It gives the round of data entry, indicated with just a number (1, 2, 3 or 4).
'Split' is a categorical variable. It gives the data split that the entry has been assigned to. This can take the values 'train', 'dev' and 'test'. The choice of splits is explained in the paper.
'Annotator' is a categorical variable. It gives the annotator who entered the content. Annotator IDs are random alphanumeric strings. There are 20 annotators in the dataset.
'acl.id.matched' is the ID of the matched entry, connecting the original (given in 'acl.id') and the perturbed version.
For identities (recorded under 'Target') we use shorthand labels to construct the dataset, which can be converted (and grouped) as follows (a partial Python mapping is sketched after this list):
none -> for non hateful entries
NoTargetRecorded -> for hateful entries with no target recorded
mixed -> Mixed race background
ethnic minority -> Ethnic Minorities
indig -> Indigenous people
indigwom -> Indigenous Women
non-white -> Non-whites (attacked as 'non-whites', rather than specific non-white groups which are generally addressed separately)
trav -> Travellers (including Roma, gypsies)
bla -> Black people
blawom -> Black women
blaman -> Black men
african -> African (all 'African' attacks will also be an attack against Black people)
jew -> Jewish people
mus -> Muslims
muswom -> Muslim women
wom -> Women
trans -> Trans people
gendermin -> Gender minorities
bis -> Bisexual
gay -> Gay people (both men and women)
gayman -> Gay men
gaywom -> Lesbians
dis -> People with disabilities
working -> Working class people
old -> Elderly people
asi -> Asians
asiwom -> Asian women
east -> East Asians
south -> South Asians (e.g. Indians)
chinese -> Chinese people
pak -> Pakistanis
arab -> Arabs, including people from the Middle East
immig -> Immigrants
asylum -> Asylum seekers
ref -> Refugees
for -> Foreigners
eastern european -> Eastern Europeans
russian -> Russian people
pol -> Polish people
hispanic -> Hispanic people, including latinx and Mexicans
nazi -> Nazis ('Support' type of hate)
hitler -> Hitler ('Support' type of hate)
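A partial conversion map in Python might look like this (a sketch covering only a few of the labels above; extend with the remaining labels as needed):
```python
# Hypothetical partial mapping from shorthand target labels to readable groups.
TARGET_LABELS = {
    "none": "non hateful entries",
    "bla": "Black people",
    "jew": "Jewish people",
    "mus": "Muslims",
    "wom": "Women",
    "immig": "Immigrants",
    "ref": "Refugees",
}

def expand_target(shorthand: str) -> str:
    # Fall back to the raw shorthand if a label is not in the map.
    return TARGET_LABELS.get(shorthand, shorthand)
```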
### Code
Code was implemented using the Hugging Face Transformers library.
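A minimal preprocessing sketch (assumptions: `v0.2.3.csv` is available locally, the CSV headers are `text`, `label` and `split`, and the tokenizer choice is illustrative; adjust to the actual file):
```python
import pandas as pd
from transformers import AutoTokenizer

# Sketch only: column names below are assumptions, not confirmed headers.
df = pd.read_csv("v0.2.3.csv")
train = df[df["split"] == "train"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer(train["text"].tolist(), truncation=True, padding=True)
labels = (train["label"] == "hate").astype(int).tolist()  # hate=1, nothate=0
```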
## Additional Information
### Licensing Information
The original repository does not provide any license, but it is free for use with proper citation of the original paper in the Proceedings of ACL 2021, available on [Arxiv](https://arxiv.org/abs/2012.15761).
### Citation Information
cite as [arXiv:2012.15761](https://arxiv.org/abs/2012.15761)
or [https://doi.org/10.48550/arXiv.2012.15761](https://doi.org/10.48550/arXiv.2012.15761)
|
LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"not-for-all-audiences",
"legal",
"arxiv:2012.15761",
"region:us"
] |
2023-05-09T13:04:29+00:00
|
{"language": ["en"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "dynamically generated hate speech dataset", "tags": ["not-for-all-audiences", "legal"]}
|
2023-05-16T15:01:46+00:00
|
2a2ad8bd5bb2dfcb4121e913da4832a1d5f86d61
|
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lazarustda/amazon-shoe-reviews
|
[
"region:us"
] |
2023-05-09T13:15:11+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}, {"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}], "download_size": 11141108, "dataset_size": 18719628.0}}
|
2023-05-09T13:18:01+00:00
|
ac4f9ccfd5b0a3f5d27f521daef7714abe0e890f
|
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mse5357/amazon-shoe-reviews
|
[
"region:us"
] |
2023-05-09T13:18:08+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}, {"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}], "download_size": 11141108, "dataset_size": 18719628.0}}
|
2023-05-09T13:18:26+00:00
|
73c99eb3a5d48c1f22c15adc45bd68038af8a866
|
# Dataset Card for "CsFEVERv2"
## Dataset Description
CsFEVERv2 is a dataset for Czech fact-checking developed as part of a bachelor thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of
the Czech Technical University in Prague. The dataset consists of an **original** subset, which is an iteration of CsFEVER with new data and better processing, and
the **f1**, **precision**, and **07** subsets filtered using an NLI model and optimized threshold values. The **wiki_pages** subset is a processed Wikipedia dump from
August 2022 with correct revids; it should be used to map evidence from the datasets to Wikipedia texts. Additionally, preprocessed subsets **original_nli**, **f1_nli**, **precision_nli**, and **07_nli**
for training NLI models are included.
The original subset can be used to generate other filtered datasets by filtering with other thresholds using the `predicted_label` and `predicted_score` fields, as in the sketch below.
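For example (a sketch; the 0.9 threshold is an arbitrary illustration):
```python
from datasets import load_dataset

# Keep only examples whose NLI confidence passes a custom threshold.
dataset = load_dataset("ctu-aic/csfever_v2", "original")
filtered = dataset["train"].filter(lambda ex: ex["predicted_score"] >= 0.9)
```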
### Languages
Czech
## Dataset Usage Example
```python
from datasets import load_dataset
#load default (original) subset
dataset = load_dataset("ctu-aic/csfever_v2")
dataset = load_dataset("ctu-aic/csfever_v2", "original")
#load f1, f1_nli, precision, precision_nli, 07, and 07_nli subsets
dataset = load_dataset("ctu-aic/csfever_v2", "f1")
#load wiki_pages subset
dataset = load_dataset("ctu-aic/csfever_v2", "wiki_pages")
```
## Dataset Structure
### Data Instances
#### original
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'predicted_label': 'SUPPORTS',
'predicted_score': 0.921731,
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### f1, precision, 07
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### original_nli, f1_nli, precision_nli, 07_nli
An example of 'train' looks as follows.
```json
{'id': 155439,
'label': 2,
'claim': 'Newcastle United FC vyhrál pět ligových titulů.',
'evidence': "Ronnie Simpson. Ronnie Simpson (21. října 1930, Glasgow – 19. dubna 2004, Edinburgh) byl skotský fotbalový brankář..."}
```
#### wiki_pages
An example of 'wiki_pages' looks as follows.
```json
{'id': 80916,
'revid': 20561555,
'url': "https://cs.wikipedia.org/wiki?curid=80916",
'title': "Altruismus",
'text': "Altruismus (z lat. "alter", druhý, 3. pád "altrui", druhému) je moderní ..."}
```
### Data Fields
#### original
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `predicted_label`: a `string` feature. (label predicted by the NLI model)
- `predicted_score`: a `float32` feature. (confidence of `predicted_label` predicted by the NLI model)
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### f1, precision, 07
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### original_nli, f1_nli, precision_nli, 07_nli
- `id`: an `int32` feature.
- `label`: an `int32` feature.
- `claim`: a `string` feature.
- `evidence`: a `string` feature.
#### wiki_pages
- `id`: an `int32` feature.
- `revid`: an `int32` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
#### original
| | train | dev | test |
|----------|-------:|-----:|------:|
| original | 118950 | 7458 | 7520 |
#### f1
| | train | dev | test |
|----|------:|-----:|-----:|
| f1 | 83438 | 5445 | 5328 |
#### precision
| | train | dev | test |
|-----------|-------:|-----:|------:|
| precision | 60828 | 4288 | 4236 |
#### 07
| | train | dev | test |
|----|-------:|-----:|------:|
| 07 | 108607 | 6685 | 6623 |
#### wiki_pages
| | wiki_pages |
|------------|-----------:|
| wiki_pages | 825078 |
# Citation
```bibtex
@article{Ullrich_2023,
doi = {10.1007/s10579-023-09654-3},
url = {https://doi.org/10.1007%2Fs10579-023-09654-3},
year = 2023,
month = {may},
publisher = {Springer Science and Business Media {LLC}},
author = {Herbert Ullrich and Jan Drchal and Martin Rýpar and Hana Vincourová and Václav Moravec},
title = {{CsFEVER} and {CTKFacts}: acquiring Czech data for fact verification},
journal = {Language Resources and Evaluation},
archivePrefix={arXiv},
eprint={2201.11115},
}
```
```bibtex
@thesis{Mlynar_2023,
author = {Mlynář, Tomáš},
type = {Bachelor's Thesis},
title = {Automated Fact Checking Based on Czech Wikipedia},
institution = {Czech Technical University in Prague, Faculty of Electrical Engineering},
date = {2023},
url = {http://hdl.handle.net/10467/109219}
}
```
|
ctu-aic/csfever_v2
|
[
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:natural-language-inference",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:fever",
"language:cs",
"license:cc-by-sa-3.0",
"Fact-checking",
"arxiv:2201.11115",
"region:us"
] |
2023-05-09T13:19:36+00:00
|
{"language": ["cs"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "size_categories": ["100K<n<1M"], "source_datasets": "fever", "task_categories": ["text-classification", "text-retrieval"], "task_ids": ["natural-language-inference", "document-retrieval"], "pretty_name": "CsFEVERv2", "tags": ["Fact-checking"]}
|
2023-07-27T07:52:58+00:00
|
c3601d16ba8e826864a0f435272a15bc93f566ab
|
# Dataset Card for "Buy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lighteval/Buy
|
[
"region:us"
] |
2023-05-09T13:26:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "gold", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80363, "num_examples": 469}, {"name": "valid", "num_bytes": 99497, "num_examples": 586}, {"name": "test", "num_bytes": 110198, "num_examples": 651}], "download_size": 115246, "dataset_size": 290058}}
|
2023-05-09T13:26:19+00:00
|
1d178c1fdc537a13eef77f17f4d8211e0c0ce806
|
# Dataset Card for "VQAv2_test_left"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_test_left
|
[
"region:us"
] |
2023-05-09T13:29:18+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_wo_openai", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_wo_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_bigG_14_2B_wo_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_bigG_14_2B_with_openai", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "id", "dtype": "int64"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_module", "sequence": "string"}, {"name": "captions_module_filter", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 1745506526.0, "num_examples": 8403}], "download_size": 1544063705, "dataset_size": 1745506526.0}}
|
2023-05-12T07:24:31+00:00
|
e0fa6cea00a7cc270a6061a95e47581ae4ddb013
|
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sumanlama2000/amazon-shoe-reviews
|
[
"region:us"
] |
2023-05-09T13:29:48+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}, {"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}], "download_size": 11141108, "dataset_size": 18719628.0}}
|
2023-05-09T13:30:01+00:00
|
6f64a211589b6d95caf16cc9ac7075e321852f7f
|
# Dataset Card for "Restaurant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lighteval/Restaurant
|
[
"region:us"
] |
2023-05-09T13:30:40+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "gold", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68968, "num_examples": 622}, {"name": "valid", "num_bytes": 86314, "num_examples": 778}, {"name": "test", "num_bytes": 95609, "num_examples": 864}], "download_size": 102295, "dataset_size": 250891}}
|
2023-05-09T13:30:47+00:00
|
6a47c013b92ba7f61dbe87054e0f5f8ec43b8970
|
# Summary
This is a Thai 🇹🇭-instructed dataset translated from `databricks-dolly-15k` using Google Cloud Translation.
`databricks-dolly-15k` is an open-source dataset of instruction-following records generated by thousands of Databricks employees in several behavioral
categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/databricks-dolly-15k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] |
2023-05-09T14:13:01+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T15:15:52+00:00
|
79be83e253b4bb1872ec22b8b592b8b1f9a24b6c
|
fraug-library/english_contractions_extensions
|
[
"region:us"
] |
2023-05-09T14:33:53+00:00
|
{"configs": [{"config_name": "contractions", "data_files": "df_contractions.csv", "sep": ";"}, {"config_name": "extensions", "data_files": "df_extensions.csv", "sep": ";"}]}
|
2023-11-12T20:52:30+00:00
|
|
2761b9bfad879f9e21c51e3fdbaa6993b4f1c8bf
|
# Summary
This is a Thai 🇹🇭-instructed dataset translated from a cleaned version of the original Alpaca Dataset released by Stanford, using Google Cloud Translation. It contains 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine.
This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset had instructions referencing data on the internet, which just caused GPT3 to hallucinate an answer.
2. **Merged Instructions:** There were many instructions that were merged together in the original dataset for some reason.
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something obviously not possible.
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
9. **Non-Sensical/Unclear instructions:** Many instructions were unclear; we clarified (or re-wrote) instructions that were nonsensical. Instructions that were slightly unclear, but whose meaning could be deduced, were not altered.
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/alpaca-cleaned-52k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] |
2023-05-09T14:45:46+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T15:18:02+00:00
|
2a8d4eca2e91f725cc2cf34c5a4fcb3fdb1f173c
|
# Dataset Card for "wikisql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
htriedman/wikisql
|
[
"language:en",
"region:us"
] |
2023-05-09T14:52:41+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2822323, "num_examples": 15878}, {"name": "validation", "num_bytes": 1491147, "num_examples": 8421}, {"name": "train", "num_bytes": 9989066, "num_examples": 56355}], "download_size": 5107706, "dataset_size": 14302536}}
|
2023-07-12T16:44:58+00:00
|
c29c66cf42be81edd498097bfbfc05fddffc44e0
|
# Burke Training Data Set
This is a set of blog posts that I've written over the years in .txt format.
|
burkeholland/burke
|
[
"region:us"
] |
2023-05-09T14:55:15+00:00
|
{}
|
2023-05-09T15:01:03+00:00
|
45203d74241a2fd200c0c65a023580fa0010f6bc
|
This is my first dataset, made from 80k VGM MIDI tracks found on archive.org.
|
yankscally/midiset
|
[
"license:unknown",
"region:us"
] |
2023-05-09T15:15:33+00:00
|
{"license": "unknown"}
|
2023-05-09T15:30:09+00:00
|
70e3f0125a2415764c7bae7a47d01426fac3e7e9
|
# Dataset Card for "oasst_hh_shp_hellaswag_webgpt_rm_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pvduy/oasst_hh_shp_hellaswag_webgpt_rm_dataset
|
[
"region:us"
] |
2023-05-09T15:35:16+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "replies", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 395107894, "num_examples": 264534}, {"name": "test", "num_bytes": 5742299, "num_examples": 2874}], "download_size": 225578359, "dataset_size": 400850193}}
|
2023-05-09T15:35:45+00:00
|
d7d299ddffa7eae0cdab8e1a3c093536b1018d87
|
# Summary
This is a 🇹🇭 Thai-instructed dataset translated from [InstructionWild](https://github.com/XueFuzhao/InstructionWild) using Google Cloud Translation.
It contains 52,191 English and 51,504 Chinese instructions collected from Twitter, where users tend to share their interesting prompts of mostly generation, open QA, and brainstorming types.
The collection was also used by [Colossal AI](https://github.com/hpcaitech/ColossalAI) to train the ColossalChat model.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/instruction-wild-52k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] |
2023-05-09T15:53:22+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T18:05:42+00:00
|
511f859c7ca9552b099245f537bc5f82c5464977
|
# Dataset Card for "dataset_test2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ThraggBilly/dataset_test2
|
[
"region:us"
] |
2023-05-09T15:56:21+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 352888.0, "num_examples": 10}], "download_size": 347825, "dataset_size": 352888.0}}
|
2023-05-09T15:56:26+00:00
|
f0d005ade9257248e8eea0891cd031187fdc0826
|
# Dataset Card for "test_dataset3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ThraggBilly/flickr30k_dataset
|
[
"region:us"
] |
2023-05-09T16:26:42+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4178820473.876, "num_examples": 31783}], "download_size": 4402850196, "dataset_size": 4178820473.876}}
|
2023-05-09T16:31:11+00:00
|
d378341a06b566b4de10d68e340f5e558039a2f9
|
# Dataset Card for "buryat-russian_parallel_corpus"
The dataset consists of 38260 pairs in Russian and Buryat languages. Of these, 19411 pairs of sentences and 20058 pairs of words. <br/>
Source stats: <br/>
<br/>
bible 7519 <br/>
books 5250 <br/>
tatoeba 807 <br/>
poems 471 <br/>
poems Nimbuev 1210 <br/>
dictionary 20058 <br/>
wikipedia 1882 <br/>
laws 1063 <br/>
<br/>
<br/>
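A minimal loading sketch (the `bxr`, `ru`, and `corpus` column names follow the dataset's metadata):
```python
from datasets import load_dataset

ds = load_dataset("SaranaAbidueva/buryat-russian_parallel_corpus", split="train")
print(ds[0]["ru"], "->", ds[0]["bxr"])  # one Russian-Buryat pair
```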
@inproceedings{abidueva2023buryat,<br/>
title={Buryat-Russian parallel corpus},<br/>
author={Sarana Abidueva and Dari Baturova},<br/>
year={2023}<br/>
}
|
SaranaAbidueva/buryat-russian_parallel_corpus
|
[
"language:ru",
"license:cc-by-4.0",
"region:us"
] |
2023-05-09T16:33:45+00:00
|
{"language": ["ru"], "license": "cc-by-4.0", "dataset_info": {"features": [{"name": "bxr", "dtype": "string"}, {"name": "ru", "dtype": "string"}, {"name": "corpus", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8989074, "num_examples": 38260}], "download_size": 4394110, "dataset_size": 8989074}}
|
2023-05-14T11:15:39+00:00
|
a8861698d08834141139d34540d3810baef38a4d
|
# Summary
This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [GPTeacher](https://github.com/teknium1/GPTeacher), a collection of modular datasets generated by GPT-4 (General-Instruct & Roleplay-Instruct)
comprising around 20,000 deduplicated examples. GPT-4 was prompted to include reasoning and thought steps in the example responses where appropriate.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/gpteacher-20k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] |
2023-05-09T16:34:31+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T16:54:22+00:00
|
2e3b232b4e73f7f9fdd2d2dbd04132223c652fb9
|
### Dataset Summary
A bunch of datasets preprocessed and formatted following https://github.com/openai/openai-python/blob/main/chatml.md (with the addition of a context message to help RWKV, which has no lookback).
The dataset makes use of two additional special tokens (`<|im_start|>` and `<|im_end|>`). You will need to use the supplied 20b_tokeniser file for both training and inference.
### Languages
English mainly, might be a few bits of other languages.
### Things to do
1. Improve system prompt effect on output.
2. Get more reasoning data.
3. More data!
### Format
Example below
```
<|im_start|>system
You are a teacher.<|im_end|>
<|im_start|>user
Given this paragraph about Dartmouth College traditions, which homecoming-related traditions are illegal?<|im_end|>
<|im_start|>context
Dartmouth Night starts the college's traditional \"Homecoming\" weekend with an evening of speeches, a parade, and a bonfire. Traditionally, the freshman class builds the bonfire and then runs around it a set number of times in concordance with their class year; the class of 2009 performed 109 circuits, the class of 1999 performed 99, etc. The College officially discourages a number of student traditions of varying degrees of antiquity. During the circling of the bonfire, upperclassmen encourage the freshmen to \"touch the fire\", an action legally considered trespassing and prohibited by police officials present. At halftime of the Homecoming football game on the Saturday of the weekend, some upperclassmen encourage freshman to \"rush the field\", although no upperclassman has seen a significant rush since several injuries sustained during the 1986 rush prompted the school to ban the practice. Among the two or three students who sometimes run across the field, those who are arrested are charged with trespassing (the independent newspaper The Dartmouth Review claimed to set up a fund to automatically pay any fines associated with freshman who rush the field.) However, in 2012 this was proven false when two students rushed the field and were arrested for disorderly conduct. The Dartmouth Review ignored their emails until finally replying and denying that this fund had ever existed. These students then had to pay $300 fines out of pocket. For the 2011 Homecoming game, however, over 40 members of the Class of 2015 rushed the field at homecoming without any action taken by Safety and Security or the Hanover Police Department.<|im_end|>
<|im_start|>assistant
Touching the bonfire, and rushing the football field during halftime of the homecoming game<|im_end|>
```
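For reference, one record in this format could be assembled with a small helper like the following (a sketch; the role names and token strings are taken from the example above):
```python
def to_chatml(system: str, user: str, context: str, assistant: str) -> str:
    """Build one training example in the ChatML-style format used here,
    including the extra 'context' role added to help RWKV."""
    turns = [
        ("system", system),
        ("user", user),
        ("context", context),
        ("assistant", assistant),
    ]
    return "\n".join(
        f"<|im_start|>{role}\n{text}<|im_end|>" for role, text in turns
    )
```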
|
m8than/raccoon_instruct_mini
|
[
"region:us"
] |
2023-05-09T17:23:35+00:00
|
{}
|
2023-05-09T22:25:31+00:00
|
33372139b09ff52defe905c9daed480ad77395a0
|
# Summary
This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3)
(**24K** examples in total: 17K reddit_eli5, 4K finance, 1.2K medicine, 1.2K open_qa, and 0.8K wiki_csai)
HC3 is the first human-ChatGPT comparison corpus, introduced in this paper:
- [How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on GitHub:
- GitHub: [Chatgpt-Comparison-Detection project 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/hc3-24k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"arxiv:2301.07597",
"region:us"
] |
2023-05-09T17:38:41+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T18:23:15+00:00
|
7f2941cca2e8c974806ee3a5a0716df911306350
|
# Summary
🇹🇭 Thai-instructed dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom-generated using GPT-3.5.
A script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
|
Thaweewat/alpaca-finance-43k-th
|
[
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] |
2023-05-09T18:01:32+00:00
|
{"language": ["th"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization"], "tags": ["instruction-finetuning"]}
|
2023-05-09T18:05:48+00:00
|
1347588118f8917698baeec4bce39cc888d2e74f
|
# Dataset Card for "hed_filter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mingyy/hed_filter
|
[
"region:us"
] |
2023-05-09T18:21:01+00:00
|
{"dataset_info": {"features": [{"name": "hed", "dtype": "image"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8381375865.03, "num_examples": 52582}], "download_size": 7857481203, "dataset_size": 8381375865.03}}
|
2023-05-09T19:02:02+00:00
|
e8f53d148736c3f40e7364006ddca7eba5013354
|
# myanimelist-embeddings
This dataset is every non-empty anime synopsis from [MyAnimeList.net](https://myanimelist.net) run
through the `embed-multilingual-v2.0` embedding model from [Cohere AI](https://cohere.com).
## Sample code for searching for anime
Install some dependencies
```
pip install cohere==4.4.1 datasets==2.12.0 torch==2.0.1
```
Code heavily inspired by the [Cohere Wikipedia embeddings sample](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings#search)
```python
import os

import cohere
import torch
from datasets import load_dataset

co = cohere.Client(
    os.environ["COHERE_API_KEY"]
)  # Add your Cohere API key from www.cohere.com

docs_stream = load_dataset(
    "abatilo/myanimelist-embeddings", split="train", streaming=True
)

docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc["embedding"])
doc_embeddings = torch.tensor(doc_embeddings)

while True:
    query = input("What do you want to see?: ")
    response = co.embed(texts=[query], model="embed-multilingual-v2.0")
    query_embedding = response.embeddings
    query_embedding = torch.tensor(query_embedding)

    # Compute dot score between query embedding and document embeddings
    dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
    top_k = torch.topk(dot_scores, k=3)

    for doc_id in top_k.indices[0].tolist():
        print(docs[doc_id]["title"])
        print(docs[doc_id]["synopsis"], "\n")
```
## Sample search queries
### high schoolers with super powers fight evil
```
What do you want to see?: high schoolers with super powers fight evil
Kigurumi Sentai Quiltian
Twin schoolgirls transform into their superhero aspects to save the world from an evil cabal of would-be dictators, but they can only fight for justice by having a lot of sex.
(Source: ANN)
Kekkaishi
Yoshimura Sumimura comes from a long line of "Kekkaishi," individuals who have supernatural abilities and are able to destroy evil creatures called Ayakashi that venture into the human realm from time to time. The Ayakashi are demons that look to feast on the power emanating from the land of Karasumori, which also happens to be where Yoshimura's high school is located. Now, Yoshimura must fight to protect his beloved school and hometown. Although, if it were up to him, he would rather be baking cakes than fighting off the ugly characters that show up at night.
Thankfully, Yoshimura isn't the only one helping to keep the baddies at bay. His childhood friend and neighbor, Tokine Yukimura, joins him in this righteous battle. Despite the fact that they are from rival clans, these two make a fantastic team. And teamwork is something vital to fighting the evil that is closing in, as the Ayakashi attack in waves, looking to claim the land as their own, and a shadowy organization looks on, ready to pounce when the time is right...
Shiritsu Araiso Koutougakkou Seitokai Shikkoubu
Kubota Makoto and Tokitoh Minoru (characters from Kazuya Minekura's manga Wild Adaptor—though no reference is made to the darker storyline of WA in this light-hearted anime)—are the muscle of their high school's all-powerful student council. They defend the student body from disorder—generated by both humans and demons—while avoiding their classes.
(Source: ANN)
```
### a pokemon trainer wants to be the very best
```
What do you want to see?: a pokemon trainer wants to be the very best
Pokemon
Pokémon are peculiar creatures with a vast array of different abilities and appearances; many people, known as Pokémon trainers, capture and train them, often with the intent of battling others. Young Satoshi has not only dreamed of becoming a Pokémon trainer but also a "Pokémon Master," and on the arrival of his 10th birthday, he finally has a chance to make that dream a reality. Unfortunately for him, all three Pokémon available to beginning trainers have already been claimed and only Pikachu, a rebellious Electric-type Pokémon, remains. However, this chance encounter would mark the start of a lifelong friendship and an epic adventure!
Setting off on a journey to become the very best, Satoshi and Pikachu travel across beautiful, sprawling regions with their friends Kasumi, a Water-type trainer, and Takeshi, a Rock-type trainer. But danger lurks around every corner. The infamous Team Rocket is always nearby, seeking to steal powerful Pokémon through nefarious schemes. It'll be up to Satoshi and his friends to thwart their efforts as he also strives to earn the eight Pokémon Gym Badges he'll need to challenge the Pokémon League, and eventually claim the title of Pokémon Master.
[Written by MAL Rewrite]
Pokemon Best Wishes!
As with both the Advanced Generation and Diamond & Pearl series before it, the Best Wishes! series begins with only Satoshi, headed off to the Isshu region, located far away from Kanto, Johto, Houen, and Sinnoh, with his Pikachu. After he meets up with the new trainer and rival Shooty and the region's Professor Araragi, he gains traveling companions in Iris, a girl from a town known for its Dragon Pokémon, and Dent, Pokémon Connoisseur and the Grass Pokémon specialist of the three Sanyou City Gym Leaders.
Pokemon Sun & Moon
After his mother wins a free trip to the islands, Pokémon trainer Satoshi and his partner Pikachu head for Melemele Island of the beautiful Alola region, which is filled with lots of new Pokémon and even variations of familiar faces. Eager to explore the island, Satoshi and Pikachu run wild with excitement, quickly losing their way while chasing after a Pokémon. The pair eventually stumbles upon the Pokémon School, an institution where students come to learn more about these fascinating creatures.
At the school, when he and one of the students—the no-nonsense Kaki—have a run-in with the nefarious thugs of Team Skull, Satoshi discovers the overwhelming might of the Z-Moves, powerful attacks originating from the Alola region that require the trainer and Pokémon to be in sync. Later that night, he and Pikachu have an encounter with the guardian deity Pokémon of Melemele Island, the mysterious Kapu Kokeko. The Pokémon of legend bestows upon them a Z-Ring, a necessary tool in using the Z-Moves. Dazzled by their earlier battle and now in possession of a Z-Ring, Satoshi and Pikachu decide to stay behind in the Alola Region to learn and master the strength of these powerful new attacks.
Enrolling in the Pokémon School, Satoshi is joined by classmates such as Lillie, who loves Pokémon but cannot bring herself to touch them, Kaki, and many others. Between attending classes, fending off the pesky Team Rocket—who themselves have arrived in Alola to pave the way for their organization's future plans—and taking on the Island Challenge that is necessary to master the Z-Moves, Satoshi and Pikachu are in for an exciting new adventure.
[Written by MAL Rewrite]
```
### hunting demons with swords
```
What do you want to see?: hunting demons with swords
Grandeek
This is a tale of swords and sorcery as the young warrior-woman Tia Allbright and her hapless assistant, Luke, battle demon assassins in a fantasy land.
Tia arrives on the island of Marcleida with her trusted sword 'Grandeek,' which holds a spirit within that helps her on her quests. She is soon turned away however. Determined to get on the island, Tia searches for a way past the fences that guard the entrance, as another stranger arrives on the island to take on a mysterious job. Someone has been killing the inhabitants of the island and has the ability to appear and disappear at will. Seems the sword 'Aihorn' has been stolen and the spirit that resides within it seeks vengenance on those who killed its master 50 years before.
As Tia makes her way inside the island, it becomes clear that both she, and the stranger, are after the sword Aihorn, hoping to bring to an end its bloody goal. But the sword has the ability to possess the person who wields it - putting Tia and the stranger at a great disadvantage.
Based on the manga by Kohime Ohse, Tia and Grandeek will have to face their most difficult challenge yet...
(Source: AnimeNfo)
Bemubemu Hunter Kotengu Tenmaru
Adventures of a demon slayer Tenmaru.
Karasu Tengu Kabuto
500 years ago in the Tensho Era of Japan, a man was born who defied the will of a demon; a man who had gods of good on his side; a man destined to battle evil....his name was Kabuto. Somehow, Kuroyasya Douki, the vile Black Night Demon, escaped his prison in hell and returned to the earthly plane to wreak vengeance on the family-line of Kabuto. None can escape his deadly magic and masterful skills with the blade; however, the gods of the North, West, East, and South band together to help Kabuto stand for Justice. With the questionable help of a diabolical talking sword that his own father forged, Kabuto may live another day to see his own sons born....
```
|
abatilo/myanimelist-embeddings
|
[
"task_categories:text-classification",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-05-09T18:28:09+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "summarization"], "pretty_name": "MyAnimeList Embeddings"}
|
2023-05-09T19:51:17+00:00
|
0222d17a20864daddedb4deb78cc507b691c38ce
|
# Dataset Card for "opam-source"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sadiqj/opam-source
|
[
"region:us"
] |
2023-05-09T18:36:29+00:00
|
{"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "data", "dtype": "string"}, {"name": "license", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1112023408.5842562, "num_examples": 114769}, {"name": "test", "num_bytes": 58532647.41574373, "num_examples": 6041}], "download_size": 330412075, "dataset_size": 1170556056.0}}
|
2023-06-03T19:36:59+00:00
|
d8cd76ab6a28b7c54b127c89c98e9edacddec629
|
# Dataset Card for "TACO_Test_Reformatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RandyHuynh5815/TACO_Test_Reformatted
|
[
"region:us"
] |
2023-05-09T18:38:49+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "categories", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2720258641.5, "num_examples": 1500}], "download_size": 2621965640, "dataset_size": 2720258641.5}}
|
2023-05-09T19:02:05+00:00
|
2b4aef76bbe4d4f538b368fc48a94fbb7ef60aa6
|
# Dataset Card for "la_en_parallel_wrapped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
grosenthal/la_en_parallel_wrapped
|
[
"region:us"
] |
2023-05-09T18:50:24+00:00
|
{"dataset_info": {"features": [{"name": "translation", "struct": [{"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 38435471, "num_examples": 99343}, {"name": "test", "num_bytes": 397107, "num_examples": 1014}, {"name": "valid", "num_bytes": 383636, "num_examples": 1014}], "download_size": 25074136, "dataset_size": 39216214}}
|
2023-05-09T18:50:32+00:00
|
8eebca76b96437bbcaf74d22d40b3d6884018ae3
|
# Dataset Card for "source_filter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mingyy/source_filter
|
[
"region:us"
] |
2023-05-09T19:02:02+00:00
|
{"dataset_info": {"features": [{"name": "source", "dtype": "image"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35130969894.44, "num_examples": 52564}], "download_size": 5459000038, "dataset_size": 35130969894.44}}
|
2023-05-10T05:03:44+00:00
|
ca9342eb5eadaba0cf63453b2a336aba07a28804
|
# IVA Swift GitHub Code Dataset
## Dataset Description
This is the curated IVA Swift dataset extracted from GitHub.
It contains curated Swift files gathered for the purpose of training a code generation model.
The dataset consists of 383380 Swift code files from GitHub totaling ~542 MB of data.
The [uncurated](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint) dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
```
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
print(dataset[723])
#OUTPUT:
{
"repo_name":"jdkelley/Udacity-OnTheMap-ExampleApps",
"path":"TheMovieManager-v2/TheMovieManager/BorderedButton.swift",
"copies":"2",
"size":"2649",
"content":"...let phoneBorderedButtonExtraPadding: CGFloat = 14.0\n \n var backingColor: UIColor? = nil\n var highlightedBackingColor: UIColor? = nil\n \n // MARK: Initialization\n}",
"license":"mit",
"hash":"db1587fd117e9a835f58cf8203d8bf05",
"line_mean":29.1136363636,
"line_max":87,
"alpha_frac":0.6700641752,
"ratio":5.298,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
|hash|string|Hash of content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Max line length of the content.|
|alpha_frac|number|Fraction of alphanumeric characters in the content.|
|ratio|number|Ratio of the number of characters to the number of tokens after tokenization.|
|autogenerated|boolean|True if the content is autogenerated, based on keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if a file has none of the keywords for the Swift programming language.|
|has_few_assignments|boolean|True if the file uses the symbol '=' fewer than a minimum number of times.|
### Instance
```json
{
"repo_name":"...",
"path":".../BorderedButton.swift",
"copies":"2",
"size":"2649",
"content":"...",
"license":"mit",
"hash":"db1587fd117e9a835f58cf8203d8bf05",
"line_mean":29.1136363636,
"line_max":87,
"alpha_frac":0.6700641752,
"ratio":5.298,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
## Languages
The dataset contains only Swift files.
```json
{
"Swift": [".swift"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0":1695,
"apache-2.0":85514,
"artistic-2.0":207,
"bsd-2-clause":3132,
"bsd-3-clause":6600,
"cc0-1.0":1409,
"epl-1.0":605,
"gpl-2.0":9374,
"gpl-3.0":18920,
"isc":808,
"lgpl-2.1":1122,
"lgpl-3.0":3103,
"mit":240929,
"mpl-2.0":8181,
"unlicense":1781
}
```
## Dataset Statistics
```json
{
"Total size": "~542 MB",
"Number of files": 383380,
"Number of files under 500 bytes": 3680,
"Average file size in bytes": 5942,
}
```
## Curation Process
* Removal of duplicate files based on file hash.
* Removal of file templates. Files containing any of the following: `___FILENAME___`, `___PACKAGENAME___`, `___FILEBASENAME___`, `___FILEHEADER___`, `___VARIABLE`
* Removal of files containing any of the following words in the first 10 lines: `generated`, `auto-generated`, `autogenerated`, `automatically generated`
* Removal of files containing any of the following words in the first 10 lines with a probability of 0.7: `test`, `unit test`, `config`, `XCTest`, `JUnit`
* Removal of files with a rate of alphanumeric characters below 0.3 (see the sketch after this list).
* Removal of near-duplicates based on MinHash and Jaccard similarity.
* Removal of files with a mean line length above 100.
* Removal of files without mention of any of the following keywords, with a probability of 0.7: `struct `, `class `, `for `, `while `, `enum `, `func `, `typealias `, `var `, `let `, `protocol `, `public `, `private `, `internal `, `import `
* Removal of files that use the assignment operator `=` fewer than 3 times.
* Removal of files with a ratio between the number of characters and the number of tokens after tokenization lower than 1.5.
The curation process is derived from the one used in the CodeParrot project: https://huggingface.co/codeparrot
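As an illustration, the alphanumeric-character filter mentioned above could be implemented roughly like this (a sketch, not the actual pipeline code; the function names are illustrative):
```python
def alnum_fraction(content: str) -> float:
    """Fraction of alphanumeric characters in a file's content."""
    if not content:
        return 0.0
    return sum(ch.isalnum() for ch in content) / len(content)

def keep_file(content: str, threshold: float = 0.3) -> bool:
    # Files below the threshold are removed during curation.
    return alnum_fraction(content) >= threshold
```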
## Data Splits
The dataset only contains a train split, which is separated into train and valid splits available here:
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
|
mvasiliniuc/iva-swift-codeint-clean
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:100K<n<1M",
"language:code",
"license:other",
"code, swift, native iOS development, curated",
"region:us"
] |
2023-05-09T19:20:44+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["code"], "license": "other", "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "iva-swift-codeint-clean", "tags": ["code, swift, native iOS development, curated"]}
|
2023-06-15T13:48:16+00:00
|
dbf02e80d52986b6636a3c1f039c2b5292f83a43
|
Current size: 53,081 videos
Goal (TODO): 100,000+
|
TempoFunk/medium
|
[
"task_categories:text-to-video",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"region:us"
] |
2023-05-09T19:29:08+00:00
|
{"language": ["en"], "license": "agpl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-video"], "pretty_name": "Medium"}
|
2023-05-13T06:50:37+00:00
|
a77ffcc467331eb0c2c7a38a02c7ffd9aa613c8e
|
chan127ck/temp-dataset
|
[
"license:mit",
"region:us"
] |
2023-05-09T20:26:29+00:00
|
{"license": "mit"}
|
2023-05-10T07:21:58+00:00
|
|
47686238fa95dfe572822b4c02b1ae059d39c0b4
|
# Dataset Card for "trivy-go-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arkaprav0/trivy-go-test
|
[
"region:us"
] |
2023-05-09T20:46:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "installed_version", "dtype": "string"}, {"name": "affected_range", "dtype": "string"}, {"name": "fixed_version", "dtype": "string"}, {"name": "is_false_positive", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5479, "num_examples": 75}], "download_size": 5484, "dataset_size": 5479}}
|
2023-05-09T21:02:10+00:00
|
c4baf1b50ffeac2d1028fc1f6403ddaa014b8568
|
# Distilled CNN/DailyMail Dataset
This folder contains the distilled data and dataset loading script to build a dataset on top of it.
- `cnn_bart_pl` is downloaded from [Saved Pseudo-Labels](https://github.com/huggingface/transformers/blob/main/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md), which is generated by facebook/bart-large-cnn; this corresponds to version "1.0.0". It contains train/validation/test splits.
- `pegasus_cnn_cnn_pls` is also downloaded from [Saved Pseudo-Labels](https://github.com/huggingface/transformers/blob/main/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md). It is generated by sshleifer/pegasus-cnn-ft-v2, and it corresponds to version "2.0.0". It only includes the train split.
## Updates
- 03/16/2023
1. Removed "(CNN)" from the beginning of articles (see the sketch below).
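A minimal sketch of the kind of prefix cleanup described (the exact pattern is an assumption; the actual preprocessing script may differ):
```python
import re

def strip_cnn_prefix(article: str) -> str:
    # Remove a leading "(CNN)" marker and any trailing dashes/whitespace,
    # e.g. "(CNN) -- Story text..." -> "Story text..."
    return re.sub(r"^\s*\(CNN\)\s*(?:--)?\s*", "", article)
```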
|
yuyang/distil_cnndm
|
[
"region:us"
] |
2023-05-09T20:49:50+00:00
|
{}
|
2023-05-14T03:21:46+00:00
|
1f19ff25c54478613acb493cb5d2b9f595e5c37d
|
Originally from [here](https://github.com/amazon-science/dstc11-track2-intent-induction/tree/969b95a0d7365fbc6cd0e05989f1be6b44e6680c/dstc11)
|
gneubig/dstc11
|
[
"license:other",
"region:us"
] |
2023-05-09T23:21:43+00:00
|
{"license": "other"}
|
2023-05-10T00:07:12+00:00
|
2d58c283fe88ebb33d7c4c0b0fffdafb8ca3e5f8
|
robert-altmiller/dolly-code-migration
|
[
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"code",
"dataset",
"region:us"
] |
2023-05-09T23:53:21+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "pretty_name": "dolly-code-migration", "tags": ["code", "dataset"]}
|
2023-05-15T11:58:40+00:00
|
|
35ab144886d0e0563244a2569060ef6bb55eefef
|
koishi instruct metharme dataset, currently 414,862 lines
- oasst is from ewof/oasst-convo-unfiltered-deduped
- sharegpt (vicuna) is from ewof/sharegpt-instruct-unfiltered-deduped
- dolly is from ewof/dolly-instruct-unfiltered-deduped
- hh-rlhf is from ewof/hh-rlhf-instruct-unfiltered-deduped
- self_instruct is from ewof/self-instruct-unfiltered-deduped
- hf_instruction is from ewof/hf-instruction-unfiltered
- gpteacher is from ewof/gpteacher-unfiltered
- asss is from ewof/asss-unfiltered-deduped
- code_alpaca is from ewof/code-alpaca-instruct-unfiltered
- synthetic_instruct is from ewof/synthetic-instruct-unfiltered-deduped
- flan is from ewof/flan_unfiltered
These each have their own READMEs that explain how I parsed them.
- evol instruct code is from nickrosh/Evol-Instruct-Code-80k-v1
- wizard is from ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- airoboros is from jondurbin/airoboros-2.2.1 (I filtered out orca entries, since orca has flan prompts and koishi already has flan)
- llamini is from MBZUAI/LaMini-instruction. I ran llamini_to_metharme.py, then ran llamini_merge_dedupe.py with koishi_data_metharme.jsonl (generated with merge.py and everything in the subsets folder except llamini_data_metharme.jsonl) as the k file and llamini_data_metharme.jsonl as the lm file.
|
ewof/koishi-instruct-metharme
|
[
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-05-10T00:17:40+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "pretty_name": "koishi instruct metharme", "viewer": false}
|
2024-02-02T04:22:55+00:00
|
318b67acb6aa53813bdde729c47cd608ec3cc826
|
Copied from https://huggingface.co/datasets/vctk, with added support for setting `data_files`.
Usage example:
```python
from datasets import load_dataset

ds = load_dataset('SeanSleat/vctk', data_files='/path/to/VCTK-Corpus-0.92.zip')
```
|
SeanSleat/vctk
|
[
"region:us"
] |
2023-05-10T00:28:08+00:00
|
{}
|
2023-05-10T00:35:43+00:00
|
99e00aa0ed8bae137886964c0046b6eaab01dfe9
|
# DivSumm summarization dataset
Dataset introduced in the paper: Analyzing the Dialect Diversity in Multi-document Summaries (COLING 2022)
_Olubusayo Olabisi, Aaron Hudson, Antonie Jetter, Ameeta Agrawal_
DivSumm is a novel dataset consisting of dialect-diverse tweets and human-written extractive and abstractive summaries. It consists of 90 tweets on each of 25 topics, written in multiple English dialects (African-American, Hispanic, and White), with two reference summaries per input.
## Directories
input_docs - 90 tweets per topic evenly distributed among 3 dialects; total 25 topics
abstractive - Two annotators were asked to summarize each topic in 5 sentences using their own words.
extractive - Two annotators were asked to select 5 tweets from each topic that summarized the input tweets.
## Paper
You can find our paper [here](https://aclanthology.org/2022.coling-1.542/). If you use this dataset in your work, please cite our paper:
```
@inproceedings{olabisi-etal-2022-analyzing,
    title = "Analyzing the Dialect Diversity in Multi-document Summaries",
    author = "Olabisi, Olubusayo and Hudson, Aaron and Jetter, Antonie and Agrawal, Ameeta",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
}
```
|
Bisi/DivSumm
|
[
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"region:us"
] |
2023-05-10T00:31:36+00:00
|
{"task_categories": ["summarization", "text-generation", "text2text-generation"]}
|
2023-05-10T02:02:22+00:00
|
bd38dca7ba17ad5f324822e5635caba19789e000
|
This dataset is databricks/databricks-dolly-15k unfiltered and deduped, removing 640 instances of blatant alignment and 14 duplicates.
14357 instructions remain.
clean.py was first run on https://huggingface.co/datasets/databricks/databricks-dolly-15k/blob/d72c16e4644a463b9c678c71d9440befd4594556/databricks-dolly-15k.jsonl and dedupe.py was then run on it; the output was renamed to .json rather than .jsonl.
inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py. A sketch of the dedupe step appears below.
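For illustration, a minimal exact-duplicate dedupe of the kind described might look like this (a sketch only; the actual dedupe.py is not reproduced here and may differ):
```python
import json

def dedupe_jsonl(path_in: str, path_out: str) -> None:
    # Keep the first occurrence of each record, comparing full JSON lines.
    seen, kept = set(), []
    with open(path_in) as f:
        for line in f:
            line = line.strip()
            if line and line not in seen:
                seen.add(line)
                kept.append(json.loads(line))
    # Write a single JSON array, matching the .json (not .jsonl) output noted above.
    with open(path_out, "w") as f:
        json.dump(kept, f, indent=2)
```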
|
ewof/dolly-instruct-unfiltered-deduped
|
[
"region:us"
] |
2023-05-10T00:39:20+00:00
|
{}
|
2023-05-13T02:54:10+00:00
|
fc191943f068d044cbe963c365c0a59384699ee1
|
# Dataset Card for `arxiv_astro_co_ga`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset consisting of titles and abstracts for all Cosmology and Galaxy Astrophysics arXiv articles to date (99,659 papers).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
```
{'title': 'Probing cluster formation under extreme conditions: massive star clusters in blue compact galaxies',
'abstract': ' The numerous and massive young star clusters in blue compact galaxies (BCGs) are used to investigate the properties of their hosts. We test whether BCGs follow claimed relations between cluster populations and their hosts, such as the the fraction of the total luminosity contributed by the clusters as function of the mean star formation rate density; the $V$ band luminosity of the brightest youngest cluster as related to the mean host star formation rate; and the cluster formation efficiency (i.e., the fraction of star formation happening in star clusters) versus the density of the SFR. We find that BCGs follow the trends, supporting a scenario where cluster formation and environmental properties of the host are correlated. They occupy, in all the diagrams, the regions of higher SFRs, as expected by the extreme nature of the starbursts operating in these systems. We find that the star clusters contribute almost to the 20 % of the UV luminosity of the hosts. We suggest that the BCG starburst environment has most likely favoured the compression and collapse of the giant molecular clouds, enhancing the local star formation efficiency, so that massive clusters have been formed. The estimated cluster formation efficiency supports this scenario. BCGs have a cluster formation efficiency comparable to luminous IR galaxies and spiral starburst nuclei (the averaged value is about 35 %) which is much higher than the 8 - 10 % reported for quiescent spirals and dwarf star-forming galaxies. '
}
```
### Data Fields
- `title`: Title of the paper
- `abstract`: The abstract of the paper
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for these splits.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 79,727                       |
| Validation    | 9,966                        |
| Test          | 9,966                        |
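A minimal loading sketch (assuming the standard `datasets` API; split names follow the table above):
```python
from datasets import load_dataset

# Load all three splits of the astro-ph.CO / astro-ph.GA corpus.
ds = load_dataset("mehnaazasad/arxiv_astro_co_ga")

print(ds["train"][0]["title"])  # title of the first training paper
print(len(ds["validation"]))    # expected: 9,966
```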
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The original dataset from which this subset was constructed can be found here: [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Various authors.
### Annotations
This dataset contains no annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
No author information included in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by arXiv; huge thanks to the team for building and maintaining that dataset.
### Licensing Information
The arxiv_astro_co_ga dataset version 1.0.0 is released under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
[More Information Needed]
|
mehnaazasad/arxiv_astro_co_ga
|
[
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:1905.00075",
"region:us"
] |
2023-05-10T00:54:30+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["summarization"]}
|
2023-05-10T01:47:29+00:00
|
b11930911764dab4e9a294c5b22779491aaec9db
|
This dataset is https://github.com/tatsu-lab/stanford_alpaca unfiltered, removing 2095 instances of blatant alignment.
49907 instructions remain.
clean.py was first run on https://github.com/tatsu-lab/stanford_alpaca/blob/65512697dc67779a6e53c267488aba0ec4d7c02a/alpaca_data.json
The normal dedupe.py script didn't find any dupes here.
inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py.
|
ewof/alpaca-instruct-unfiltered
|
[
"region:us"
] |
2023-05-10T00:55:07+00:00
|
{}
|
2023-05-13T02:54:52+00:00
|
1483d6f4e0fef77f606b639a339ffd60cb145b77
|
rizquuula/IndonesianFactHoaxPoliticalNews
|
[
"license:apache-2.0",
"region:us"
] |
2023-05-10T01:02:39+00:00
|
{"license": "apache-2.0"}
|
2023-05-10T01:02:39+00:00
|
|
731b87e75038c2ac2fae498d80e9f46da2e432ff
|
# Dataset Card for ChemSum
## ChemSum Description
<!---- **Homepage:**
- **Leaderboard:**
----->
- **Paper:** [What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization ](https://arxiv.org/abs/2305.07615)
- **Journal:** ACL 2023
- **Point of Contact:** [email protected]
- **Repository:** https://github.com/griff4692/calibrating-summaries
### ChemSum Summary
We introduce a dataset with a pure chemistry focus by compiling a list of chemistry academic journals with Open-Access articles. For each journal, we downloaded full-text article PDFs from the Open-Access portion of the journal using available APIs or by scraping this content using [Selenium Chrome WebDriver](https://www.selenium.dev/documentation/webdriver/).
Each PDF was processed with Grobid via a locally installed [client](https://pypi.org/project/grobid-client-python/) to extract free-text paragraphs with sections.
The table below shows the journals from which Open Access articles were sourced, as well as the number of papers processed.
For all journals, we filtered for papers with the provided topic of Chemistry when papers from other disciplines were also available (e.g. PubMed).
| Source | # of Articles |
| ----------- | ----------- |
| Beilstein | 1,829 |
| Chem Cell | 546 |
| ChemRxiv | 12,231 |
| Chemistry Open | 398 |
| Nature Communications Chemistry | 572 |
| PubMed Author Manuscript | 57,680 |
| PubMed Open Access | 29,540 |
| Royal Society of Chemistry (RSC) | 9,334 |
| Scientific Reports - Nature | 6,826 |
<!---
### Supported Tasks and Leaderboards
[More Information Needed]
--->
### Languages
English
## Dataset Structure
<!--- ### Data Instances --->
### Data Fields
| Column | Description |
| ----------- | ----------- |
| `uuid` | Unique Identifier for the Example |
| `title` | Title of the Article |
| `article_source` | Open Source Journal (see above for list) |
| `abstract` | Abstract (summary reference) |
| `sections` | Full-text sections from the main body of the paper (`<!>` indicates section boundaries) |
| `headers` | Corresponding section headers for the `sections` field (`<!>` delimited) |
| `source_toks` | Aggregate number of tokens across `sections` |
| `target_toks` | Number of tokens in the `abstract` |
| `compression` | Ratio of `source_toks` to `target_toks` |
Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-summaries/blob/master/preprocess/preprocess.py for pre-processing as a summarization dataset. The inputs are `sections` and `headers`, and the target is the `abstract`. A sketch of re-pairing the delimited fields appears below.
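A minimal sketch of pairing headers with their sections (assuming `<!>` is the literal boundary token, as described in the table above; this is not the project's own preprocessing code):
```python
def split_sections(example: dict) -> list[tuple[str, str]]:
    # Pair each section header with its body text; both fields use
    # the literal "<!>" token as a section boundary.
    headers = example["headers"].split("<!>")
    sections = example["sections"].split("<!>")
    return list(zip(headers, sections))
```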
### Data Splits
| Split | Count |
| ----------- | ----------- |
| `train` | 115,956 |
| `validation` | 1,000 |
| `test` | 2,000 |
### Citation Information
```
@inproceedings{adams-etal-2023-desired,
title = "What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization",
author = "Adams, Griffin and
Nguyen, Bichlien and
Smith, Jake and
Xia, Yingce and
Xie, Shufang and
Ostropolets, Anna and
Deb, Budhaditya and
Chen, Yuan-Jyue and
Naumann, Tristan and
Elhadad, No{\'e}mie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.587",
doi = "10.18653/v1/2023.acl-long.587",
pages = "10520--10542",
abstract = "Summarization models often generate text that is poorly calibrated to quality metrics because they are trained to maximize the likelihood of a single reference (MLE). To address this, recent work has added a calibration step, which exposes a model to its own ranked outputs to improve relevance or, in a separate line of work, contrasts positive and negative sets to improve faithfulness. While effective, much of this work has focused on \textit{how} to generate and optimize these sets. Less is known about \textit{why} one setup is more effective than another. In this work, we uncover the underlying characteristics of effective sets. For each training instance, we form a large, diverse pool of candidates and systematically vary the subsets used for calibration fine-tuning. Each selection strategy targets distinct aspects of the sets, such as lexical diversity or the size of the gap between positive and negatives. On three diverse scientific long-form summarization datasets (spanning biomedical, clinical, and chemical domains), we find, among others, that faithfulness calibration is optimal when the negative sets are extractive and more likely to be generated, whereas for relevance calibration, the metric margin between candidates should be maximized and surprise{--}the disagreement between model and metric defined candidate rankings{--}minimized.",
}
```
<!---
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Contributions
[More Information Needed]
--->
|
griffin/ChemSum
|
[
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"chemistry",
"biology",
"medical",
"arxiv:2305.07615",
"region:us"
] |
2023-05-10T01:05:05+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization"], "pretty_name": "Generating Abstracts of Academic Chemistry Papers", "tags": ["chemistry", "biology", "medical"]}
|
2024-01-20T12:38:53+00:00
|
94f8a39ed6cc74cadf4066649c363ce2f30f6bbd
|
This dataset is https://github.com/yizhongw/self-instruct unfiltered and deduped, removing 1600 instances of blatant alignment and 26 duplicates.
80813 instructions remain.
clean.py was first run on https://github.com/yizhongw/self-instruct/blob/0b26ccaa415992100fa32df62d41b994cf928e23/data/gpt3_generations/batch_221203/all_instances_82K.jsonl and dedupe.py was then run on it; the output was renamed to .json rather than .jsonl.
inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
All credit to anon8231489123 for the cleanup script that I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py.
|
ewof/self-instruct-unfiltered-deduped
|
[
"region:us"
] |
2023-05-10T01:14:49+00:00
|
{}
|
2023-09-30T23:24:21+00:00
|