sha stringlengths 40-40 | text stringlengths 0-13.4M | id stringlengths 2-117 | tags list | created_at stringlengths 25-25 | metadata stringlengths 2-31.7M | last_modified stringlengths 25-25
---|---|---|---|---|---|---
3cb17f541d4bc97f52cf5d1a3031a11a859ee79d
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_30_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_30_1000
|
[
"region:us"
] |
2023-04-20T14:51:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 978, "num_examples": 32}], "download_size": 2232, "dataset_size": 978}}
|
2023-04-20T14:51:47+00:00
|
495b9905fa243966b78cf215c0d8398354e45a58
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_24_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_24_1000
|
[
"region:us"
] |
2023-04-20T14:51:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 954, "num_examples": 32}], "download_size": 2023, "dataset_size": 954}}
|
2023-04-20T14:51:49+00:00
|
1e10cb8ace0b8fb404e3b31d972106d780e838ff
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_27_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_27_1000
|
[
"region:us"
] |
2023-04-20T14:51:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 960, "num_examples": 32}], "download_size": 1935, "dataset_size": 960}}
|
2023-04-20T14:51:48+00:00
|
aa9164cb21e963f6ce8447f937da69dc46c432ac
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_1000
|
[
"region:us"
] |
2023-04-20T14:51:47+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 982, "num_examples": 32}], "download_size": 2050, "dataset_size": 982}}
|
2023-04-20T14:51:49+00:00
|
e8c3167d30aad3b5e385b3624940862a15164ddd
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_29_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_29_1000
|
[
"region:us"
] |
2023-04-20T14:51:49+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 958, "num_examples": 32}], "download_size": 2078, "dataset_size": 958}}
|
2023-04-20T14:51:51+00:00
|
26059a2ca3619e78649f76f1c3da9e108cc78471
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_1000
|
[
"region:us"
] |
2023-04-20T14:51:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1025, "num_examples": 32}], "download_size": 2147, "dataset_size": 1025}}
|
2023-04-20T14:51:54+00:00
|
1aa297a5b289db1f3ad09c04c7b2171edeb569ab
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_22_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_22_1000
|
[
"region:us"
] |
2023-04-20T14:51:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 961, "num_examples": 32}], "download_size": 2111, "dataset_size": 961}}
|
2023-04-20T14:51:55+00:00
|
f4a1d817150610bc0d7884e7dbe6d23b5c3f4a8d
|
# Dataset Card for "sanskrit-stemming-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sanskrit-stemming-512
|
[
"region:us"
] |
2023-04-20T15:09:21+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162195057, "num_examples": 176802}, {"name": "validation", "num_bytes": 1134772, "num_examples": 1293}, {"name": "test", "num_bytes": 1177379, "num_examples": 1352}, {"name": "test_long_500", "num_bytes": 428925, "num_examples": 500}, {"name": "validation_long_500", "num_bytes": 440736, "num_examples": 500}], "download_size": 84910078, "dataset_size": 165376869}}
|
2023-05-02T01:57:30+00:00
|
9ddb5545bf84ea2780fcf741cff9fc5ede93a3c2
|
# Dataset Card for "sanskrit-stemming-sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sanskrit-stemming-sentences
|
[
"region:us"
] |
2023-04-20T15:09:42+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72623052, "num_examples": 614286}, {"name": "validation", "num_bytes": 4340386, "num_examples": 38227}, {"name": "test", "num_bytes": 3794629, "num_examples": 32045}, {"name": "test_500", "num_bytes": 53161, "num_examples": 500}, {"name": "validation_500", "num_bytes": 64578, "num_examples": 500}], "download_size": 38399, "dataset_size": 80875806}}
|
2023-04-20T15:23:49+00:00
|
61d32de02ff062ebc37eb11540e15803de4353f3
|
EpicAlpha/philosophy
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-20T15:26:03+00:00
|
{"license": "apache-2.0"}
|
2023-04-20T15:26:03+00:00
|
|
aac4777f0df22ca87a07c82d37e24e9bb0624c9f
|
# Dataset Card for "arquivo_news_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jfecunha/arquivo_news_coco
|
[
"region:us"
] |
2023-04-20T15:28:25+00:00
|
{"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "binary"}, {"name": "objects", "list": [{"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64"}, {"name": "category_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "ignore", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "segmentation", "sequence": "null"}]}, {"name": "source", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 416095901, "num_examples": 531}, {"name": "test", "num_bytes": 103109045, "num_examples": 133}], "download_size": 512598278, "dataset_size": 519204946}}
|
2023-04-20T15:46:59+00:00
|
e91d267deff12109a5d5a322d79e3c27dbc67589
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_29_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_29_10000000
|
[
"region:us"
] |
2023-04-20T15:31:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191924, "num_examples": 6699}], "download_size": 123415, "dataset_size": 191924}}
|
2023-04-20T15:31:48+00:00
|
997eea0394b183b1031832ce1b9e045ca4ab4212
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_27_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_27_10000000
|
[
"region:us"
] |
2023-04-20T15:32:11+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191649, "num_examples": 6699}], "download_size": 122334, "dataset_size": 191649}}
|
2023-04-20T15:32:14+00:00
|
b634c5dcabc6fcffc5da72e7a94618d71e28ee65
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_18_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_18_10000000
|
[
"region:us"
] |
2023-04-20T15:32:14+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192278, "num_examples": 6699}], "download_size": 123264, "dataset_size": 192278}}
|
2023-04-20T15:32:16+00:00
|
2c11ff1adaed7f3cfa821f02ae391b729ed6fc58
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_26_10000000
|
[
"region:us"
] |
2023-04-20T15:32:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191998, "num_examples": 6699}], "download_size": 122451, "dataset_size": 191998}}
|
2023-04-20T15:32:23+00:00
|
24a6a61623d153cfb0563df079f11a0ed051cffb
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_2_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_2_10000000
|
[
"region:us"
] |
2023-04-20T15:32:22+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191148, "num_examples": 6699}], "download_size": 121747, "dataset_size": 191148}}
|
2023-04-20T15:32:24+00:00
|
1c8a67c57f9001060d75ba2a590a78070e993fee
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_25_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_25_10000000
|
[
"region:us"
] |
2023-04-20T15:32:24+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191229, "num_examples": 6699}], "download_size": 122240, "dataset_size": 191229}}
|
2023-04-20T15:32:26+00:00
|
96f48080729ecca2dbdd79fa1bd4e9e807848ecb
|
# Dataset Card for "bigbio-ner-merged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bio-datasets/bigbio-ner-merged
|
[
"region:us"
] |
2023-04-20T15:32:25+00:00
|
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "ner_tags", "sequence": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "types", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 731669097, "num_examples": 125928}], "download_size": 141384126, "dataset_size": 731669097}}
|
2023-04-20T15:34:19+00:00
|
a5a070afa50cade665a2fd4bf5e062ba08b2e53e
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_3_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_3_10000000
|
[
"region:us"
] |
2023-04-20T15:32:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191373, "num_examples": 6699}], "download_size": 121674, "dataset_size": 191373}}
|
2023-04-20T15:32:30+00:00
|
57ca8297aa7ab0d776218651cafa41953c877877
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_20_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_20_10000000
|
[
"region:us"
] |
2023-04-20T15:32:31+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190944, "num_examples": 6699}], "download_size": 122716, "dataset_size": 190944}}
|
2023-04-20T15:32:34+00:00
|
d9ddfae9fdd2f60015f37c5b9ec98acb99b4885e
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_28_10000000
|
[
"region:us"
] |
2023-04-20T15:32:37+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193557, "num_examples": 6699}], "download_size": 123586, "dataset_size": 193557}}
|
2023-04-20T15:32:40+00:00
|
7197677a95d4368e512c99a567a2b8244a5251ec
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_14_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_14_10000000
|
[
"region:us"
] |
2023-04-20T15:32:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191231, "num_examples": 6699}], "download_size": 122788, "dataset_size": 191231}}
|
2023-04-20T15:32:40+00:00
|
6d9525f74fb33e1bbe63d1d21eb8489423ac562f
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_30_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_30_10000000
|
[
"region:us"
] |
2023-04-20T15:32:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192545, "num_examples": 6699}], "download_size": 122913, "dataset_size": 192545}}
|
2023-04-20T15:32:40+00:00
|
6b7aeebebd67e6726d00f332bfbb814084c797af
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_19_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_19_10000000
|
[
"region:us"
] |
2023-04-20T15:32:39+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191895, "num_examples": 6699}], "download_size": 121556, "dataset_size": 191895}}
|
2023-04-20T15:32:42+00:00
|
f7dc0a511c988e98a8e184e44b15d36c1709e904
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_5_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_5_10000000
|
[
"region:us"
] |
2023-04-20T15:32:44+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192165, "num_examples": 6699}], "download_size": 123631, "dataset_size": 192165}}
|
2023-04-20T15:32:46+00:00
|
c0e3cd16e0354a6ed812932200196f5d19efc5cc
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_16_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_16_10000000
|
[
"region:us"
] |
2023-04-20T15:32:48+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190465, "num_examples": 6699}], "download_size": 122040, "dataset_size": 190465}}
|
2023-04-20T15:32:51+00:00
|
8c53086087fc30f2ccb37c34db828ec5aaac3edc
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_21_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_21_10000000
|
[
"region:us"
] |
2023-04-20T15:32:48+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192155, "num_examples": 6699}], "download_size": 122819, "dataset_size": 192155}}
|
2023-04-20T15:32:51+00:00
|
0bf36793feb28d88d99352fbfea9ce2a50cd6d05
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_1_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_1_10000000
|
[
"region:us"
] |
2023-04-20T15:32:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191265, "num_examples": 6699}], "download_size": 122134, "dataset_size": 191265}}
|
2023-04-20T15:32:55+00:00
|
bbfcac22df91c5e337df10c8b4a47d1aa7977169
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_31_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_31_10000000
|
[
"region:us"
] |
2023-04-20T15:32:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190615, "num_examples": 6699}], "download_size": 122686, "dataset_size": 190615}}
|
2023-04-20T15:32:55+00:00
|
40fcf9b57f0c6ff95841a6fdef8024ed31b2f669
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_8_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_8_10000000
|
[
"region:us"
] |
2023-04-20T15:32:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192259, "num_examples": 6699}], "download_size": 123193, "dataset_size": 192259}}
|
2023-04-20T15:32:56+00:00
|
e1a5619cc408455d7c4292433936ea4cab5b1683
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_22_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_22_10000000
|
[
"region:us"
] |
2023-04-20T15:33:07+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192708, "num_examples": 6699}], "download_size": 122988, "dataset_size": 192708}}
|
2023-04-20T15:33:09+00:00
|
51c62faefe716c1ab74e456dbcbcaa9a6335925e
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_24_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_24_10000000
|
[
"region:us"
] |
2023-04-20T15:33:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193096, "num_examples": 6699}], "download_size": 124294, "dataset_size": 193096}}
|
2023-04-20T15:33:10+00:00
|
b0bac17ccc8f43ea13fafae7b4c6990fe89e32ec
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_7_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_7_10000000
|
[
"region:us"
] |
2023-04-20T15:33:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192629, "num_examples": 6699}], "download_size": 123258, "dataset_size": 192629}}
|
2023-04-20T15:33:12+00:00
|
f30ef8c3e5913621fbe0c9ebd70f7b0a48f77566
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_17_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_17_10000000
|
[
"region:us"
] |
2023-04-20T15:33:18+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193027, "num_examples": 6699}], "download_size": 123620, "dataset_size": 193027}}
|
2023-04-20T15:33:20+00:00
|
c6b48527f07bbf5c5366a80699d7a9af6867b14e
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_4_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_4_10000000
|
[
"region:us"
] |
2023-04-20T15:33:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192489, "num_examples": 6699}], "download_size": 122768, "dataset_size": 192489}}
|
2023-04-20T15:33:32+00:00
|
c82e69cb1c4fe9d70c01bdbf20989af44f3653a3
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_9_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_9_10000000
|
[
"region:us"
] |
2023-04-20T15:33:33+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192245, "num_examples": 6699}], "download_size": 123141, "dataset_size": 192245}}
|
2023-04-20T15:33:35+00:00
|
bff1247a1b36176943c136af7f7fdeeb29f34723
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_11_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_11_10000000
|
[
"region:us"
] |
2023-04-20T15:33:35+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193615, "num_examples": 6699}], "download_size": 124007, "dataset_size": 193615}}
|
2023-04-20T15:33:37+00:00
|
4f5492ef2b41d381888da4fef1479e5f1f61d475
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_0_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_0_10000000
|
[
"region:us"
] |
2023-04-20T15:33:39+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190746, "num_examples": 6699}], "download_size": 122266, "dataset_size": 190746}}
|
2023-04-20T15:33:42+00:00
|
ac18f410613b604e7599179e04e303aa51e5f2f9
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_10_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_10_10000000
|
[
"region:us"
] |
2023-04-20T15:33:41+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193066, "num_examples": 6699}], "download_size": 122885, "dataset_size": 193066}}
|
2023-04-20T15:33:43+00:00
|
653ac3733fd2149249423d3fbe1f019b8d094e55
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_15_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_15_10000000
|
[
"region:us"
] |
2023-04-20T15:33:52+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192585, "num_examples": 6699}], "download_size": 123664, "dataset_size": 192585}}
|
2023-04-20T15:33:54+00:00
|
9dd106940b298f189f8651a9e481c5cbcf3b5ca3
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_23_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_23_10000000
|
[
"region:us"
] |
2023-04-20T15:34:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192706, "num_examples": 6699}], "download_size": 122845, "dataset_size": 192706}}
|
2023-04-20T15:34:05+00:00
|
e3ba6443cdad25a72af9f0d7e24fc0f8419f5ff0
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_13_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_13_10000000
|
[
"region:us"
] |
2023-04-20T15:34:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192699, "num_examples": 6699}], "download_size": 124923, "dataset_size": 192699}}
|
2023-04-20T15:34:16+00:00
|
4ebb2ac5e3ece2c765faa36e7234cdb1654e7541
|
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_12_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_12_10000000
|
[
"region:us"
] |
2023-04-20T15:34:22+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 191441, "num_examples": 6699}], "download_size": 122424, "dataset_size": 191441}}
|
2023-04-20T15:34:25+00:00
|
2a61307909cead3bc5f84e4de55cc7af4dad6a2c
|
# Dataset Card for "voxelgym_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Cubpaw/voxelgym_demo
|
[
"region:us"
] |
2023-04-20T16:03:28+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}, {"name": "rgb_label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 70717.0, "num_examples": 40}, {"name": "validation", "num_bytes": 17077.0, "num_examples": 10}], "download_size": 79483, "dataset_size": 87794.0}}
|
2023-04-20T16:12:57+00:00
|
c25b641118dc5abe5faabe803acc55138a36b99d
|
# Dataset Card for "hu_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
berrypi/hu_corpora_parliament_processed
|
[
"region:us"
] |
2023-04-20T16:10:37+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105042739, "num_examples": 625178}], "download_size": 59005996, "dataset_size": 105042739}}
|
2023-04-20T16:10:53+00:00
|
ffa28e9666548516b067396a117c554ce63ab21f
|
totoztak/totokatz
|
[
"license:unknown",
"region:us"
] |
2023-04-20T16:13:35+00:00
|
{"license": "unknown"}
|
2023-04-20T16:20:19+00:00
|
|
f80e7accdbe9beb1521da19ef831308463b5538d
|
# Dataset Card for "alpaca-bangla_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nihalbaig/alpaca-bangla_validation
|
[
"region:us"
] |
2023-04-20T16:29:03+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2487054, "num_examples": 1258}], "download_size": 0, "dataset_size": 2487054}}
|
2023-11-15T14:21:29+00:00
|
302698398220b1fb16f4012607821dc207b9ad73
|
Dataset from Kaggle
|
andrewgray11/paison_et_al
|
[
"region:us"
] |
2023-04-20T16:30:52+00:00
|
{}
|
2023-04-20T16:33:05+00:00
|
0257f5e932d3d7e8678d6c63287db07d8101f423
|
# Dataset Card for "mnist_palette_1_bit_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
h4rr9/mnist_palette_num_1_bit
|
[
"region:us"
] |
2023-04-20T16:52:01+00:00
|
{"dataset_info": {"features": [{"name": "captions", "dtype": "string"}, {"name": "palette_images", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 30810000, "num_examples": 10000}, {"name": "train", "num_bytes": 184860000, "num_examples": 60000}], "download_size": 19420162, "dataset_size": 215670000}}
|
2023-04-20T16:52:09+00:00
|
65e3c86c7260236567748c1b7df7d5689aae0c46
|
# Dataset Card for "mnist_palette_9_bit_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
h4rr9/mnist_palette_num_9_bit
|
[
"region:us"
] |
2023-04-20T16:52:28+00:00
|
{"dataset_info": {"features": [{"name": "captions", "dtype": "string"}, {"name": "palette_images", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 51290000, "num_examples": 10000}, {"name": "train", "num_bytes": 307740000, "num_examples": 60000}], "download_size": 41085975, "dataset_size": 359030000}}
|
2023-04-20T16:52:38+00:00
|
6e6ebf615cc48382daae6fd3bc4b5518dc493c68
|
# Dataset Card for "discursos_balanceados_con_etiqueta_cortos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sleoruiz/discursos_balanceados_con_etiqueta_cortos
|
[
"region:us"
] |
2023-04-20T17:12:22+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "comision", "dtype": "string"}, {"name": "gaceta_numero", "dtype": "string"}, {"name": "fecha_gaceta", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3690107, "num_examples": 2773}], "download_size": 1989489, "dataset_size": 3690107}}
|
2023-04-20T17:12:28+00:00
|
eab68caa5bff1ccdf3624e484c245f833153bcfa
|
kenjiqq/imagereward-evaluation
|
[
"license:cc0-1.0",
"region:us"
] |
2023-04-20T17:12:32+00:00
|
{"license": "cc0-1.0"}
|
2023-04-20T21:26:24+00:00
|
|
63103348d4e2e1561e20e931a1faf0650f6fda7f
|
# Dataset Card for AbLit
## Dataset Description
- **Homepage:** https://github.com/roemmele/AbLit
- **Repository:** https://github.com/roemmele/AbLit
- **Paper:** https://arxiv.org/pdf/2302.06579.pdf
- **Point of Contact:** [email protected]
### Dataset Summary
The AbLit dataset contains **ab**ridged versions of 10 classic English **lit**erature books, aligned with their original versions on various passage levels.
The abridgements were written and made publicly available by Emma Laybourn [here](http://www.englishliteratureebooks.com/classicnovelsabridged.html).
This is the first known dataset for NLP research that focuses on the abridgement task.
See the paper for a detailed description of the dataset, as well as the results of several modeling experiments. The GitHub repo also provides more extensive ways to interact with the data beyond what is provided here.
### Languages
English
## Dataset Structure
Each passage in the original version of a book chapter is aligned with its corresponding passage in the abridged version. These aligned pairs are available for various passage sizes: sentences, paragraphs, and multi-paragraph "chunks". The passage size is specified when loading the dataset. There are train/dev/test splits for items of each size.
| Passage Size | Description | # Train | # Dev | # Test |
| --------------------- | ------------- | ------- | ------- | ------- |
| chapters | Each passage is a single chapter | 808 | 10 | 50 |
| sentences | Each passage is a sentence delimited by the NLTK sentence tokenizer | 122,219 | 1,143 | 10,431 |
| paragraphs | Each passage is a paragraph delimited by a line break | 37,227 | 313 | 3,125 |
| chunks-10-sentences | Each passage consists of up to X=10 sentences, which may span more than one paragraph; to derive chunks with other lengths X, see the GitHub repo above | 14,857 | 141 | 1,264 |
#### Example Usage
To load aligned paragraphs:
```
from datasets import load_dataset
data = load_dataset("roemmele/ablit", "paragraphs")
```
### Data Fields
- original: passage text in the original version
- abridged: passage text in the abridged version
- book: title of book containing passage
- chapter: title of chapter containing passage
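As a minimal sketch of working with these fields (assuming the splits are exposed under the names `train`/`dev`/`test` described above), one can inspect an aligned pair like this:
```
from datasets import load_dataset

# Load paragraph-level alignments, as in the usage example above.
data = load_dataset("roemmele/ablit", "paragraphs")

# Inspect the first aligned pair; the field names follow the list above.
example = data["train"][0]
print(example["book"], "/", example["chapter"])
print("Original:", example["original"])
print("Abridged:", example["abridged"])
```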
## Dataset Creation
### Curation Rationale
Abridgement is the task of making a text easier to understand while preserving its linguistic qualities. Abridgements are different from typical summaries: whereas summaries abstractively describe the original text, abridgements simplify the original primarily through a process of extraction. We present this dataset to promote further research on modeling the abridgement process.
### Source Data
The author Emma Laybourn wrote abridged versions of classic English literature books available through Project Gutenberg. She has also provided her abridgements for free on her [website](http://www.englishliteratureebooks.com/classicnovelsabridged.html). This is how she describes her work: “This is a collection of famous novels which have been shortened and slightly simplified for the general reader. These are not summaries; each is half to two-thirds of the original length. I’ve selected works that people often find daunting because of their density or complexity: the aim is to make them easier to read, while keeping the style intact.”
#### Initial Data Collection and Normalization
We obtained the original and abridged versions of the books from the respective websites.
#### Who are the source language producers?
Emma Laybourn
### Annotations
#### Annotation process
We designed a procedure for automatically aligning passages between the original and abridged version of each chapter. We conducted a human evaluation to verify these alignments had high accuracy. The training split of the dataset has ~99% accuracy. The dev and test splits of the dataset were fully human-validated to ensure 100% accuracy. See the paper for further explanation.
#### Who are the annotators?
The alignment accuracy evaluation was conducted by the authors of the paper, who have expertise in linguistics and NLP.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset will promote more research on the authoring process for producing abridgements, including models for automatically generating abridgements. Because it is a labor-intensive writing task, there are relatively few abridged versions of books. Systems that automatically produce abridgements could vastly expand the number of abridged versions of books and thus increase their readership.
### Discussion of Biases
We present this dataset to introduce abridgement as an NLP task, but these abridgements are scoped to one small set of texts associated with a specific domain and author. There are significant practical reasons for this limited scope. In particular, in contrast to the books in AbLit, most recently published books are not included in publicly accessible datasets due to copyright restrictions, and the same restrictions typically apply to any abridgements of these books. For this reason, AbLit consists of British English literature from the 18th and 19th centuries. Some of the linguistic properties of these original books do not generalize to other types of English texts that would be beneficial to abridge. Moreover, the narrow cultural perspective reflected in these books is certainly not representative of the diverse modern population. Readers may find some content offensive.
### Dataset Curators
The curators are the authors of the paper.
### Licensing Information
cc-by-sa-4.0
### Citation Information
Roemmele, Melissa, Kyle Shaffer, Katrina Olsen, Yiyi Wang, and Steve DeNeefe. "AbLit: A Resource for Analyzing and Generating Abridged Versions of English Literature." Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2023).
|
roemmele/ablit
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2302.06579",
"region:us"
] |
2023-04-20T18:50:35+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["text-generation", "text2text-generation", "summarization"]}
|
2023-05-08T15:26:23+00:00
|
6e04407c645004f26aa6088163c1d7d7df60a396
|
# Dataset Card for "poses-controlnet-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kfahn/poses-controlnet-dataset
|
[
"region:us"
] |
2023-04-20T19:14:28+00:00
|
{"dataset_info": {"features": [{"name": "original_image", "dtype": "image"}, {"name": "condtioning_image", "dtype": "image"}, {"name": "overlaid", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123997217.0, "num_examples": 496}], "download_size": 124012907, "dataset_size": 123997217.0}}
|
2023-04-20T19:14:43+00:00
|
e3f60f3a04714f6274ff2f168a730cb4d33af757
|
# Dataset Card for Running Records Errors Dataset
## Dataset Description
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Running Records Errors dataset is an English-language dataset containing 1,055,601 sentences based on the Europarl corpus. As described in our paper,
we take the sentences from the English version of the Europarl corpus and randomly inject three types of errors into the sentences: *repetitions*, where
certain words or phrases are repeated, *substitutions*, where certain words are replaced with a different word, and *deletions*, where the word is completely
omitted. The sentences are then passed into a TTS pipeline consisting of TacoTron2 and HifiGAN models to produce audio recordings of those mutated sentences. Lastly,
the data is passed into a Quartznet 15x5 model which produces a transcript of the spoken audio.
### Supported Tasks and Leaderboards
The original purpose of this dataset was to construct a model pipeline that could score running records assessments given a transcript of a child's speech along with
the true text for that assessment. However, we provide this dataset to support other tasks involving error detection in text.
### Languages
All of the data in the dataset is in English.
## Dataset Structure
### Data Instances
For each instance, there is a string for the audio transcript, a string for the original text before we added any errors, as well as a string of the sentence with the errors we generated.
In addition, we provide two lists. One list denotes the original position of each word in the mutated text, and the second list denotes the error applied to that word.
### Data Fields
- asr_transcript: The transcript of the audio processed by our Quartznet 15x5 model.
- original_text: The original text that was in the Europarl corpus. This text contains no artificial errors.
- mutated_text: This text contains the errors we injected.
- index_tags: This list denotes the original position of each word in `mutated_text`.
- mutated_tags: This list denotes the error applied to each word in `mutated_text`.
### Data Splits
- DEL: Sentences that have had random words removed.
- REP: Sentences that have had repetitions inserted.
- SUB: Sentences that have had words randomly substituted.
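As a minimal sketch (assuming the split names above, and that the tag fields are whitespace-delimited strings, per the repository metadata), pairing each word of `mutated_text` with its error tag might look like:
```
from datasets import load_dataset

# Load only the repetition split; DEL and SUB work the same way.
data = load_dataset("JDaniel423/running-records-errors-dataset", split="REP")

sample = data[0]
# index_tags and mutated_tags are stored as strings in the metadata,
# so we assume here that they split on whitespace into per-word tags.
words = sample["mutated_text"].split()
tags = sample["mutated_tags"].split()
for word, tag in zip(words, tags):
    print(f"{word}\t{tag}")
```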
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was generated with the guidance of Carl Ehrett.
|
JDaniel423/running-records-errors-dataset
|
[
"task_categories:token-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"education",
"doi:10.57967/hf/0600",
"region:us"
] |
2023-04-20T19:41:35+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["token-classification"], "tags": ["education"], "dataset_info": {"features": [{"name": "audio_path", "dtype": "string"}, {"name": "asr_transcript", "dtype": "string"}, {"name": "original_text", "dtype": "string"}, {"name": "mutated_text", "dtype": "string"}, {"name": "index_tags", "dtype": "string"}, {"name": "mutated_tags", "dtype": "string"}], "splits": [{"name": "DEL", "num_bytes": 208676326, "num_examples": 351867}, {"name": "SUB", "num_bytes": 243003228, "num_examples": 351867}, {"name": "REP", "num_bytes": 303304320, "num_examples": 351867}], "download_size": 0, "dataset_size": 754983874}}
|
2023-05-02T17:57:55+00:00
|
c6e12beb9cfe41a97a983e8c8e24e85e5216a3a1
|
# Dataset Card for "sft-static"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Dahoas/sft-static
|
[
"region:us"
] |
2023-04-20T20:13:52+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16012237, "num_examples": 20000}], "download_size": 9353471, "dataset_size": 16012237}}
|
2023-04-20T20:13:57+00:00
|
bbc04164823ff0d1b8e2727165d927cebe300a66
|
mskov/ESC50
|
[
"license:cc",
"region:us"
] |
2023-04-20T20:24:22+00:00
|
{"license": "cc", "dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "fold", "dtype": "int64"}, {"name": "target", "dtype": "int64"}, {"name": "category", "dtype": "string"}, {"name": "esc10", "dtype": "bool"}, {"name": "src_file", "dtype": "int64"}, {"name": "take", "dtype": "string"}, {"name": "audio", "dtype": "audio", "struct": [{"name": "bytes", "dtype": "binary"}, {"name": "path", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 441114557, "num_examples": 1000}, {"name": "test", "num_bytes": 441115029, "num_examples": 1000}], "download_size": 773323386, "dataset_size": 882229586}}
|
2023-08-31T20:54:49+00:00
|
|
dc516f2ec955d44a8075684a778a093afc3103bd
|
This data accompanies the WebUI project (https://dl.acm.org/doi/abs/10.1145/3544548.3581158)
For more information, check out the project website: https://uimodeling.github.io/
To download this dataset, first install the huggingface-hub package:
```
pip install huggingface-hub
```
Then use `snapshot_download`:
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="biglab/webui-val", repo_type="dataset")
```
IMPORTANT
* Before downloading and using, please review the copyright info here: https://github.com/js0nwu/webui/blob/main/COPYRIGHT.txt
* Not all data samples have the same number of files (e.g., same number of device screenshots) because the crawler used a timeout during collection
* The dataset released on HuggingFace was filtered using a list of explicit words and therefore contains fewer samples than the experiments originally used in the paper. The raw dataset is currently available (https://drive.google.com/drive/folders/1hcO75W2FjsZoibsj2TIbKz67hy9JkOBz?usp=share_link) but may be removed in the future.
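Since samples can differ in how many files they contain (see the notes above), a minimal sketch for checking what a snapshot actually downloaded:
```
from huggingface_hub import snapshot_download
import os

# snapshot_download returns the local path of the downloaded snapshot.
local_dir = snapshot_download(repo_id="biglab/webui-val", repo_type="dataset")
print(len(os.listdir(local_dir)), "top-level entries in", local_dir)
```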
|
biglab/webui-val
|
[
"license:other",
"region:us"
] |
2023-04-20T20:54:19+00:00
|
{"license": "other"}
|
2023-05-05T01:24:41+00:00
|
e6bd38bbdcf0458d8e411b5a0d33c23330ab40a1
|
This data accompanies the WebUI project (https://dl.acm.org/doi/abs/10.1145/3544548.3581158)
For more information, check out the project website: https://uimodeling.github.io/
To download this dataset, first install the huggingface-hub package:
```
pip install huggingface-hub
```
Then use `snapshot_download`:
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="biglab/webui-test", repo_type="dataset")
```
IMPORTANT
* Before downloading and using, please review the copyright info here: https://github.com/js0nwu/webui/blob/main/COPYRIGHT.txt
* Not all data samples have the same number of files (e.g., same number of device screenshots) because the crawler used a timeout during collection
* The dataset released on HuggingFace was filtered using a list of explicit words and therefore contains fewer samples than the experiments originally used in the paper. The raw dataset is currently available (https://drive.google.com/drive/folders/1hcO75W2FjsZoibsj2TIbKz67hy9JkOBz?usp=share_link) but may be removed in the future.
|
biglab/webui-test
|
[
"license:other",
"region:us"
] |
2023-04-20T20:58:17+00:00
|
{"license": "other"}
|
2023-05-05T01:25:22+00:00
|
d709322b30bc6ffe2b6705604cd4c12cc4a998ae
|
# AutoTrain Dataset for project: classificacion
## Dataset Description
This dataset has been automatically processed by AutoTrain for project classificacion.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<511x511 RGBA PIL image>",
"target": 4
},
{
"image": "<511x511 RGBA PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Ak', 'Ala_Idris', 'Buzgulu', 'Dimnit', 'Nazli'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 400 |
| valid | 100 |
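A minimal sketch for loading the splits above (assuming the repository is accessible; AutoTrain data repositories may require an access token):
```
from datasets import load_dataset

# Split names follow the table above (train / valid).
data = load_dataset("juanArevalo/autotrain-data-classificacion")
sample = data["train"][0]
print(sample["target"])  # integer class id, per the fields listed above
```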
|
juanArevalo/autotrain-data-classificacion
|
[
"task_categories:image-classification",
"region:us"
] |
2023-04-20T21:12:42+00:00
|
{"task_categories": ["image-classification"]}
|
2023-04-20T21:22:06+00:00
|
ef2fc3777a5619b4e51a809617ea050a50009d66
|
frostymelonade/SemEval2017-task7-pun-detection
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"region:us"
] |
2023-04-20T21:40:30+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
|
2023-04-25T15:05:26+00:00
|
|
39c63450e1c632433d9f3c3ed22674c1fcbb3e9a
|
kthalas/lora_models
|
[
"license:unknown",
"region:us"
] |
2023-04-20T21:55:14+00:00
|
{"license": "unknown"}
|
2023-04-20T22:02:28+00:00
|
|
61fe8727c5cab1f99eff0391e11d6b157146c6ec
|
gauss314/bitcoin_daily
|
[
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"license:gpl-3.0",
"bitcoin",
"cryptocurrencies",
"crypto",
"region:us"
] |
2023-04-20T21:56:11+00:00
|
{"license": "gpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["tabular-regression", "tabular-classification"], "tags": ["bitcoin", "cryptocurrencies", "crypto"]}
|
2023-07-30T01:20:32+00:00
|
|
8ccc72e69e65f40c70e117d8b3c08306bb788b60
|
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-news)
- **Repository:** [github](https://github.com/masakhane-io/masakhane-news)
- **Paper:** [paper]()
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or [email protected]
### Dataset Summary
MasakhaNEWS is the largest publicly available dataset for news topic classification in 16 languages widely spoken in Africa.
The train/validation/test sets are available for all the 16 languages.
### Supported Tasks and Leaderboards
[More Information Needed]
- `news topic classification`: categorize news articles into news topics, e.g. business, sports, or politics.
### Languages
There are 16 languages available :
- Amharic (amh)
- English (eng)
- French (fra)
- Hausa (hau)
- Igbo (ibo)
- Lingala (lin)
- Luganda (lug)
- Oromo (orm)
- Nigerian Pidgin (pcm)
- Rundi (run)
- chiShona (sna)
- Somali (som)
- Kiswahili (swa)
- Tigrinya (tir)
- isiXhosa (xho)
- Yorùbá (yor)
## Dataset Structure
### Data Instances
The examples look like this for Yorùbá:
```
from datasets import load_dataset
data = load_dataset('masakhane/masakhanews', 'yor')
# Please, specify the language code
# A data point example is below:
{
'label': 0,
'headline': "'The barriers to entry have gone - go for it now'",
'text': "j Lalvani, CEO of Vitabiotics and former Dragons' Den star, shares his business advice for our CEO Secrets series.\nProduced, filmed and edited by Dougal Shaw",
'headline_text': "'The barriers to entry have gone - go for it now' j Lalvani, CEO of Vitabiotics and former Dragons' Den star, shares his business advice for our CEO Secrets series.\nProduced, filmed and edited by Dougal Shaw",
'url': '/news/business-61880859'
}
```
### Data Fields
- `label`: news topic id
- `headline`: news title/headline
- `text`: news body
- `headline_text`: concatenation of headline and news body
- `url`: website address
The news topics correspond to this list:
```
"business", "entertainment", "health", "politics", "religion", "sports", "technology"
```
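A minimal sketch (assuming `label` is a ClassLabel feature) for mapping the integer ids back to these topic names:
```
from datasets import load_dataset

data = load_dataset('masakhane/masakhanews', 'yor')

# If label is a ClassLabel feature, its names attribute recovers
# the topic strings listed above from the integer ids.
label_names = data['train'].features['label'].names
example = data['train'][0]
print(label_names[example['label']], '->', example['headline'])
```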
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | validation | test |
|-----------------|------:|-----------:|-----:|
| Amharic | 1311 | 188 | 376 |
| English | 3309 | 472 | 948 |
| French | 1476 | 211 | 422 |
| Hausa | 2219 | 317 | 637 |
| Igbo | 1356 | 194 | 390 |
| Lingala | 608 | 87 | 175 |
| Luganda | 771 | 110 | 223 |
| Oromo | 1015 | 145 | 292 |
| Nigerian-Pidgin | 1060 | 152 | 305 |
| Rundi | 1117 | 159 | 322 |
| chiShona | 1288 | 185 | 369 |
| Somali | 1021 | 148 | 294 |
| Kiswahili | 1658 | 237 | 476 |
| Tigrinya | 947 | 137 | 272 |
| isiXhosa | 1032 | 147 | 297 |
| Yoruba | 1433 | 206 | 411 |
## Dataset Creation
### Curation Rationale
The dataset was introduced to provide new resources for 16 African languages that are under-served in natural language processing.
[More Information Needed]
### Source Data
The source of the data is the news domain; details can be found here ****
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
Details can be found here **
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/)
### Personal and Sensitive Information
The data is sourced from newspaper sources and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The licensing status of the data is CC BY-NC 4.0.
### Citation Information
The [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset:
```
@article{Adelani2023MasakhaNEWS,
title={MasakhaNEWS: News Topic Classification for African languages},
author={David Ifeoluwa Adelani and Marek Masiak and Israel Abebe Azime and Jesujoba Oluwadara Alabi and Atnafu Lambebo Tonja and Christine Mwase and Odunayo Ogundepo and Bonaventure F. P. Dossou and Akintunde Oladipo and Doreen Nixdorf and Chris Chinenye Emezue and Sana Sabah al-azzawi and Blessing K. Sibanda and Davis David and Lolwethu Ndolela and Jonathan Mukiibi and Tunde Oluwaseyi Ajayi and Tatiana Moteu Ngoli and Brian Odhiambo and Abraham Toluwase Owodunni and Nnaemeka C. Obiefuna and Shamsuddeen Hassan Muhammad and Saheed Salahudeen Abdullahi and Mesay Gemeda Yigezu and Tajuddeen Gwadabe and Idris Abdulmumin and Mahlet Taye Bame and Oluwabusayo Olufunke Awoyomi and Iyanuoluwa Shode and Tolulope Anu Adelani and Habiba Abdulganiy Kailani and Abdul-Hakeem Omotayo and Adetola Adeeko and Afolabi Abeeb and Anuoluwapo Aremu and Olanrewaju Samuel and Clemencia Siro and Wangari Kimotho and Onyekachi Raphael Ogbu and Chinedu E. Mbonu and Chiamaka I. Chukwuneke and Samuel Fanijo and Jessica Ojo and Oyinkansola F. Awosan and Tadesse Kebede Guge and Sakayo Toadoum Sari and Pamela Nyatsine and Freedmore Sidume and Oreen Yousuf and Mardiyyah Oduwole and Ussen Kimanuka and Kanda Patrick Tshinu and Thina Diko and Siyanda Nxakama and Abdulmejid Tuni Johar and Sinodos Gebre and Muhidin Mohamed and Shafie Abdi Mohamed and Fuad Mire Hassan and Moges Ahmed Mehamed and Evrard Ngabire and and Pontus Stenetorp},
journal={ArXiv},
year={2023},
volume={}
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
|
masakhane/masakhanews
|
[
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:am",
"language:en",
"language:fr",
"language:ha",
"language:ig",
"language:ln",
"language:lg",
"language:om",
"language:pcm",
"language:rn",
"language:sn",
"language:so",
"language:sw",
"language:ti",
"language:xh",
"language:yo",
"license:afl-3.0",
"news-topic",
"masakhanews",
"masakhane",
"region:us"
] |
2023-04-20T22:06:34+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["am", "en", "fr", "ha", "ig", "ln", "lg", "om", "pcm", "rn", "sn", "so", "sw", "ti", "xh", "yo"], "license": ["afl-3.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "masakhanews", "tags": ["news-topic", "masakhanews", "masakhane"]}
|
2023-05-25T21:27:40+00:00
|
25f3e07d15c315662dcbab8de46e899caec736cd
|
# Dataset Card for "sanskrit-sandhi-split-sighum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sanskrit-sandhi-split-sighum
|
[
"region:us"
] |
2023-04-20T22:31:58+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10973642, "num_examples": 99889}, {"name": "validation", "num_bytes": 470141, "num_examples": 4200}, {"name": "test", "num_bytes": 470141, "num_examples": 4200}, {"name": "test_500", "num_bytes": 58711, "num_examples": 500}, {"name": "validation_500", "num_bytes": 58711, "num_examples": 500}], "download_size": 7463353, "dataset_size": 12031346}}
|
2023-04-20T22:32:07+00:00
|
7d09275e4757864a5038e177c3dad4644dfc7439
|
# Dataset Card for "sanskrit-sandhi-split-hackathon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/sanskrit-sandhi-split-hackathon
|
[
"region:us"
] |
2023-04-20T22:32:46+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9350944, "num_examples": 89323}, {"name": "validation", "num_bytes": 1164083, "num_examples": 10235}, {"name": "test", "num_bytes": 1169683, "num_examples": 9965}, {"name": "test_500", "num_bytes": 62539, "num_examples": 500}, {"name": "validation_500", "num_bytes": 53738, "num_examples": 500}], "download_size": 7114072, "dataset_size": 11800987}}
|
2023-04-20T22:32:57+00:00
|
4120f66a2e642f0735a7912bed0e16b8dc950ca9
|
https://github.com/oracle-devrel/redbull-analytics-hol
|
jasperan/redbull-analytics-hol
|
[
"license:gpl-3.0",
"region:us"
] |
2023-04-20T22:50:14+00:00
|
{"license": "gpl-3.0"}
|
2023-04-21T02:38:24+00:00
|
0160a909cef7705dddde24f94d0c9950e3d10910
|
# Dataset Card for "cancer_test_data2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SHS/cancer_test_data2
|
[
"region:us"
] |
2023-04-21T00:30:18+00:00
|
{"dataset_info": {"features": [{"name": "passage", "dtype": "string"}, {"name": "passage_token", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46724, "num_examples": 1}], "download_size": 0, "dataset_size": 46724}}
|
2023-04-21T04:44:55+00:00
|
1387785de961d5ea7efecc7b55795c970a225867
|
thewall/Aptani2Param
|
[
"license:openrail",
"region:us"
] |
2023-04-21T00:37:36+00:00
|
{"license": "openrail"}
|
2023-04-27T05:30:07+00:00
|
|
cd55f8475f3a7e80e0e53ed3601ec6371ffa23ff
|
### Dataset Description
This dataset, compiled by Brendan Dolan-Gavitt, contains ~100 thousand `c++` functions paired with GPT-3.5-turbo-generated summaries of each function's purpose.
An example of Brendan's original prompt and GPT-3.5's summary may be found below.
```
int gg_set_focus_pos(gg_widget_t *widget, int x, int y) {
return 1;
}
Q. What language is the above code written in?
A. C/C++.
Q. What is the purpose of the above code?
A.
This code defines a function called `gg_set_focus_pos` that takes a pointer to a `gg_widget_t` structure
and two integer arguments `x` and `y`. The function returns an integer value of 1. The purpose of this
function is not clear from this code snippet alone, as it does not contain any implementation details.
However, based on the function name and arguments, it is likely that this function sets the focus
position of a graphical user interface (GUI) widget to the specified `x` and `y` coordinates.
Tags: GUI, widget, focus, position, function.
```
The link to the dataset's announcement may be found here: https://twitter.com/moyix/status/1644355889602654210
|
casey-martin/oa_cpp_annotate_gen
|
[
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-04-21T00:59:35+00:00
|
{"language": ["en"], "task_categories": ["question-answering", "text-classification"]}
|
2023-08-16T04:34:15+00:00
|
85881de88ec7c7b749e1767c068d26c0a4b10d59
|
# Dataset Card for "vsec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hieunguyen1053/vsec
|
[
"region:us"
] |
2023-04-21T01:15:02+00:00
|
{"dataset_info": {"features": [{"name": "choice", "dtype": "string"}, {"name": "reject", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3459144, "num_examples": 9341}], "download_size": 1962184, "dataset_size": 3459144}}
|
2023-04-21T01:15:07+00:00
|
92cb6b516cf4d15d9cbfe48c77adda52a5efa07f
|
# Dataset Card for "weibo_ner_knowledge_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
doushabao4766/weibo_ner_knowledge_V3
|
[
"region:us"
] |
2023-04-21T01:40:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-GPE.NAM", "1": "B-GPE.NOM", "2": "B-LOC.NAM", "3": "B-LOC.NOM", "4": "B-ORG.NAM", "5": "B-ORG.NOM", "6": "B-PER.NAM", "7": "B-PER.NOM", "8": "I-GPE.NAM", "9": "I-GPE.NOM", "10": "I-LOC.NAM", "11": "I-LOC.NOM", "12": "I-ORG.NAM", "13": "I-ORG.NOM", "14": "I-PER.NAM", "15": "I-PER.NOM", "16": "O"}}}}, {"name": "knowledge", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1539888, "num_examples": 1350}, {"name": "validation", "num_bytes": 275813, "num_examples": 270}, {"name": "test", "num_bytes": 276958, "num_examples": 270}], "download_size": 553090, "dataset_size": 2092659}}
|
2023-04-21T01:40:48+00:00
|
a263eb7e65e444f3d951fda38d1c1d7f79f5a43b
|
EdwardLin2023/MELD-Audio
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-04-21T01:47:11+00:00
|
{"license": "cc-by-4.0"}
|
2023-04-24T03:04:52+00:00
|
|
a0e9790fbc4ef2d5c312920adffa551c566c8ebf
|
# Dataset Card for "hagrid50k"
This dataset is designed to train a ControlNet with human hands. It includes hand landmarks detected by MediaPipe (for more information, refer to https://developers.google.com/mediapipe/solutions/vision/hand_landmarker).
The source image data is from the [HaGRID dataset](https://github.com/hukenovs/hagrid); we use a modified version from Kaggle (https://www.kaggle.com/datasets/innominate817/hagrid-classification-512p) to build this dataset. We randomly selected 50k samples, and each caption was generated with the BLIP-2 model.
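For illustration, hand landmarks like these can be produced with MediaPipe's hand detector. The snippet below is a minimal sketch, not the exact pipeline used to build this dataset; the input path and the black-canvas drawing convention are assumptions of the sketch.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

# Detect hand landmarks on a single image (static mode).
image = cv2.imread("sample.jpg")  # placeholder path
with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Draw the detected landmarks on a black canvas to form a
# conditioning image (black background is an assumption here).
canvas = np.zeros_like(image)
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(canvas, hand_landmarks, mp_hands.HAND_CONNECTIONS)
cv2.imwrite("conditioning.png", canvas)
```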
|
Vincent-luo/hagrid50k
|
[
"region:us"
] |
2023-04-21T01:49:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11037324721.0, "num_examples": 50000}], "download_size": 11046953921, "dataset_size": 11037324721.0}}
|
2023-05-03T07:30:11+00:00
|
d2df1affe00dbd7e46b5893870d3b03387a86703
|
# Dataset Card for "vedic-dependency-parsing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chronbmm/vedic-dependency-parsing
|
[
"region:us"
] |
2023-04-21T01:58:32+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "unsandhied", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4155754, "num_examples": 7178}, {"name": "validation", "num_bytes": 198491, "num_examples": 330}, {"name": "test", "num_bytes": 196230, "num_examples": 340}], "download_size": 2351596, "dataset_size": 4550475}}
|
2023-04-22T14:25:27+00:00
|
9d80a6a118b838d9defc3798d659a54a2ac2ff37
|
#### Overview
This dataset builds from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider).
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL queries answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent the hallucination of column and table names often seen when models are trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides table names, column names, and their data types. By providing just the CREATE TABLE statement as context, we can hopefully provide better grounding for models without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data.
#### Cleansing and Augmentation
Cleansing and data augmentation have been done on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) to parse queries from Spider and WikiSQL into their tables and columns, then inferred column data types based on the usage of `>` `<` operators as well as the use of `MIN()` `MAX()` `AVG()` `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct data type for a column; columns otherwise default to the VARCHAR type. These tables and columns are then used to generate CREATE TABLE statements using the inferred types. SQLGlot is used again to ensure both the SQL queries and CREATE TABLE statements parse without errors.
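For illustration, a minimal sketch of this extraction step with SQLGlot might look like the following; the query string is a made-up example, and the type-inference rule shown is a simplification of the one described above.

```python
import sqlglot
from sqlglot import exp

sql = "SELECT name FROM users WHERE age > 21"
tree = sqlglot.parse_one(sql)

# Collect the table and column names referenced by the query.
tables = {t.name for t in tree.find_all(exp.Table)}
columns = {c.name for c in tree.find_all(exp.Column)}

# Crude type inference: a column compared with > or < is numeric.
numeric = {
    node.left.name
    for node in tree.find_all(exp.GT, exp.LT)
    if isinstance(node.left, exp.Column)
}

print(tables, columns, numeric)  # {'users'} {'age', 'name'} {'age'}
```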
Some queries that do not have column names, e.g. `SELECT * FROM table`, have a default `Id` column added to the CREATE TABLE statement. Some other queries which use the generic `table` as the FROM table have instead been changed to a variation of `table_name_1` or some other number, which is also reflected in the CREATE TABLE statement.
#### TODO
- Further augment the data by converting queries and CREATE TABLE statements into different SQL dialects; this can be done with SQLGlot (see the sketch after this list). A reference to the dialect might also be added to the question.
- Support other informative contexts beyond CREATE TABLE
- Better parse datatypes to clean up things like numbers for column names and other numbers as strings
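A minimal sketch of that dialect conversion with SQLGlot (the dialect names here are arbitrary examples):

```python
import sqlglot

sql = "SELECT Status, AVG(Population) FROM city GROUP BY Status"
# Transpile the query from one SQL dialect to another.
print(sqlglot.transpile(sql, read="sqlite", write="duckdb")[0])
```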
If you have any edits you'd like to see in a version 2 of this dataset, let me know.
Random sample:
```json
[
  {
    "question": "Please show the themes of competitions with host cities having populations larger than 1000.",
    "context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
    "answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"
  },
  {
    "question": "Please show the different statuses of cities and the average population of cities with each status.",
    "context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
    "answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status"
  }
]
```
#### Citing this work
```TeX
@misc{b-mc2_2023_sql-create-context,
title = {sql-create-context Dataset},
author = {b-mc2},
year = {2023},
url = {https://huggingface.co/datasets/b-mc2/sql-create-context},
note = {This dataset was created by modifying data from the following sources: \cite{zhongSeq2SQL2017, yu2018spider}.},
}
```
#### Datasets used to create this dataset
```TeX
@article{zhongSeq2SQL2017,
author = {Victor Zhong and Caiming Xiong and Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017}
}
@article{yu2018spider,
title = {Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task},
author = {Yu, Tao and Zhang, Rui and Yang, Kai and Yasunaga, Michihiro and Wang, Dongxu and Li, Zifan and Ma, James and Li, Irene and Yao, Qingning and Roman, Shanelle and others},
journal = {arXiv preprint arXiv:1809.08887},
year = {2018}
}
```
|
b-mc2/sql-create-context
|
[
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"SQL",
"code",
"NLP",
"text-to-sql",
"context-sql",
"spider",
"wikisql",
"sqlglot",
"region:us"
] |
2023-04-21T02:23:24+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "question-answering", "table-question-answering"], "pretty_name": "sql-create-context", "tags": ["SQL", "code", "NLP", "text-to-sql", "context-sql", "spider", "wikisql", "sqlglot"]}
|
2024-01-25T22:01:25+00:00
|
810123ede9ad7a1783d6739472a33228668e46ac
|
# Dataset Card for "CS6301_sampledata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chetahy0711/CS6301_sampledata
|
[
"region:us"
] |
2023-04-21T03:23:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "expression", "dtype": "string"}, {"name": "img_width", "dtype": "int64"}, {"name": "img_height", "dtype": "int64"}, {"name": "x", "dtype": "float64"}, {"name": "y", "dtype": "float64"}, {"name": "w", "dtype": "float64"}, {"name": "h", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3093853.0, "num_examples": 20}], "download_size": 3094944, "dataset_size": 3093853.0}}
|
2023-04-21T03:30:55+00:00
|
b819c6d71b81279b32f295f6cab9c69d8afad108
|
Spico/ChCatExt
|
[
"language:zh",
"license:apache-2.0",
"finance",
"region:us"
] |
2023-04-21T03:38:05+00:00
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["finance"]}
|
2023-04-21T03:39:21+00:00
|
|
18359bb0c45b794133159d803ebceb3e8a1a961f
|
# Dataset Card for "cat-blip-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ethers/cat-blip-datasets
|
[
"region:us"
] |
2023-04-21T04:10:59+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10308254.0, "num_examples": 77}], "download_size": 10308600, "dataset_size": 10308254.0}}
|
2023-04-21T04:14:33+00:00
|
a44422bd2f132b600f389759956ccd8a596ab418
|
Chat-Error/Vietnamese_x_Alpaca
|
[
"license:mit",
"region:us"
] |
2023-04-21T04:27:15+00:00
|
{"license": "mit"}
|
2023-04-21T04:32:45+00:00
|
|
31af319480ba20837dae1f90bb4f767cce12a7dd
|
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
markkerzner/cool_new_dataset
|
[
"region:us"
] |
2023-04-21T04:27:49+00:00
|
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "ad", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3099, "num_examples": 5}], "download_size": 7195, "dataset_size": 3099}}
|
2023-04-21T04:27:52+00:00
|
be39210202f503cc70d4c90c66182c86875c5f61
|
# Dataset Card for "cat-waifu-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ethers/cat-waifu-datasets
|
[
"region:us"
] |
2023-04-21T04:29:41+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10317603.0, "num_examples": 77}], "download_size": 10311448, "dataset_size": 10317603.0}}
|
2023-04-21T04:30:00+00:00
|
366ed7a7d4d3fcf798608985b793360358421836
|
# Dataset Card for "db-simpsons-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JerryMo/db-simpsons-dataset
|
[
"region:us"
] |
2023-04-21T04:35:12+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5589264.0, "num_examples": 101}], "download_size": 5572816, "dataset_size": 5589264.0}}
|
2023-04-23T01:17:03+00:00
|
1e87bda379f788ee13251645a4dc97c586f4f981
|
minxdragon/ddpm-butterflies-128
|
[
"license:apache-2.0",
"region:us"
] |
2023-04-21T05:19:23+00:00
|
{"license": "apache-2.0"}
|
2023-04-21T07:35:16+00:00
|
|
e6891b2d7d79d0824d98e24c8e8c7df180a50ade
|
thewall/Simulation
|
[
"license:openrail",
"region:us"
] |
2023-04-21T05:40:24+00:00
|
{"license": "openrail", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "seq", "dtype": "string"}, {"name": "motif", "dtype": "string"}, {"name": "motif_ids", "dtype": "int32"}, {"name": "motif_mask", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 585000.0, "num_examples": 4500}, {"name": "test", "num_bytes": 65000.0, "num_examples": 500}], "download_size": 97808, "dataset_size": 650000.0}}
|
2023-06-16T00:38:49+00:00
|
|
31a0e52ceec8651ceee8a6b7633d5625b9d0ad4c
|
# Dataset Card for GPT4All-Community-Discussions
## Dataset Description
This dataset contains ethically gathered discussions from community members who shared their experiences with various open source discussion models using the GPT4All-ui tool. The dataset is open for any use, including commercial use, as long as proper citation is given to acknowledge the contributions of the community.
The GPT4All-ui tool allows users to have conversations with various open source AIs and export their discussions in JSON format. Every input and output is ranked or enhanced by the user, enabling them to correct any mistakes made by the AI and embed the correction into the database. The aim of this tool is to create an ethically sourced database made by the community for the community.
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset collects community-contributed conversations with open source models, exported in JSON format via the GPT4All-ui tool, together with the users' rankings and corrections.
### Supported Tasks and Leaderboards
This dataset currently has no supported tasks or leaderboards.
### Languages
This dataset contains discussions in English, French, German, Arabic, Italian, and Spanish.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
This dataset was created to provide a platform for the community to share their experiences with various open source discussion models using the GPT4All-ui tool.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from users who willingly shared their experiences using the GPT4All-ui tool.
#### Who are the source language producers?
The source language producers are the community members who shared their discussions using the GPT4All-ui tool.
### Annotations
#### Annotation process
No annotations were made for this dataset.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
This dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by the community members who shared their discussions using the GPT4All-ui tool.
### Licensing Information
This dataset is licensed under the Apache 2.0 license.
### Citation Information
[More Information Needed]
### Contributions
Contributions to this dataset are open to any user. Users can fork the tool, add their entry, and then do a pull request.
The GPT4All-ui tool can be found at: https://github.com/nomic-ai/gpt4all-ui
|
ParisNeo/LoLLMS-Open-Community-discussions
|
[
"task_categories:conversational",
"language:en",
"language:fr",
"language:de",
"language:ar",
"language:it",
"language:es",
"license:apache-2.0",
"region:us"
] |
2023-04-21T05:50:50+00:00
|
{"language": ["en", "fr", "de", "ar", "it", "es"], "license": "apache-2.0", "task_categories": ["conversational"]}
|
2023-07-06T19:50:14+00:00
|
27807c29777320bcab4735626142c589813849c6
|
# Dataset Card for "news-programmatic-labeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Phonecharger/news-programmatic-labeling
|
[
"region:us"
] |
2023-04-21T06:21:09+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Business", "1": "Sci/Tech", "2": "Sports", "3": "World"}}}}], "splits": [{"name": "train", "num_bytes": 407587.2, "num_examples": 1632}, {"name": "test", "num_bytes": 101896.8, "num_examples": 408}], "download_size": 347138, "dataset_size": 509484.0}}
|
2023-04-25T15:55:44+00:00
|
70480dafd8c5d9c1d7e1140144adfda7092f3cb6
|
# Dataset Card for "rmh_tokenized_512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
stoddur/rmh_tokenized_512
|
[
"region:us"
] |
2023-04-21T06:42:19+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 71378959604, "num_examples": 10704703}, {"name": "validation", "num_bytes": 3467780084, "num_examples": 520063}], "download_size": 11385412584, "dataset_size": 74846739688}}
|
2023-04-21T13:33:59+00:00
|
2d650ce18196fe7622d8595dca17bda1be89033c
|
# Dataset Card for "batch_indexing_machine_720f_768px"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Circularmachines/batch_indexing_machine_720f_768px
|
[
"region:us"
] |
2023-04-21T07:12:31+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 159145222.0, "num_examples": 720}], "download_size": 159156289, "dataset_size": 159145222.0}}
|
2023-04-21T07:12:51+00:00
|
63351e6b35e7f2a89b09a460cd5276bc1bfd40d0
|
# Dataset Card for "hpqa-fid-input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carnival13/hpqa-fid-input
|
[
"region:us"
] |
2023-04-21T07:39:27+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": {"sequence": "int32"}}, {"name": "attention_mask", "sequence": {"sequence": "int8"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1351280756, "num_examples": 90447}, {"name": "validation", "num_bytes": 110630700, "num_examples": 7405}], "download_size": 278016776, "dataset_size": 1461911456}}
|
2023-04-21T09:24:14+00:00
|
ea897072023ab83d3b81e85bd30c3b0835306ae1
|
# Dataset Card for "assin2_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/assin2_por_Latn_to_eng_Latn
|
[
"region:us"
] |
2023-04-21T07:49:48+00:00
|
{"dataset_info": {"features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT"}}}}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 802897, "num_examples": 6500}, {"name": "test", "num_bytes": 313661, "num_examples": 2448}, {"name": "validation", "num_bytes": 62531, "num_examples": 500}], "download_size": 0, "dataset_size": 1179089}}
|
2023-04-22T18:12:21+00:00
|
c09bc6706f7b2a749a4b2f17bd38b1154c8e4ca2
|
# Dataset Card for "rerelem_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/rerelem_por_Latn_to_eng_Latn
|
[
"region:us"
] |
2023-04-21T07:50:00+00:00
|
{"dataset_info": {"features": [{"name": "docid", "dtype": "string"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "same_text", "dtype": "bool"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1112298, "num_examples": 2226}, {"name": "validation", "num_bytes": 370560, "num_examples": 701}, {"name": "test", "num_bytes": 398794, "num_examples": 805}], "download_size": 0, "dataset_size": 1881652}}
|
2023-04-22T18:12:25+00:00
|
bc777fa56bf123a2d92cc27358618ec4c7abb544
|
# Dataset Card for "porsimplessent_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ruanchaves/porsimplessent_por_Latn_to_eng_Latn
|
[
"region:us"
] |
2023-04-21T07:50:13+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "int32"}, {"name": "production_id", "dtype": "int32"}, {"name": "level", "dtype": "string"}, {"name": "changed", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "sentence_text_from", "dtype": "string"}, {"name": "sentence_text_to", "dtype": "string"}, {"name": "__language__", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2268564, "num_examples": 4976}, {"name": "validation", "num_bytes": 645118, "num_examples": 1446}, {"name": "test", "num_bytes": 765737, "num_examples": 1697}], "download_size": 0, "dataset_size": 3679419}}
|
2023-04-22T18:12:35+00:00
|
34b21b90394bb97d7d9d5e43fd1e47c275a8008f
|
13GP/training
|
[
"license:mit",
"region:us"
] |
2023-04-21T07:56:16+00:00
|
{"license": "mit"}
|
2023-04-21T16:01:21+00:00
|
|
4b446e2c43118c0b7971cf22dc9683d333d0dea7
|
# Dataset Card for "donut_800"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Laskari-Naveen/donut_800
|
[
"region:us"
] |
2023-04-21T08:12:48+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30762624372, "num_examples": 4000}, {"name": "validation", "num_bytes": 7690462054, "num_examples": 1000}], "download_size": 1532193923, "dataset_size": 38453086426}}
|
2023-04-21T08:39:17+00:00
|
02d0debc00355386a8d142a0d490380d665a8bc7
|
# Parliament hearings ASR dataset preprocessed to truecased form
## Original dataset: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3126
```yaml
dataset_info:
  features:
    - name: id
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcription
      sequence: string
  splits:
    - name: train
      num_bytes: 53645064353.18
      num_examples: 191455
    - name: test
      num_bytes: 740331298.0
      num_examples: 2726
  download_size: 51507379112
  dataset_size: 54385395651.18
```
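A minimal loading sketch, assuming the standard `datasets` API; streaming avoids downloading the full ~50 GB of audio up front:

```python
from datasets import load_dataset

# Stream the train split instead of downloading everything.
ds = load_dataset("jkot/parliament_hearings_processed", split="train", streaming=True)
sample = next(iter(ds))
print(sample["id"], sample["transcription"][:80])
```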
|
jkot/parliament_hearings_processed
|
[
"region:us"
] |
2023-04-21T09:06:00+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51234859011.0, "num_examples": 191455}, {"name": "test", "num_bytes": 762989296.0, "num_examples": 2726}], "download_size": 51507735963, "dataset_size": 51997848307.0}}
|
2023-04-25T07:53:38+00:00
|
a74a635337698e9c446ccb6f0d6208a258b9e85b
|
ESLO audio dataset
configs:
- max30s
- max10s
- single_samples (default)
This script relies on the raw data transcript files and audio files.
Licence: Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
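A minimal loading sketch, assuming the config names above are exposed as `datasets` configurations; because the dataset is built by a script, `trust_remote_code=True` may be required:

```python
from datasets import load_dataset

# "single_samples" is the default config; "max30s" and "max10s" are alternatives.
eslo = load_dataset("BrunoHays/ESLO", "max30s", trust_remote_code=True)
print(eslo)  # inspect the available splits and features
```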
```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125,
0.02206421, 0.02304077]),
'path': None,
'sampling_rate': 16000},
'end_timestamp': 8.939,
'file': 'ESLO1_INTPERS_437',
'overlap': False,
'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
'entreprise ici ? exactement',
'speaker': 'spk1',
'start_timestamp': 0.954}
```
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46
Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1.
|
BrunoHays/ESLO
|
[
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-04-21T09:30:18+00:00
|
{"language": ["fr"], "license": "cc-by-nc-4.0", "task_categories": ["automatic-speech-recognition"]}
|
2023-10-03T08:22:11+00:00
|
602c0d6d965ea0b44f9edaed0f76a506209e61ba
|
# Dataset Card for "pubmed_long_tokenised"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reginaboateng/pubmed_long_tokenised
|
[
"region:us"
] |
2023-04-21T09:48:24+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 493127488, "num_examples": 119924}, {"name": "validation", "num_bytes": 27274896, "num_examples": 6633}, {"name": "test", "num_bytes": 27377696, "num_examples": 6658}], "download_size": 153946164, "dataset_size": 547780080}}
|
2023-04-21T09:49:16+00:00
|
5d04a61411c6b0a6d4ad1052ce5689a62e4d9cbb
|
fuwlstudioab/test
|
[
"license:cc-by-nc-sa-2.0",
"region:us"
] |
2023-04-21T10:19:45+00:00
|
{"license": "cc-by-nc-sa-2.0"}
|
2023-04-21T10:19:45+00:00
|
|
ebbea2b23eb80748f7efb7a688286fc03ee17f23
|
# Dataset Card for the Supreme Court of Israel Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
Lev Muchnik, [email protected]
### Dataset Summary
This dataset is a 2022 snapshot of the Supreme Court of Israel's public verdicts and decisions, supported by rich metadata. The 5.31 GB dataset comprises 751,194 documents and contains 2.68 GB of text overall.
It can be loaded with the `datasets` package:
```python
import datasets
data = datasets.load_dataset('LevMuchnik/SupremeCourtOfIsrael')
```
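Once loaded, the metadata fields documented below can be used for simple filtering. A minimal sketch building on the `data` object above (whether `Year` is stored as an int or a string is an assumption of this sketch, hence the `str()` cast):

```python
# Keep only the verdicts (פסק-דין) from 2020.
verdicts_2020 = data["train"].filter(
    lambda doc: str(doc["Year"]) == "2020" and doc["Type"] == "פסק-דין"
)
print(len(verdicts_2020))
```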
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The vast majority of the documents in the database are in Hebrew. A small number of documents are in English.
## Dataset Structure
The dataset is a json lines file with each line corresponding to a single document and containing document identification, text and metadata.
### Data Instances
[More Information Needed]
### Data Fields
The file contains the following fields:
- case_id - running number for cases
- download_time - when the document was downloaded (datetime)
- number_of_case_documents - number of documents in the current case
- file_name - full name of the document file, including relative path
- Id - document id
- CaseId - case id
- VerdictDt - Date of the document (datetime)
- CreatedDate - Date of when the document was inserted into the Supreme Court database
- CaseNum - case number
- CaseDesc - Unique case identifier. This id is used to reference cases within the Israeli legal system
- Pages - number of pages in the original document
- Path - relative path to the document
- CaseName - formal name of the case
- FileName - document file name, without path
- DocName - document file name, without path
- Year - document creation year
- TypeCode - enumeration of document types (see Type field below)
- Type - Document type
  - פסק-דין (verdict): 84339
  - החלטה (decision): 663099
  - צו ביניים (interim order): 22
  - פסקי דין באנגלית (verdicts in English): 310
  - צו על תנאי (order nisi): 200
  - צו (order): 2606
  - פד"י (Supreme Court Reports): 302
  - תקצירים (summaries): 316
- Technical - boolean indicator of whether the document is technical or not.
- CodeVolume - ?
- document_hash - 258-bit hash of the document name, used internally to uniquely identify the document
- text - text of the document. Multiple newlines and other document formatting elements (paragraphs, lists, etc.) are preserved.
- html_title - document title extracted from the HTML
- VerdictsDt - date of the verdict
- meta_case_nm - formal case name
- meta_sec_appeal - integer or None
- meta_side_ty - case type, list of strings
- meta_verdict_file_nm - name of the verdict file
- meta_judge - list of names of the cases judges
- meta_mador_nm - name of the court instance (e.g. בג"ץ)
- meta_side_nm - list of the case parties, list of strings
- meta_verdict_dt - date of the verdict
- meta_case_dt - date of the case
- meta_verdict_nbr -
- meta_ProgId - name of the software used to create the document (None, Word, etc.)
- meta_is_technical - whether the document is technical, {'false', 'true'}
- meta_judge_nm_last - last names of the judges (list of strings)
- meta_case_nbr - formal number of the case (same as CaseDesc)
- meta_verdict_ty - type of the decision (same as Type)
- meta_lawyer_nm - list of lawyer names, list of strings or None
- meta_judge_nm_first - list of judges' first names, list of strings
- meta_verdict_pages - number of pages in the verdict document
- meta_inyan_nm - case matter (e.g. בג"ץ)
- meta_court_nm - court (e.g. בית המשפט העליון )
### Data Splits
The entire dataset is qualified as 'train'.
## Dataset Creation
Created: 2023-04-22
### Curation Rationale
[More Information Needed]
### Source Data
https://supreme.court.gov.il/
#### Initial Data Collection and Normalization
The data was collected by crawling the Israeli Supreme Court website.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The data contained in this dataset is public.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Prof. Lev Muchnik, Hebrew University of Jerusalem
Dr. Inbal Yahav Shenberger, Tel Aviv University
### Licensing Information
[More Information Needed]
### Citation Information
Lev Muchnik, Inbal Yahav, Ariel Nevo, Avichay Chriqui, Tim Shektov, 2023, The Israeli Supreme Court Dataset
### Contributions
The authors would like to thank the Israeli Innovation Authority (grants #78560 and #78561) for their support in the creation of this dataset.
|
LevMuchnik/SupremeCourtOfIsrael
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:document-retrieval",
"size_categories:100K<n<1M",
"language:he",
"license:openrail",
"legal, verdicts, metadata, hebrew",
"region:us"
] |
2023-04-21T10:49:35+00:00
|
{"language": ["he"], "license": "openrail", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "fill-mask", "text-retrieval"], "task_ids": ["language-modeling", "masked-language-modeling", "document-retrieval"], "pretty_name": "Supreme Court Israel - Public Verdicts and Decisions", "tags": ["legal, verdicts, metadata, hebrew"]}
|
2023-04-27T05:01:49+00:00
|