sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3d11d04d7ff8f7a113796af53b255260c5950f52
|
# Dataset Card for "data_aug_full_less"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/data_aug_full_less
|
[
"region:us"
] |
2023-09-20T00:55:54+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1644352, "num_examples": 7787}, {"name": "test", "num_bytes": 141911, "num_examples": 678}], "download_size": 429178, "dataset_size": 1786263}}
|
2023-09-20T00:55:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_aug_full_less"
More Information needed
|
[
"# Dataset Card for \"data_aug_full_less\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_aug_full_less\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_aug_full_less\"\n\nMore Information needed"
] |
75c7ddcb699ce77eeb7b090df928191735787ea0
|
It's just the unlabeled train split of datasets/conceptual_captions but split into 4 pieces
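A minimal sketch of how such a 4-way split could be reproduced, assuming plain sharding with the `datasets` library (the exact method used is not documented here):

```python
from datasets import load_dataset

# Load the unlabeled train split of conceptual_captions, then cut it into 4
# non-overlapping pieces of roughly equal size.
cc = load_dataset("conceptual_captions", "unlabeled", split="train")
pieces = [cc.shard(num_shards=4, index=i) for i in range(4)]
print([len(piece) for piece in pieces])
```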
|
ouasdg/cc3m-morepieces
|
[
"region:us"
] |
2023-09-20T01:02:57+00:00
|
{}
|
2023-09-20T01:12:13+00:00
|
[] |
[] |
TAGS
#region-us
|
It's just the unlabeled train split of datasets/conceptual_captions but split into 4 pieces
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
31fe956992ec1b3192e86586dcec5b52e8b2ad6c
|
# Dataset Card for "finesse_image_generation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sanctia/finesse_image_generation
|
[
"region:us"
] |
2023-09-20T01:08:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3681350830.818, "num_examples": 1389}], "download_size": 3170381883, "dataset_size": 3681350830.818}}
|
2023-09-21T11:01:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "finesse_image_generation"
More Information needed
|
[
"# Dataset Card for \"finesse_image_generation\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"finesse_image_generation\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"finesse_image_generation\"\n\nMore Information needed"
] |
ed2d9bce47921ce2fbccf3eae0c5039f1d42e799
|
# Dataset Card for "chata_rl_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
YaHi/chata_rl_dataset
|
[
"region:us"
] |
2023-09-20T01:12:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "output2", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output1", "dtype": "string"}, {"name": "preference", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2113014, "num_examples": 1549}, {"name": "test", "num_bytes": 151721, "num_examples": 104}], "download_size": 801956, "dataset_size": 2264735}}
|
2023-09-22T19:13:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chata_rl_dataset"
More Information needed
|
[
"# Dataset Card for \"chata_rl_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chata_rl_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chata_rl_dataset\"\n\nMore Information needed"
] |
a8d31addd96e72cacf049cd7bc63d52987ca90eb
|
# Dataset Card for N-BAIoT
*From https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot:* This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.
## Dataset Details
### Dataset Description
*From https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot:*
(a) Attribute being predicted:
-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.
-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.
(b) The study's results:
-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.
-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.
- **Curated by:** Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode)
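A minimal loading sketch, assuming the Hugging Face `datasets` library and the feature layout listed in this repository's metadata (a 115-value `features` vector plus `attack` and `device` class labels):

```python
from datasets import load_dataset

# Load the train/test splits of this repository.
nbaiot = load_dataset("codymlewis/nbaiot")

example = nbaiot["train"][0]
print(len(example["features"]))  # 115 statistical traffic features
print(example["attack"])         # class-label index (benign_traffic, mirai-*, ...)
print(example["device"])         # class-label index of the source IoT device
```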
### Dataset Sources
- **Repository:** https://archive.ics.uci.edu/dataset/442/detection+of+iot+botnet+attacks+n+baiot
- **Paper:** https://arxiv.org/abs/1805.03409
## Citation
**BibTeX:**
@misc{misc_detection_of_iot_botnet_attacks_n_baiot_442,
author = {Meidan,Yair, Bohadana,Michael, Mathov,Yael, Mirsky,Yisroel, Breitenbacher,Dominik, ,Asaf, and Shabtai,Asaf},
title = {{N-BaIoT Dataset to Detect IoT Botnet Attacks}},
year = {2018},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: https://doi.org/10.24432/C5RC8J}
}
**APA:**
Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, ,Asaf, and Shabtai, Asaf. (2018). N-BaIoT Dataset to Detect IoT Botnet Attacks. UCI Machine Learning Repository. https://doi.org/10.24432/C5RC8J.
## Glossary [optional]
- **IoT**: Internet of Things
- **Botnet**: A collection of devices that are maliciously controlled via malware
|
codymlewis/nbaiot
|
[
"license:cc-by-4.0",
"arxiv:1805.03409",
"region:us"
] |
2023-09-20T01:24:15+00:00
|
{"license": "cc-by-4.0", "pretty_name": "nbaiot", "dataset_info": {"features": [{"name": "features", "sequence": "float32", "length": 115}, {"name": "attack", "dtype": {"class_label": {"names": {"0": "benign_traffic", "1": "combo", "2": "junk", "3": "mirai-ack", "4": "mirai-scan", "5": "mirai-syn", "6": "mirai-udp", "7": "mirai-udpplain", "8": "scan", "9": "tcp", "10": "udp"}}}}, {"name": "device", "dtype": {"class_label": {"names": {"0": "Danmini_Doorbell", "1": "Ecobee_Thermostat", "2": "Ennio_Doorbell", "3": "Philips_B120N10_Baby_Monitor", "4": "Provision_PT_737E_Security_Camera", "5": "Provision_PT_838_Security_Camera", "6": "Samsung_SNH_1011_N_Webcam", "7": "SimpleHome_XCS7_1002_WHT_Security_Camera", "8": "SimpleHome_XCS7_1003_WHT_Security_Camera"}}}}], "splits": [{"name": "train", "num_bytes": 2857231888, "num_examples": 6002588}, {"name": "test", "num_bytes": 504568568, "num_examples": 1060018}], "download_size": 1772922927, "dataset_size": 3361800456}}
|
2023-10-13T03:02:56+00:00
|
[
"1805.03409"
] |
[] |
TAGS
#license-cc-by-4.0 #arxiv-1805.03409 #region-us
|
# Dataset Card for N-BAIoT
*From URL This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.
## Dataset Details
### Dataset Description
*From URL
(a) Attribute being predicted:
-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.
-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.
(b) The study's results:
-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.
-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.
- Curated by: Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf
- License: Creative Commons Attribution 4.0 International (CC BY 4.0)
### Dataset Sources
- Repository: URL
- Paper: URL
BibTeX:
@misc{misc_detection_of_iot_botnet_attacks_n_baiot_442,
author = {Meidan,Yair, Bohadana,Michael, Mathov,Yael, Mirsky,Yisroel, Breitenbacher,Dominik, ,Asaf, and Shabtai,Asaf},
title = {{N-BaIoT Dataset to Detect IoT Botnet Attacks}},
year = {2018},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: URL
}
APA:
Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, ,Asaf, and Shabtai, Asaf. (2018). N-BaIoT Dataset to Detect IoT Botnet Attacks. UCI Machine Learning Repository. URL
## Glossary [optional]
- IoT: Internet of Things
- Botnet: A collection of devices that are maliciously controlled via malware
|
[
"# Dataset Card for N-BAIoT\n\n*From URL This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.",
"## Dataset Details",
"### Dataset Description\n\n*From URL\n(a) Attribute being predicted:\n-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.\n-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.\n\n\n(b) The study's results:\n-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.\n-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.\n\n\n\n- Curated by: Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf\n- License: Creative Commons Attribution 4.0 International (CC BY 4.0)",
"### Dataset Sources\n\n- Repository: URL\n- Paper: URL\n\nBibTeX:\n\n@misc{misc_detection_of_iot_botnet_attacks_n_baiot_442,\n author = {Meidan,Yair, Bohadana,Michael, Mathov,Yael, Mirsky,Yisroel, Breitenbacher,Dominik, ,Asaf, and Shabtai,Asaf},\n title = {{N-BaIoT Dataset to Detect IoT Botnet Attacks}},\n year = {2018},\n howpublished = {UCI Machine Learning Repository},\n note = {{DOI}: URL\n}\n\nAPA:\n\nMeidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, ,Asaf, and Shabtai, Asaf. (2018). N-BaIoT Dataset to Detect IoT Botnet Attacks. UCI Machine Learning Repository. URL",
"## Glossary [optional]\n\n- IoT: Internet of Things\n- Botnet: A collection of devices that are maliciously controlled via malware"
] |
[
"TAGS\n#license-cc-by-4.0 #arxiv-1805.03409 #region-us \n",
"# Dataset Card for N-BAIoT\n\n*From URL This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.",
"## Dataset Details",
"### Dataset Description\n\n*From URL\n(a) Attribute being predicted:\n-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.\n-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.\n\n\n(b) The study's results:\n-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.\n-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.\n\n\n\n- Curated by: Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf\n- License: Creative Commons Attribution 4.0 International (CC BY 4.0)",
"### Dataset Sources\n\n- Repository: URL\n- Paper: URL\n\nBibTeX:\n\n@misc{misc_detection_of_iot_botnet_attacks_n_baiot_442,\n author = {Meidan,Yair, Bohadana,Michael, Mathov,Yael, Mirsky,Yisroel, Breitenbacher,Dominik, ,Asaf, and Shabtai,Asaf},\n title = {{N-BaIoT Dataset to Detect IoT Botnet Attacks}},\n year = {2018},\n howpublished = {UCI Machine Learning Repository},\n note = {{DOI}: URL\n}\n\nAPA:\n\nMeidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, ,Asaf, and Shabtai, Asaf. (2018). N-BaIoT Dataset to Detect IoT Botnet Attacks. UCI Machine Learning Repository. URL",
"## Glossary [optional]\n\n- IoT: Internet of Things\n- Botnet: A collection of devices that are maliciously controlled via malware"
] |
[
24,
63,
4,
300,
222,
32
] |
[
"passage: TAGS\n#license-cc-by-4.0 #arxiv-1805.03409 #region-us \n# Dataset Card for N-BAIoT\n\n*From URL This dataset addresses the lack of public botnet datasets, especially for the IoT. It suggests *real* traffic data, gathered from 9 commercial IoT devices authentically infected by Mirai and BASHLITE.## Dataset Details### Dataset Description\n\n*From URL\n(a) Attribute being predicted:\n-- Originally we aimed at distinguishing between benign and Malicious traffic data by means of anomaly detection techniques.\n-- However, as the malicious data can be divided into 10 attacks carried by 2 botnets, the dataset can also be used for multi-class classification: 10 classes of attacks, plus 1 class of 'benign'.\n\n\n(b) The study's results:\n-- For each of the 9 IoT devices we trained and optimized a deep autoencoder on 2/3 of its benign data (i.e., the training set of each device). This was done to capture normal network traffic patterns.\n-- The test data of each device comprised of the remaining 1/3 of benign data plus all the malicious data. On each test set we applied the respective trained (deep) autoencoder as an anomaly detector. The detection of anomalies (i.e., the cyberattacks launched from each of the above IoT devices) concluded with 100% TPR.\n\n\n\n- Curated by: Meidan, Yair, Bohadana, Michael, Mathov, Yael, Mirsky, Yisroel, Breitenbacher, Dominik, , Asaf, and Shabtai, Asaf\n- License: Creative Commons Attribution 4.0 International (CC BY 4.0)"
] |
f9cf81f76c6324576eb863097a2f7100d9f0db23
|
# Dataset Card for "finesse_image_generation1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sanctia/finesse_image_generation1
|
[
"region:us"
] |
2023-09-20T01:55:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3681309176.818, "num_examples": 1389}], "download_size": 3170376725, "dataset_size": 3681309176.818}}
|
2023-09-20T01:57:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "finesse_image_generation1"
More Information needed
|
[
"# Dataset Card for \"finesse_image_generation1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"finesse_image_generation1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"finesse_image_generation1\"\n\nMore Information needed"
] |
8e4cf9fd0543fcc35dc1085458485be50ee15ce0
|
A filtered subset of C4-en containing 3,584,358 pages that are at least 16,000 characters long, useful for training models with longer context windows.
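A sketch of how such a subset could be produced, assuming a plain character-length filter over the `text` field of C4-en (the actual filtering script is not included here):

```python
from datasets import load_dataset

# Stream C4-en and keep only pages with at least 16,000 characters.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
long_pages = c4.filter(lambda example: len(example["text"]) >= 16_000)

# Peek at a few retained pages.
for page in long_pages.take(3):
    print(len(page["text"]))
```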
|
vllg/loong_c4
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:odc-by",
"region:us"
] |
2023-09-20T01:58:55+00:00
|
{"language": ["en"], "license": "odc-by", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"]}
|
2023-09-20T04:20:37+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-English #license-odc-by #region-us
|
A filtered subset of C4-en containing 3,584,358 pages that are at least 16,000 characters long, useful for training models with longer context windows.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #license-odc-by #region-us \n"
] |
[
41
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #license-odc-by #region-us \n"
] |
8d756cba3714ff340880c63404659fff138493da
|
A filtered subset of C4-en containing 835,400 pages that are at least 32,000 characters long, useful for training models with longer context windows.
|
vllg/looong_c4
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:odc-by",
"region:us"
] |
2023-09-20T01:59:08+00:00
|
{"language": ["en"], "license": "odc-by", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]}
|
2023-09-20T04:20:24+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #region-us
|
A filtered subset of C4-en containing 835,400 pages that are at least 32,000 characters long, useful for training models with longer context windows.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #region-us \n"
] |
[
41
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-odc-by #region-us \n"
] |
46a85668a777dca657299507bd2ed6f07f87cc26
|
# Dataset Card for "three_styles_prompted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/three_styles_prompted
|
[
"region:us"
] |
2023-09-20T02:02:48+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59921589.0, "num_examples": 2100}, {"name": "val", "num_bytes": 25922766.5, "num_examples": 900}], "download_size": 84801147, "dataset_size": 85844355.5}}
|
2023-09-20T02:08:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three_styles_prompted"
More Information needed
|
[
"# Dataset Card for \"three_styles_prompted\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three_styles_prompted\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted\"\n\nMore Information needed"
] |
df249319855dbb652956eca4b8cb893f4cccf515
|
# Dataset Card for "ask_theology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hxyue1/ask_theology
|
[
"region:us"
] |
2023-09-20T02:45:39+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "chapter", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "embeddings", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 71960834, "num_examples": 7534}], "download_size": 0, "dataset_size": 71960834}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-20T21:41:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ask_theology"
More Information needed
|
[
"# Dataset Card for \"ask_theology\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ask_theology\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ask_theology\"\n\nMore Information needed"
] |
eaa36f01007d6331005ded2e7bad073593a9583b
|
# Dataset Card for "pubmed_subset_c4_20p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_c4_20p
|
[
"region:us"
] |
2023-09-20T02:53:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2372378215.5730233, "num_examples": 1250378}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 909276640, "dataset_size": 2373402444.5730233}}
|
2023-09-20T02:56:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_c4_20p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_c4_20p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_c4_20p\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_c4_20p\"\n\nMore Information needed"
] |
26f289cff3e2e350798bd4e8832aba7bfd1153da
|
This dataset is curated from UniProt. The test set was created by selecting entire families of proteins to separate out at random.
The train/test split is approximately 80/20. All binding site and active site annotations were merged. All sequences longer than
1000 amino acids were split into non-overlapping chunks of 1000 residues or less.
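A minimal sketch of that chunking step, assuming plain fixed-length slicing of the sequence string (per-residue labels would need to be sliced the same way):

```python
def chunk_sequence(sequence: str, max_len: int = 1000) -> list[str]:
    """Split a protein sequence into non-overlapping chunks of at most max_len residues."""
    return [sequence[i:i + max_len] for i in range(0, len(sequence), max_len)]

# A 2,500-residue sequence becomes chunks of 1000, 1000, and 500 residues.
print([len(chunk) for chunk in chunk_sequence("M" * 2500)])
```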
|
AmelieSchreiber/600K_binding_sites
|
[
"license:mit",
"region:us"
] |
2023-09-20T02:58:32+00:00
|
{"license": "mit"}
|
2023-10-01T00:22:36+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
This dataset is curated from UniProt. The test set was created by selecting entire families of proteins to separate out at random.
The train/test split is approximately 80/20. All binding site and active site annotations were merged. All sequences longer than
1000 amino acids were split into non-overlapping chunks of 1000 residues or less.
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
10142f7c9b65ab9c0d71bd54712140a6f07f665c
|
# Dataset Card for NENA Speech Dataset 1.0 (test)
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations) -->
- [Building the Dataset](#building-the-dataset)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
<!-- - [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## ⚠️ This is a temporary repository that will be replaced by the end of 2023
## Dataset Summary
NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
NENA Speech consists of multimodal examples of speech of the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some will never due to recent loss of their final speakers.
## Dataset Description
- **Homepage**: https://crowdsource.nenadb.dev/
- **Point of Contact:** [Matthew Nazari](mailto:[email protected])
## Languages
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
Speakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning "our language". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).
NENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.
## How to Use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
```python
from datasets import load_dataset
nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:
1. **Unlabeled speech examples:** these contain audio of speech (`audio`) but no accompanying transcription (`transcription`) or translation (`translation`). This is useful for representation learning.
2. **Transcribed speech examples:** these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
3. **Transcribed and translated speech examples:** these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.
Make sure to filter for the kinds of examples you need for your task before using it.
```json
{
"transcription": "gu-mdìta.ˈ",
"translation": "in the town.",
"audio": {
"path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
"array": array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
"sampling_rate": 48000
},
"locale": "IRN",
"proficiency": "proficient as mom",
"age": "70's",
"crowdsourced": true,
"unlabeled": true,
"interrupted": true,
"client_id": "gwurt1g1ln" ,
"path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
}
```
### Data Fields
- `transcription (string)`: The transcription of what was spoken (e.g. `"beta"`)
- `translation (string)`: The translation of what was spoken in English (e.g. `"house"`)
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`. A short access example follows this list.
- `locale (string)`: The locale of the speaker
- `proficiency (string)`: The proficiency of the speaker
- `age (string)`: The age of the speaker (e.g. `"20's"`, `"50's"`, `"100+"`)
- `crowdsourced (bool)`: Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- `interrupted (bool)`: Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- `client_id (string)`: An id for which client (voice) made the recording
- `path (string)`: The path to the audio file
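A short access example, assuming the `urmi (christian)` split loaded as in the How to Use section:

```python
from datasets import load_dataset

nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# Index the example first, then the "audio" column, so only one clip is decoded.
sample = nena_speech[0]["audio"]
print(sample["sampling_rate"])
print(sample["array"].shape)

# Avoid nena_speech["audio"][0]: it would decode and resample every clip in the split first.
```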
### Data Splits
The examples have been subdivided into three portions:
1. **dev:** the validation split (10%)
2. **test:** the test split (10%)
3. **train:** the train split (80%)
All three splits contain only data that has been reviewed and deemed of high quality.
## Dataset Creation
<!-- ### Curation Rationale
[Needs More Information]
### Source Data
#### Language Documentation Resources
[Needs More Information]
#### Webscraping Facebook
[Needs More Information]
#### Crowdsourcing
[Needs More Information]
### Annotations
[Needs More Information] -->
### Building the Dataset
The NENA Speech dataset itself is built using `build.py`.
First, install the necessary requirements.
```
pip install -r requirements.txt
```
Next, build the dataset.
```
python build.py --build
```
Finally, push to the HuggingFace dataset repository.
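The exact push command is not given here; one possible approach, assuming the built files sit in the working directory and `huggingface_hub` is installed:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path=".",                          # the freshly built dataset files
    repo_id="mnazari/nena_speech_1_0_test",
    repo_type="dataset",
)
```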
## Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.
## Data Preprocessing
The dataset consists of three different kinds of examples (see [Data Instances](#data-instances)).
Make sure to filter for the kinds of examples you need for your task before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.
In most tasks, you will want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def filter_for_asr(example):
return example['transcription'] and not example['interrupted']
ds = ds.filter(filter_for_asr, desc="filter dataset")
```
Transcriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def prepare_dataset(batch):
chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
for char in chars_to_remove:
batch["transcription"] = batch["transcription"].replace(char, "")
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information] -->
## Additional Information
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
This work has not yet been published.
|
mnazari/nena_speech_1_0_test
|
[
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:Geoffrey Khan",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"language:aii",
"language:cld",
"language:huy",
"language:lsd",
"language:trg",
"language:aij",
"language:bhn",
"language:hrt",
"language:kqd",
"language:syn",
"license:cc0-1.0",
"region:us"
] |
2023-09-20T03:23:27+00:00
|
{"annotations_creators": ["crowdsourced", "Geoffrey Khan"], "language_creators": ["crowdsourced"], "language": ["aii", "cld", "huy", "lsd", "trg", "aij", "bhn", "hrt", "kqd", "syn"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "task_categories": ["automatic-speech-recognition", "text-to-speech", "translation"], "pretty_name": "NENA Speech Dataset 1.0 (test)"}
|
2023-10-27T07:58:56+00:00
|
[] |
[
"aii",
"cld",
"huy",
"lsd",
"trg",
"aij",
"bhn",
"hrt",
"kqd",
"syn"
] |
TAGS
#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-translation #annotations_creators-crowdsourced #annotations_creators-Geoffrey Khan #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #language-Assyrian Neo-Aramaic #language-Chaldean Neo-Aramaic #language-Hulaulá #language-Lishana Deni #language-Lishán Didán #language-Lishanid Noshan #language-Bohtan Neo-Aramaic #language-Hértevin #language-Koy Sanjaq Surat #language-Senaya #license-cc0-1.0 #region-us
|
# Dataset Card for NENA Speech Dataset 1.0 (test)
## Table of Contents
- Dataset Summary
- Dataset Description
- Languages
- How to Use
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Building the Dataset
- Personal and Sensitive Information
- Additional Information
- Licensing Information
- Citation Information
## This is a temporary repository that will be replaced by the end of 2023
## Dataset Summary
NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
NENA Speech consists of multimodal examples of speech of the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some will never due to recent loss of their final speakers.
## Dataset Description
- Homepage: URL
- Point of Contact: Matthew Nazari
## Languages
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
Speakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning "our language". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).
NENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.
## How to Use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
To find out more about loading and preparing audio datasets, head over to URL
## Dataset Structure
### Data Instances
The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:
1. Unlabeled speech examples: these contain audio of speech ('audio') but no accompanying transcription ('transcription') or translation ('translation'). This is useful for representation learning.
2. Transcribed speech examples: these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
3. Transcribed and translated speech examples: these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.
Make sure to filter for the kinds of examples you need for your task before using it.
### Data Fields
- 'transcription (string)': The transcription of what was spoken (e.g. '"beta"')
- 'translation (string)': The translation of what was spoken in English (e.g. '"house"')
- 'audio (dict)': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
- 'locale (string)': The locale of the speaker
- 'proficiency (string)': The proficiency of the speaker
- 'age (string)': The age of the speaker (e.g. '"20's"', '"50's"', '"100+"')
- 'crowdsourced (bool)': Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- 'interrupted (bool)': Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- 'client_id (string)': An id for which client (voice) made the recording
- 'path (string)': The path to the audio file
### Data Splits
The examples have been subdivided into three portions:
1. dev: the validation split (10%)
2. test: the test split (10%)
3. train: the train split (80%)
All three splits contain only data that has been reviewed and deemed of high quality.
## Dataset Creation
### Building the Dataset
The NENA Speech dataset itself is built using 'URL'.
First, install the necessary requirements.
Next, build the dataset.
Finally, push to the HuggingFace dataset repository.
## Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.
## Data Preprocessing
The dataset consists of three different kinds of examples (see Data Instances).
Make sure to filter for the kinds of examples you need for your task before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.
In most tasks, you will want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language).
Transcriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).
## Additional Information
### Licensing Information
Public Domain, CC-0.
This work has not yet been published.
|
[
"# Dataset Card for NENA Speech Dataset 1.0 (test)",
"## Table of Contents\n\n- Dataset Summary\n- Dataset Description\n- Languages\n- How to Use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n \n - Building the Dataset\n- Personal and Sensitive Information\n\n- Additional Information\n - Licensing Information\n - Citation Information",
"## ️ This is a temperary repository that will be replaced by end of 2023",
"## Dataset Summary\n\nNENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nNENA Speech consists of multimodal examples of speech of the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some will never due to recent loss of their final speakers.",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Matthew Nazari",
"## Languages\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nSpeakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning \"our language\". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).\n\nNENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.",
"## How to Use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, simply specify the corresponding language config name (e.g., \"urmi (christian)\" for the dialect of the Assyrian Christians of Urmi):\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"## Dataset Structure",
"### Data Instances\n\nThe NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:\n\n1. Unlabeled speech examples: these contain audio of speech ('audio') but no accompanying transcription ('transcription') or translation ('translation'). This is useful for representation learning.\n2. Transcribed speech examples: these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.\n3. Transcribed and translated speech examples: these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.\n\nMake sure to filter for the kinds of examples you need for your task before before using it.",
"### Data Fields\n\n- 'transcription (string)': The transcription of what was spoken (e.g. '\"beta\"')\n- 'translation (string)': The translation of what was spoken in English (e.g. '\"house\"')\n- 'audio (dict)': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the \"audio\" column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- 'locale (string)': The locale of the speaker\n- 'proficiency (string)': The proficiency of the speaker\n- 'age (string)': The age of the speaker (e.g. '\"20's\"', '\"50's\"', '\"100+\"')\n- 'crowdsourced (bool)': Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources\n- 'interrupted (bool)': Indicates whether the example was interrupted with the speaker making sound effects or switching into another language\n- 'client_id (string)': An id for which client (voice) made the recording\n- 'path (string)': The path to the audio file",
"### Data Splits\n\nThe examples have been subdivided into three portions:\n\n1. dev: the validation split (10%)\n3. test: the test split (10%)\n2. train: the train split (80%)\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Dataset Creation",
"### Building the Dataset\n\nThe NENA Speech dataset itself is built using 'URL'.\n\nFirst, install the necessary requirements.\n\n\n\nNext, build the dataset.\n\n\n\nFinally, push to the HuggingFace dataset repository.",
"## Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Data Preprocessing\n\nThe dataset consists of three different kinds of examples (see Data Instances).\n\nMake sure to filter for the kinds of examples you need for your task before before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.\n\nIn most tasks, you will want to filter out examples that are interrupted (e.g. by the speaker making sound effects, by the speaker switching into a another language).\n\n\n\nTranscriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).",
"## Additional Information",
"### Licensing Information\n\nPublic Domain, CC-0.\n\n\n\nThis work has not yet been published."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-translation #annotations_creators-crowdsourced #annotations_creators-Geoffrey Khan #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #language-Assyrian Neo-Aramaic #language-Chaldean Neo-Aramaic #language-Hulaulá #language-Lishana Deni #language-Lishán Didán #language-Lishanid Noshan #language-Bohtan Neo-Aramaic #language-Hértevin #language-Koy Sanjaq Surat #language-Senaya #license-cc0-1.0 #region-us \n",
"# Dataset Card for NENA Speech Dataset 1.0 (test)",
"## Table of Contents\n\n- Dataset Summary\n- Dataset Description\n- Languages\n- How to Use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n \n - Building the Dataset\n- Personal and Sensitive Information\n\n- Additional Information\n - Licensing Information\n - Citation Information",
"## ️ This is a temperary repository that will be replaced by end of 2023",
"## Dataset Summary\n\nNENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nNENA Speech consists of multimodal examples of speech of the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some will never due to recent loss of their final speakers.",
"## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Matthew Nazari",
"## Languages\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nSpeakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning \"our language\". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).\n\nNENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.",
"## How to Use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, simply specify the corresponding language config name (e.g., \"urmi (christian)\" for the dialect of the Assyrian Christians of Urmi):\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"## Dataset Structure",
"### Data Instances\n\nThe NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:\n\n1. Unlabeled speech examples: these contain audio of speech ('audio') but no accompanying transcription ('transcription') or translation ('translation'). This is useful for representation learning.\n2. Transcribed speech examples: these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.\n3. Transcribed and translated speech examples: these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.\n\nMake sure to filter for the kinds of examples you need for your task before before using it.",
"### Data Fields\n\n- 'transcription (string)': The transcription of what was spoken (e.g. '\"beta\"')\n- 'translation (string)': The translation of what was spoken in English (e.g. '\"house\"')\n- 'audio (dict)': A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the \"audio\" column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- 'locale (string)': The locale of the speaker\n- 'proficiency (string)': The proficiency of the speaker\n- 'age (string)': The age of the speaker (e.g. '\"20's\"', '\"50's\"', '\"100+\"')\n- 'crowdsourced (bool)': Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources\n- 'interrupted (bool)': Indicates whether the example was interrupted with the speaker making sound effects or switching into another language\n- 'client_id (string)': An id for which client (voice) made the recording\n- 'path (string)': The path to the audio file",
"### Data Splits\n\nThe examples have been subdivided into three portions:\n\n1. dev: the validation split (10%)\n3. test: the test split (10%)\n2. train: the train split (80%)\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Dataset Creation",
"### Building the Dataset\n\nThe NENA Speech dataset itself is built using 'URL'.\n\nFirst, install the necessary requirements.\n\n\n\nNext, build the dataset.\n\n\n\nFinally, push to the HuggingFace dataset repository.",
"## Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Data Preprocessing\n\nThe dataset consists of three different kinds of examples (see Data Instances).\n\nMake sure to filter for the kinds of examples you need for your task before before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.\n\nIn most tasks, you will want to filter out examples that are interrupted (e.g. by the speaker making sound effects, by the speaker switching into a another language).\n\n\n\nTranscriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).",
"## Additional Information",
"### Licensing Information\n\nPublic Domain, CC-0.\n\n\n\nThis work has not yet been published."
] |
[
214,
14,
71,
20,
135,
16,
217,
118,
6,
177,
405,
74,
5,
49,
41,
149,
5,
20
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-translation #annotations_creators-crowdsourced #annotations_creators-Geoffrey Khan #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #language-Assyrian Neo-Aramaic #language-Chaldean Neo-Aramaic #language-Hulaulá #language-Lishana Deni #language-Lishán Didán #language-Lishanid Noshan #language-Bohtan Neo-Aramaic #language-Hértevin #language-Koy Sanjaq Surat #language-Senaya #license-cc0-1.0 #region-us \n# Dataset Card for NENA Speech Dataset 1.0 (test)## Table of Contents\n\n- Dataset Summary\n- Dataset Description\n- Languages\n- How to Use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n \n - Building the Dataset\n- Personal and Sensitive Information\n\n- Additional Information\n - Licensing Information\n - Citation Information## ️ This is a temperary repository that will be replaced by end of 2023## Dataset Summary\n\nNENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nNENA Speech consists of multimodal examples of speech of the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some will never due to recent loss of their final speakers.## Dataset Description\n\n- Homepage: URL\n- Point of Contact: Matthew Nazari",
"passage: ## Languages\n\nThe NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.\n\nSpeakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning \"our language\". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).\n\nNENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.## How to Use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, simply specify the corresponding language config name (e.g., \"urmi (christian)\" for the dialect of the Assyrian Christians of Urmi):\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL## Dataset Structure### Data Instances\n\nThe NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:\n\n1. Unlabeled speech examples: these contain audio of speech ('audio') but no accompanying transcription ('transcription') or translation ('translation'). This is useful for representation learning.\n2. Transcribed speech examples: these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.\n3. Transcribed and translated speech examples: these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.\n\nMake sure to filter for the kinds of examples you need for your task before before using it."
] |
befb8baaa45e460bf54f856d2940f737a02b578c
|
# Dataset Card for "tang-poems-with-keywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chenqile09/tang-poems-with-keywords
|
[
"region:us"
] |
2023-09-20T04:32:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "paragraph", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2464318, "num_examples": 5274}, {"name": "train", "num_bytes": 16842216, "num_examples": 36000}], "download_size": 12757028, "dataset_size": 19306534}}
|
2023-09-28T01:17:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tang-poems-with-keywords"
More Information needed
|
[
"# Dataset Card for \"tang-poems-with-keywords\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tang-poems-with-keywords\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tang-poems-with-keywords\"\n\nMore Information needed"
] |
fdcc899d8aed86ee15e626399f5288dda5cc53aa
|
# Dataset Card for "enhanced_scenes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/enhanced_scenes
|
[
"region:us"
] |
2023-09-20T05:00:57+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3222820, "num_examples": 10000}], "download_size": 481342, "dataset_size": 3222820}}
|
2023-09-20T05:00:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "enhanced_scenes"
More Information needed
|
[
"# Dataset Card for \"enhanced_scenes\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"enhanced_scenes\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"enhanced_scenes\"\n\nMore Information needed"
] |
c0903d3ea19408b9a7f860bb08ed85c2c47f0d2f
|
# Dataset Card for "govreport-summarization-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shossain/govreport-summarization-tokenized
|
[
"region:us"
] |
2023-09-20T05:19:21+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 69604, "num_examples": 973}], "download_size": 22673, "dataset_size": 69604}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T06:04:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "govreport-summarization-tokenized"
More Information needed
|
[
"# Dataset Card for \"govreport-summarization-tokenized\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"govreport-summarization-tokenized\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"govreport-summarization-tokenized\"\n\nMore Information needed"
] |
7cac38d743b0483cd5e14883097b2597e6720354
|
## LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
It is collected from 210K unique IP addresses in the wild on the [Vicuna demo and Chatbot Arena website](https://chat.lmsys.org/) from April to August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of use" section on the data collection website.
To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII).
In addition, we have included the OpenAI moderation API output for each message.
However, we have chosen to keep unsafe conversations so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process.
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
**Basic Statistics**
| Key | Value |
| --- | --- |
| # Conversations | 1,000,000 |
| # Models | 25 |
| # Users | 210,479 |
| # Languages | 154 |
| Avg. # Turns per Sample | 2.0 |
| Avg. # Tokens per Prompt | 69.5 |
| Avg. # Tokens per Response | 214.5 |
**PII Redaction**
We partnered with the [OpaquePrompts](https://opaqueprompts.opaque.co/) team to redact person names in this dataset to protect user privacy.
Names like "Mary" and "James" in a conversation will appear as "NAME_1" and "NAME_2". For example:
```json
Raw: [ { "content": "Write me a bio. My Name is Mary I am a student who is currently a beginner free lancer. I worked with James in the past ..." }]
Redacted: [ { "content": "Write me a bio. My Name is NAME_1 I am a student who is currently a beginner free lancer. I worked with NAME_2 in the past ..." }]
```
Each conversation includes a "redacted" field to indicate if it has been redacted.
This process may impact data quality and occasionally lead to incorrect redactions.
We are working on improving the redaction quality and will release improved versions in the future.
If you want to access the raw conversation data, please fill out [the form](https://docs.google.com/forms/d/1PZw67e19l0W3oCiQOjzSyZvXfOemhg6LCY0XzVmOUx0/edit) with details about your intended use cases.
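A minimal usage sketch, assuming you have accepted the dataset's license agreement on the Hub; the filtering policy below is only an illustration of how the `redacted` and per-message moderation fields can be combined:

```python
# Illustrative sketch: load the released data and keep conversations that are
# neither redacted nor flagged by the OpenAI moderation API.
# Field names follow the dataset schema; the filtering choice itself is an example.
from datasets import load_dataset

ds = load_dataset("lmsys/lmsys-chat-1m", split="train")

def is_clean(example):
    # "openai_moderation" holds one moderation result per message in the conversation.
    return not example["redacted"] and not any(
        message["flagged"] for message in example["openai_moderation"]
    )

clean = ds.filter(is_clean)
print(f"kept {len(clean)} of {len(ds)} conversations")
```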
## Uniqueness and Potential Usage
This dataset features large-scale real-world conversations with LLMs.
We believe it will help the AI research community answer important questions around topics like:
- Characteristics and distributions of real-world user prompts
- AI safety and content moderation
- Training instruction-following models
- Improving and evaluating LLM evaluation methods
- Model selection and request dispatching algorithms
For more details, please refer to the paper: https://arxiv.org/abs/2309.11998
## LMSYS-Chat-1M Dataset License Agreement
This Agreement contains the terms and conditions that govern your access and use of the LMSYS-Chat-1M Dataset (as defined above). You may not use the LMSYS-Chat-1M Dataset if you do not accept this Agreement. By clicking to accept, accessing the LMSYS-Chat-1M Dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMSYS-Chat-1M Dataset on behalf of your employer or another entity.
- Safety and Moderation: **This dataset contains unsafe conversations that may be perceived as offensive or unsettling.** User should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents.
- Non-Endorsement: The views and opinions depicted in this dataset **do not reflect** the perspectives of the researchers or affiliated institutions engaged in the data collection process.
- Legal Compliance: You are mandated to use it in adherence with all pertinent laws and regulations.
- Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use.
- Non-Identification: You **must not** attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset.
- Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party.
- Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement.
- Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMSYS-Chat-1M Dataset and destroy all copies of the LMSYS-Chat-1M Dataset and related materials in your possession or control.
- Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Subject to your compliance with the terms and conditions of this Agreement, we grant to you, a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMSYS-Chat-1M Dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes.
## Citation
```
@misc{zheng2023lmsyschat1m,
title={LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Tianle Li and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zhuohan Li and Zi Lin and Eric. P Xing and Joseph E. Gonzalez and Ion Stoica and Hao Zhang},
year={2023},
eprint={2309.11998},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
lmsys/lmsys-chat-1m
|
[
"task_categories:conversational",
"size_categories:1M<n<10M",
"arxiv:2309.11998",
"region:us"
] |
2023-09-20T05:33:44+00:00
|
{"size_categories": ["1M<n<10M"], "task_categories": ["conversational"], "extra_gated_prompt": "You agree to the [LMSYS-Chat-1M Dataset License Agreement](https://huggingface.co/datasets/lmsys/lmsys-chat-1m#lmsys-chat-1m-dataset-license-agreement).", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Country": "text"}, "extra_gated_button_content": "I agree to the terms and conditions of the LMSYS-Chat-1M Dataset License Agreement.", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "conversation_id", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "turn", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": "openai_moderation", "list": [{"name": "categories", "struct": [{"name": "harassment", "dtype": "bool"}, {"name": "harassment/threatening", "dtype": "bool"}, {"name": "hate", "dtype": "bool"}, {"name": "hate/threatening", "dtype": "bool"}, {"name": "self-harm", "dtype": "bool"}, {"name": "self-harm/instructions", "dtype": "bool"}, {"name": "self-harm/intent", "dtype": "bool"}, {"name": "sexual", "dtype": "bool"}, {"name": "sexual/minors", "dtype": "bool"}, {"name": "violence", "dtype": "bool"}, {"name": "violence/graphic", "dtype": "bool"}]}, {"name": "category_scores", "struct": [{"name": "harassment", "dtype": "float64"}, {"name": "harassment/threatening", "dtype": "float64"}, {"name": "hate", "dtype": "float64"}, {"name": "hate/threatening", "dtype": "float64"}, {"name": "self-harm", "dtype": "float64"}, {"name": "self-harm/instructions", "dtype": "float64"}, {"name": "self-harm/intent", "dtype": "float64"}, {"name": "sexual", "dtype": "float64"}, {"name": "sexual/minors", "dtype": "float64"}, {"name": "violence", "dtype": "float64"}, {"name": "violence/graphic", "dtype": "float64"}]}, {"name": "flagged", "dtype": "bool"}]}, {"name": "redacted", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 2626438904, "num_examples": 1000000}], "download_size": 1488850250, "dataset_size": 2626438904}}
|
2023-10-04T16:40:32+00:00
|
[
"2309.11998"
] |
[] |
TAGS
#task_categories-conversational #size_categories-1M<n<10M #arxiv-2309.11998 #region-us
|
LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset
----------------------------------------------------------------
This dataset contains one million real-world conversations with 25 state-of-the-art LLMs.
It is collected from 210K unique IP addresses in the wild on the Vicuna demo and Chatbot Arena website from April to August 2023.
Each sample includes a conversation ID, model name, conversation text in OpenAI API JSON format, detected language tag, and OpenAI moderation API tag.
User consent is obtained through the "Terms of use" section on the data collection website.
To ensure the safe release of data, we have made our best efforts to remove all conversations that contain personally identifiable information (PII).
In addition, we have included the OpenAI moderation API output for each message.
However, we have chosen to keep unsafe conversations so that researchers can study the safety-related questions associated with LLM usage in real-world scenarios as well as the OpenAI moderation process.
For more details, please refer to the paper: URL
Basic Statistics: 1,000,000 conversations, 25 models, 210,479 users, 154 languages; on average 2.0 turns per sample, 69.5 tokens per prompt, and 214.5 tokens per response.
PII Redaction
We partnered with the OpaquePrompts team to redact person names in this dataset to protect user privacy.
Names like "Mary" and "James" in a conversation will appear as "NAME\_1" and "NAME\_2". For example:
Each conversation includes a "redacted" field to indicate if it has been redacted.
This process may impact data quality and occasionally lead to incorrect redactions.
We are working on improving the redaction quality and will release improved versions in the future.
If you want to access the raw conversation data, please fill out the form with details about your intended use cases.
Uniqueness and Potential Usage
------------------------------
This dataset features large-scale real-world conversations with LLMs.
We believe it will help the AI research community answer important questions around topics like:
* Characteristics and distributions of real-world user prompts
* AI safety and content moderation
* Training instruction-following models
* Improving and evaluating LLM evaluation methods
* Model selection and request dispatching algorithms
For more details, please refer to the paper: URL
LMSYS-Chat-1M Dataset License Agreement
---------------------------------------
This Agreement contains the terms and conditions that govern your access and use of the LMSYS-Chat-1M Dataset (as defined above). You may not use the LMSYS-Chat-1M Dataset if you do not accept this Agreement. By clicking to accept, accessing the LMSYS-Chat-1M Dataset, or both, you hereby agree to the terms of the Agreement. If you are agreeing to be bound by the Agreement on behalf of your employer or another entity, you represent and warrant that you have full legal authority to bind your employer or such entity to this Agreement. If you do not have the requisite authority, you may not accept the Agreement or access the LMSYS-Chat-1M Dataset on behalf of your employer or another entity.
* Safety and Moderation: This dataset contains unsafe conversations that may be perceived as offensive or unsettling. User should apply appropriate filters and safety measures before utilizing this dataset for training dialogue agents.
* Non-Endorsement: The views and opinions depicted in this dataset do not reflect the perspectives of the researchers or affiliated institutions engaged in the data collection process.
* Legal Compliance: You are mandated to use it in adherence with all pertinent laws and regulations.
* Model Specific Terms: When leveraging direct outputs of a specific model, users must adhere to its corresponding terms of use.
* Non-Identification: You must not attempt to identify the identities of individuals or infer any sensitive personal data encompassed in this dataset.
* Prohibited Transfers: You should not distribute, copy, disclose, assign, sublicense, embed, host, or otherwise transfer the dataset to any third party.
* Right to Request Deletion: At any time, we may require you to delete all copies of the conversation dataset (in whole or in part) in your possession and control. You will promptly comply with any and all such requests. Upon our request, you shall provide us with written confirmation of your compliance with such requirement.
* Termination: We may, at any time, for any reason or for no reason, terminate this Agreement, effective immediately upon notice to you. Upon termination, the license granted to you hereunder will immediately terminate, and you will immediately stop using the LMSYS-Chat-1M Dataset and destroy all copies of the LMSYS-Chat-1M Dataset and related materials in your possession or control.
* Limitation of Liability: IN NO EVENT WILL WE BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, PUNITIVE, SPECIAL, OR INDIRECT DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF OR RELATING TO THIS AGREEMENT OR ITS SUBJECT MATTER, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Subject to your compliance with the terms and conditions of this Agreement, we grant to you, a limited, non-exclusive, non-transferable, non-sublicensable license to use the LMSYS-Chat-1M Dataset, including the conversation data and annotations, to research, develop, and improve software, algorithms, machine learning models, techniques, and technologies for both research and commercial purposes.
|
[] |
[
"TAGS\n#task_categories-conversational #size_categories-1M<n<10M #arxiv-2309.11998 #region-us \n"
] |
[
36
] |
[
"passage: TAGS\n#task_categories-conversational #size_categories-1M<n<10M #arxiv-2309.11998 #region-us \n"
] |
bd46d0b395d67bc9bdd8ef14b4a7946b59d7476a
|
# Dataset Card for "arabic_enhanced_scenes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/arabic_enhanced_scenes
|
[
"region:us"
] |
2023-09-20T05:34:22+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3389696, "num_examples": 10000}], "download_size": 403975, "dataset_size": 3389696}}
|
2023-09-20T06:00:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "arabic_enhanced_scenes"
More Information needed
|
[
"# Dataset Card for \"arabic_enhanced_scenes\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"arabic_enhanced_scenes\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"arabic_enhanced_scenes\"\n\nMore Information needed"
] |
09bfb6569bf9c66ce86b6b4065aef52277cf708b
|
A modified dataset of English dialogs between a user and an assistant discussing movie preferences in natural language. The foundational dataset initially had 502 records.
Dataset Information
- Name: ccpemv2.jsonl
- Version: Version 0.2
- Modifications Included: Transformed to ### User:... ### Assistant:... format
- Language: English
- License: Creative Commons Attribution 4.0
Original Dataset Citation
```
@inproceedings{radlinski-etal-2019-ccpe,
title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
booktitle = {Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue ({SIGDIAL})},
year = 2019
}
```
|
aloobun/ccpemv2
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-20T05:47:37+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "movies dialog dataset"}
|
2023-09-29T09:31:35+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us
|
A modified dataset of English dialogs between a user and an assistant discussing movie preferences in natural language. The foundational dataset initially had 502 records.
Dataset Information
- Name: URL
- Version: Version 0.2
- Modifications Included: Transformed to ### User:... ### Assistant:... format
- Language: English
- License: Creative Commons Attribution 4.0
Original Dataset Citation
|
[
"### User:... ### Assistant:... format\n- Language: English\n- License: Creative Commons Attribution 4.0\n\nOriginal Dataset Citation"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us \n",
"### User:... ### Assistant:... format\n- Language: English\n- License: Creative Commons Attribution 4.0\n\nOriginal Dataset Citation"
] |
[
40,
27
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us \n### User:... ### Assistant:... format\n- Language: English\n- License: Creative Commons Attribution 4.0\n\nOriginal Dataset Citation"
] |
c4be953c7f91a89d264d6fe192d26d4e4de2f25e
|
# Dataset Card for "logits-mt-ar-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitness/logits-mt-ar-128
|
[
"region:us"
] |
2023-09-20T05:52:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 17264984288, "num_examples": 3814324}, {"name": "test", "num_bytes": 3047653868, "num_examples": 673117}], "download_size": 2917556992, "dataset_size": 20312638156}}
|
2023-09-27T07:30:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-mt-ar-128"
More Information needed
|
[
"# Dataset Card for \"logits-mt-ar-128\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-mt-ar-128\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-ar-128\"\n\nMore Information needed"
] |
433d972398117c717b8081757f6753819b69d70b
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_Higgs_gosdt_l512_d3
|
[
"region:us"
] |
2023-09-20T05:54:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 12501600000, "num_examples": 100000}, {"name": "validation", "num_bytes": 1250160000, "num_examples": 10000}], "download_size": 9801842261, "dataset_size": 13751760000}}
|
2023-09-20T05:58:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_Higgs_gosdt_l512_d3"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_Higgs_gosdt_l512_d3\"\n\nMore Information needed"
] |
113a805d3003cd55209f40c5573064156a927eab
|
# Dataset Card for "recipe-nlg-llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
skadewdl3/recipe-nlg-llama2
|
[
"region:us"
] |
2023-09-20T06:17:54+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "ingredients", "dtype": "string"}, {"name": "directions", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "NER", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3317395276.3463464, "num_examples": 2008027}, {"name": "test", "num_bytes": 368600943.6536536, "num_examples": 223115}], "download_size": 168971675, "dataset_size": 3685996220.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-10-04T06:40:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "recipe-nlg-llama2"
More Information needed
|
[
"# Dataset Card for \"recipe-nlg-llama2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"recipe-nlg-llama2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"recipe-nlg-llama2\"\n\nMore Information needed"
] |
d048d6b12e6a1b995ef31054de5774c1bb3954f7
|
# Dataset Card for "new_photorealistic_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/new_photorealistic_prompts
|
[
"region:us"
] |
2023-09-20T06:37:33+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1492287, "num_examples": 10000}], "download_size": 345550, "dataset_size": 1492287}}
|
2023-09-20T06:37:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_photorealistic_prompts"
More Information needed
|
[
"# Dataset Card for \"new_photorealistic_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_photorealistic_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_photorealistic_prompts\"\n\nMore Information needed"
] |
64ffc9b7e32accdba2022a0970f8eaa7b31c3035
|
# Dataset Card for "arabic_glamour_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/arabic_glamour_prompts
|
[
"region:us"
] |
2023-09-20T06:53:13+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1949534, "num_examples": 10000}], "download_size": 328987, "dataset_size": 1949534}}
|
2023-09-20T06:53:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "arabic_glamour_prompts"
More Information needed
|
[
"# Dataset Card for \"arabic_glamour_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"arabic_glamour_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"arabic_glamour_prompts\"\n\nMore Information needed"
] |
07858746d34ad8c6a5dad1d94b02ba75aec844e5
|
# Dataset Card for "prompt_injection_password"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ivanleomk/prompt_injection_password
|
[
"region:us"
] |
2023-09-20T07:04:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 142227, "num_examples": 917}], "download_size": 53239, "dataset_size": 142227}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T07:04:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "prompt_injection_password"
More Information needed
|
[
"# Dataset Card for \"prompt_injection_password\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"prompt_injection_password\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"prompt_injection_password\"\n\nMore Information needed"
] |
a46b8d71ec58fb15d571b91b1184d8a3d4b9152f
|
An HF dataset of the OCWCourses benchmark from Lewkowycz et al. (2022).
```
@misc{lewkowycz2022solving,
title={Solving Quantitative Reasoning Problems with Language Models},
author={Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dyer and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},
year={2022},
eprint={2206.14858},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
zhangirazerbayev/ocwcourses
|
[
"arxiv:2206.14858",
"region:us"
] |
2023-09-20T07:26:19+00:00
|
{}
|
2023-10-17T01:49:19+00:00
|
[
"2206.14858"
] |
[] |
TAGS
#arxiv-2206.14858 #region-us
|
An HF dataset of the OCWCourses benchmark from Lewkowycz et al. (2022).
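A minimal loading sketch; the repository id comes from this dataset's Hub page, while the split and column names are left unspecified because the card does not document them:

```python
# Hypothetical usage sketch: load the benchmark and inspect its structure.
from datasets import load_dataset

ocw = load_dataset("zhangirazerbayev/ocwcourses")
print(ocw)                       # available splits and columns
first_split = next(iter(ocw))
print(ocw[first_split][0])       # peek at one problem instance
```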
|
[] |
[
"TAGS\n#arxiv-2206.14858 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#arxiv-2206.14858 #region-us \n"
] |
40fb1d47909b6bec9c55b038a3e5c9626a203e11
|
# Dataset Card for "qa_wikipedia_chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
legacy107/qa_wikipedia_chunked
|
[
"region:us"
] |
2023-09-20T07:48:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "answer", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "chunked_article", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 15700776313, "num_examples": 110970}, {"name": "validation", "num_bytes": 1842888919, "num_examples": 13833}, {"name": "test", "num_bytes": 1928000472, "num_examples": 13873}], "download_size": 2970213547, "dataset_size": 19471665704}}
|
2023-09-21T03:25:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "qa_wikipedia_chunked"
More Information needed
|
[
"# Dataset Card for \"qa_wikipedia_chunked\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"qa_wikipedia_chunked\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"qa_wikipedia_chunked\"\n\nMore Information needed"
] |
927cc4f4a25c14c2df87605b979510ddf224e153
|
## Human Genome Dataset
Here is a human genome ready to be used to train LLMs.
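A minimal loading sketch, assuming the CSV is exposed through the standard `datasets` loader; the column layout is not documented on this card, so the code only inspects whatever schema it finds:

```python
# Hypothetical sketch; the repository id is taken from this card, but the split
# and column names are assumptions to be checked against the actual files.
from datasets import load_dataset

genome = load_dataset("garcianacho/human_genome_csv")
print(genome)                        # splits and column names
first_split = next(iter(genome))
print(genome[first_split][0])        # one record of the genome text
```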
|
garcianacho/human_genome_csv
|
[
"task_categories:token-classification",
"license:apache-2.0",
"biology",
"genome",
"human genome",
"bioinformatics",
"region:us"
] |
2023-09-20T07:52:07+00:00
|
{"license": "apache-2.0", "task_categories": ["token-classification"], "tags": ["biology", "genome", "human genome", "bioinformatics"]}
|
2023-10-04T11:41:28+00:00
|
[] |
[] |
TAGS
#task_categories-token-classification #license-apache-2.0 #biology #genome #human genome #bioinformatics #region-us
|
## Human Genome Dataset
Here is a human genome ready to be used to train LLMs.
|
[
"## Human Genome Dataset\n\nHere is a human genome ready to be used to train LLM."
] |
[
"TAGS\n#task_categories-token-classification #license-apache-2.0 #biology #genome #human genome #bioinformatics #region-us \n",
"## Human Genome Dataset\n\nHere is a human genome ready to be used to train LLM."
] |
[
40,
21
] |
[
"passage: TAGS\n#task_categories-token-classification #license-apache-2.0 #biology #genome #human genome #bioinformatics #region-us \n## Human Genome Dataset\n\nHere is a human genome ready to be used to train LLM."
] |
b7011e736c6982774e50c721310d7fdd30a9c300
|
# Dataset Card for "ds_receipts_v2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/ds_receipts_v2_train
|
[
"region:us"
] |
2023-09-20T07:56:43+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102670815.483, "num_examples": 1137}], "download_size": 102731891, "dataset_size": 102670815.483}}
|
2023-09-20T20:38:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ds_receipts_v2_train"
More Information needed
|
[
"# Dataset Card for \"ds_receipts_v2_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ds_receipts_v2_train\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ds_receipts_v2_train\"\n\nMore Information needed"
] |
ede1a9678461a54cc5c77f27cd88a02419383502
|
# Dataset Card for "ds_receipts_v2_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/ds_receipts_v2_test
|
[
"region:us"
] |
2023-09-20T07:57:24+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51155438.0, "num_examples": 472}], "download_size": 50770089, "dataset_size": 51155438.0}}
|
2023-09-20T20:38:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ds_receipts_v2_test"
More Information needed
|
[
"# Dataset Card for \"ds_receipts_v2_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ds_receipts_v2_test\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ds_receipts_v2_test\"\n\nMore Information needed"
] |
51adfeb479b7a633ec6fed658a974011de0950e4
|
# Dataset Card for "ds_receipts_v2_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/ds_receipts_v2_eval
|
[
"region:us"
] |
2023-09-20T07:57:44+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1469341.0, "num_examples": 19}], "download_size": 1462479, "dataset_size": 1469341.0}}
|
2023-09-20T20:38:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ds_receipts_v2_eval"
More Information needed
|
[
"# Dataset Card for \"ds_receipts_v2_eval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ds_receipts_v2_eval\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ds_receipts_v2_eval\"\n\nMore Information needed"
] |
386d094f43dde8ba2748cdae4f4e9d8269e88018
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We designed a larger and more generic Word Embedding over Linguistic Features for Fake News Detection (WELFake) dataset of 72,134 news articles with 35,028 real and 37,106 fake news. For this, we merged four popular news datasets (i.e. Kaggle, McIntire, Reuters, BuzzFeed Political) to prevent over-fitting of classifiers and to provide more text data for better ML training.
The dataset contains four columns: Serial number (starting from 0); Title (the news heading); Text (the news content); and Label (0 = fake and 1 = real).
There are 78,098 data entries in the CSV file, of which only 72,134 are accessible through the data frame.
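A minimal loading sketch based on the schema above; it assumes the Hugging Face copy exposes `title`, `text`, and an integer class label named `label` with names `fake` and `real`:

```python
# Sketch: load WELFake and check the label distribution.
from collections import Counter
from datasets import load_dataset

welfake = load_dataset("davanstrien/WELFake", split="train")
label_names = welfake.features["label"].names          # ["fake", "real"]
counts = Counter(label_names[lbl] for lbl in welfake["label"])
print(counts)   # expected to be close to 37,106 fake and 35,028 real
```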
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
davanstrien/WELFake
|
[
"region:us"
] |
2023-09-20T08:06:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "fake", "1": "real"}}}}], "splits": [{"name": "train", "num_bytes": 245239522, "num_examples": 72134}], "download_size": 151915950, "dataset_size": 245239522}}
|
2023-09-20T08:14:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
We designed a larger and more generic Word Embedding over Linguistic Features for Fake News Detection (WELFake) dataset of 72,134 news articles with 35,028 real and 37,106 fake news. For this, we merged four popular news datasets (i.e. Kaggle, McIntire, Reuters, BuzzFeed Political) to prevent over-fitting of classifiers and to provide more text data for better ML training.
The dataset contains four columns: Serial number (starting from 0); Title (the news heading); Text (the news content); and Label (0 = fake and 1 = real).
There are 78,098 data entries in the CSV file, of which only 72,134 are accessible through the data frame.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nWe designed a larger and more generic Word Embedding over Linguistic Features for Fake News Detection (WELFake) dataset of 72,134 news articles with 35,028 real and 37,106 fake news. For this, we merged four popular news datasets (i.e. Kaggle, McIntire, Reuters, BuzzFeed Political) to prevent over-fitting of classifiers and to provide more text data for better ML training.\n\nDataset contains four columns: Serial number (starting from 0); Title (about the text news heading); Text (about the news content); and Label (0 = fake and 1 = real).\n\nThere are 78098 data entries in csv file out of which only 72134 entries are accessed as per the data frame.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nWe designed a larger and more generic Word Embedding over Linguistic Features for Fake News Detection (WELFake) dataset of 72,134 news articles with 35,028 real and 37,106 fake news. For this, we merged four popular news datasets (i.e. Kaggle, McIntire, Reuters, BuzzFeed Political) to prevent over-fitting of classifiers and to provide more text data for better ML training.\n\nDataset contains four columns: Serial number (starting from 0); Title (about the text news heading); Text (about the news content); and Label (0 = fake and 1 = real).\n\nThere are 78098 data entries in csv file out of which only 72134 entries are accessed as per the data frame.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
181,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nWe designed a larger and more generic Word Embedding over Linguistic Features for Fake News Detection (WELFake) dataset of 72,134 news articles with 35,028 real and 37,106 fake news. For this, we merged four popular news datasets (i.e. Kaggle, McIntire, Reuters, BuzzFeed Political) to prevent over-fitting of classifiers and to provide more text data for better ML training.\n\nDataset contains four columns: Serial number (starting from 0); Title (about the text news heading); Text (about the news content); and Label (0 = fake and 1 = real).\n\nThere are 78098 data entries in csv file out of which only 72134 entries are accessed as per the data frame.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
19c6d39d84076b1a5c4621d1c68f1bd0508c8b69
|
# Dataset Card for "exp_data_v1-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tanvirsrbd1/exp_data_v1-1
|
[
"region:us"
] |
2023-09-20T08:12:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1509076, "num_examples": 2980}], "download_size": 487802, "dataset_size": 1509076}}
|
2023-10-04T05:12:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "exp_data_v1-1"
More Information needed
|
[
"# Dataset Card for \"exp_data_v1-1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"exp_data_v1-1\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"exp_data_v1-1\"\n\nMore Information needed"
] |
5fb1da2b04161597ca5328a46060a45dbf5493d6
|
# Wagons Images Classification
The dataset consists of images depicting **loaded and unloaded** wagons. The data are organised into two folders, for loaded and unloaded wagons, and accompanied by a .CSV file containing the text classification of the images.
This dataset can be useful for various tasks, such as image classification, object detection, and data-driven analyses related to wagon loading and unloading processes.
The dataset is useful for the **rail transport sphere**: it can be utilised to automate the identification and classification of wagons and to further optimise processes in the industry.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=wagons-images-classification) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **loaded**: includes images of loaded wagons
- **unloaded**: includes images of unloaded wagons
- **.csv file**: contains information about the dataset
### File with the extension .csv
includes the following information for each media file:
- **image_name**: link to access the image,
- **type**: type of the wagon in the image (**loaded/unloaded**)
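For the sample published on the Hugging Face Hub, a minimal loading sketch (it assumes the `image`/`label` schema shown in the dataset metadata; the full dataset is distributed separately on request):

```python
# Hypothetical sketch: count loaded vs. unloaded wagons in the public sample.
from collections import Counter
from datasets import load_dataset

wagons = load_dataset("TrainingDataPro/wagons-images-classification", split="train")
label_names = wagons.features["label"].names            # ["loaded", "unloaded"]
counts = Counter(label_names[example["label"]] for example in wagons)
print(counts)
```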
# Wagon images might be collected and labeled in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=wagons-images-classification)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/wagons-images-classification
|
[
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] |
2023-09-20T08:12:37+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-classification"], "tags": ["code", "finance"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "loaded", "1": "unloaded"}}}}], "splits": [{"name": "train", "num_bytes": 4452752, "num_examples": 18}], "download_size": 4344062, "dataset_size": 4452752}}
|
2023-10-12T06:18:03+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us
|
# Wagons Images Classification
The dataset consists of images depicting loaded and unloaded wagons. The data are organised into two folders, for loaded and unloaded wagons, and accompanied by a .CSV file containing the text classification of the images.
This dataset can be useful for various tasks, such as image classification, object detection, and data-driven analyses related to wagon loading and unloading processes.
The dataset is useful for the rail transport sphere: it can be utilised to automate the identification and classification of wagons and to further optimise processes in the industry.

# Wagon images might be collected and labeled in accordance with your requirements.
## TrainingData provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: URL
TrainingData's GitHub: URL
|
[
"# Wagons Images Classification\nThe dataset consists of images depicting loaded and unloaded wagons. The data are organasied in two folders for loaded and unloaded wagons and assisted with .CSV file containing text classification of the images.\n\nThis dataset can be useful for various tasks, such as *image classification, object detection and data-driven analyses related to wagon loading and unloading processes. \n\nThe dataset is useful for rail transport sphere, it can be utilised for automation the identification and classification of the wagons and further optimization of the processes in the industry. \n\n",
"# Wagon images might be collected and labeled in accordance with your requirements.",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
"TAGS\n#task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us \n",
"# Wagons Images Classification\nThe dataset consists of images depicting loaded and unloaded wagons. The data are organasied in two folders for loaded and unloaded wagons and assisted with .CSV file containing text classification of the images.\n\nThis dataset can be useful for various tasks, such as *image classification, object detection and data-driven analyses related to wagon loading and unloading processes. \n\nThe dataset is useful for rail transport sphere, it can be utilised for automation the identification and classification of the wagons and further optimization of the processes in the industry. \n\n",
"# Wagon images might be collected and labeled in accordance with your requirements.",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
39,
150,
5,
30,
40,
49,
17,
39
] |
[
"passage: TAGS\n#task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #code #finance #region-us \n# Wagons Images Classification\nThe dataset consists of images depicting loaded and unloaded wagons. The data are organasied in two folders for loaded and unloaded wagons and assisted with .CSV file containing text classification of the images.\n\nThis dataset can be useful for various tasks, such as *image classification, object detection and data-driven analyses related to wagon loading and unloading processes. \n\nThe dataset is useful for rail transport sphere, it can be utilised for automation the identification and classification of the wagons and further optimization of the processes in the industry. \n\n# Wagon images might be collected and labeled in accordance with your requirements.## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
bcd43cf94c7d3ce84d92e95f77e7af79d0228d0f
|
# Dataset Card for "a77d2949"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/a77d2949
|
[
"region:us"
] |
2023-09-20T08:17:01+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 168, "num_examples": 10}], "download_size": 1322, "dataset_size": 168}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T08:17:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "a77d2949"
More Information needed
|
[
"# Dataset Card for \"a77d2949\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"a77d2949\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"a77d2949\"\n\nMore Information needed"
] |
d8847eb9a492810c3b1680c8321add5728b64e18
|
# Dataset Card for "koquad_v2_polyglot_tkd_20th"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fiveflow/koquad_v2_polyglot_tkd_20th
|
[
"region:us"
] |
2023-09-20T08:45:33+00:00
|
{"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1766922390, "num_examples": 20000}], "download_size": 592965039, "dataset_size": 1766922390}}
|
2023-09-20T08:46:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "koquad_v2_polyglot_tkd_20th"
More Information needed
|
[
"# Dataset Card for \"koquad_v2_polyglot_tkd_20th\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"koquad_v2_polyglot_tkd_20th\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"koquad_v2_polyglot_tkd_20th\"\n\nMore Information needed"
] |
20b364aae38a03c3dd92c4ee7d725207b06657a5
|
# Dataset Card for "grundfunktionen-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/grundfunktionen-undersampled
|
[
"region:us"
] |
2023-09-20T09:04:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AndereAnlagen", "1": "Befoerdern", "2": "KaelteVersorgen", "3": "LuftVersorgen", "4": "MedienVersorgen", "5": "Sichern", "6": "StromVersorgen", "7": "WaermeVersorgen"}}}}], "splits": [{"name": "train", "num_bytes": 767809.3946920173, "num_examples": 4359}, {"name": "test", "num_bytes": 952887, "num_examples": 5431}, {"name": "valid", "num_bytes": 952887, "num_examples": 5431}], "download_size": 1154906, "dataset_size": 2673583.394692017}}
|
2023-09-20T09:04:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grundfunktionen-undersampled"
More Information needed
|
[
"# Dataset Card for \"grundfunktionen-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grundfunktionen-undersampled\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grundfunktionen-undersampled\"\n\nMore Information needed"
] |
f79e94d4ef5d131117d6381a8837dd177fd24531
|
<h1 style="text-align: center">Bleedingheart Pretrain Dataset</h1>
<h2 style="text-align: center">A collaboration between Kaleido and Newstar</h2>
<hr>
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6" style="display: block; margin: 0 auto; margin-top: 10px; transform: translateY(-50%);">
<path stroke-linecap="round" stroke-linejoin="round" d="M19.5 5.25l-7.5 7.5-7.5-7.5m15 6l-7.5 7.5-7.5-7.5" />
</svg>
- We collected all the datasets we could find that are in Tagalog or any other Philippine dialect and put them in this repository.
- This data will be used to train the Bleedingheart model.
- Bleeding Heart is a stunning bird native to the island of Luzon in the Philippines. It is a medium-sized ground dove with a distinctive red patch of feathers on its chest, which gives it its name. The male's red patch is larger and brighter than the female's, and he displays it during the breeding season to attract a mate.
|
NewstaR/bleedingheart-pretrain-10M
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:tl",
"license:other",
"region:us"
] |
2023-09-20T09:05:47+00:00
|
{"language": ["tl"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"]}
|
2023-10-02T07:48:05+00:00
|
[] |
[
"tl"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-Tagalog #license-other #region-us
|
<h1 style="text-align: center">Bleedingheart Pretrain Dataset</h1>
<h2 style="text-align: center">A collaboration between Kaleido and Newstar</h2>
<hr>
<svg xmlns="URL fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6" style="display: block; margin: 0 auto; margin-top: 10px; transform: translateY(-50%);">
<path stroke-linecap="round" stroke-linejoin="round" d="M19.5 5.25l-7.5 7.5-7.5-7.5m15 6l-7.5 7.5-7.5-7.5" />
</svg>
- We collected all the datasets we could find that are in Tagalog or any other Philippine dialect and put them in this repository.
- This data will be used to train the Bleedingheart model.
- Bleeding Heart is a stunning bird native to the island of Luzon in the Philippines. It is a medium-sized ground dove with a distinctive red patch of feathers on its chest, which gives it its name. The male's red patch is larger and brighter than the female's, and he displays it during the breeding season to attract a mate.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Tagalog #license-other #region-us \n"
] |
[
40
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Tagalog #license-other #region-us \n"
] |
344f955b0eb9fa9e895b2555b95032bad0b6da46
|
# Dataset Card for "story44kids_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story44kids_0_prompts
|
[
"region:us"
] |
2023-09-20T09:39:24+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3026, "num_examples": 13}], "download_size": 3674, "dataset_size": 3026}}
|
2023-09-20T10:28:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story44kids_0_prompts"
More Information needed
|
[
"# Dataset Card for \"story44kids_0_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story44kids_0_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story44kids_0_prompts\"\n\nMore Information needed"
] |
0bd55aba998f9d3d923c472e93d45599ca191174
|
# Dataset Card for "story44kids_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story44kids_1_prompts
|
[
"region:us"
] |
2023-09-20T09:39:29+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3254, "num_examples": 10}], "download_size": 4900, "dataset_size": 3254}}
|
2023-09-20T10:28:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story44kids_1_prompts"
More Information needed
|
[
"# Dataset Card for \"story44kids_1_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story44kids_1_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story44kids_1_prompts\"\n\nMore Information needed"
] |
9142b0dce7bcfe903e564915618d84c4b6ddf2ed
|
# Dataset Card for "story44kids_2_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/story44kids_2_prompts
|
[
"region:us"
] |
2023-09-20T09:39:33+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3409, "num_examples": 10}], "download_size": 4787, "dataset_size": 3409}}
|
2023-09-20T10:28:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "story44kids_2_prompts"
More Information needed
|
[
"# Dataset Card for \"story44kids_2_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"story44kids_2_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"story44kids_2_prompts\"\n\nMore Information needed"
] |
52b833836e64e9cef44a17231396571c44d35adb
|
# Dataset Card for "chip2_instruct_alpha_prompt_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/chip2_instruct_alpha_prompt_en
|
[
"region:us"
] |
2023-09-20T10:15:49+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85102023, "num_examples": 210289}], "download_size": 50192027, "dataset_size": 85102023}}
|
2023-09-20T10:16:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chip2_instruct_alpha_prompt_en"
More Information needed
|
[
"# Dataset Card for \"chip2_instruct_alpha_prompt_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chip2_instruct_alpha_prompt_en\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chip2_instruct_alpha_prompt_en\"\n\nMore Information needed"
] |
b7cf06dad6d75edd79ade0f774878243d49b5681
|
# Dataset Card for "evol_500_sample_with_output"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pvduy/evol_500_sample_with_output
|
[
"region:us"
] |
2023-09-20T10:29:10+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output_wizard", "dtype": "string"}, {"name": "output_codellama", "dtype": "string"}, {"name": "output_lemur", "dtype": "string"}, {"name": "output_Xwin", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3804774, "num_examples": 500}], "download_size": 1721772, "dataset_size": 3804774}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T17:28:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "evol_500_sample_with_output"
More Information needed
|
[
"# Dataset Card for \"evol_500_sample_with_output\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"evol_500_sample_with_output\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"evol_500_sample_with_output\"\n\nMore Information needed"
] |
f3feb958f364ece85575012fd0205cc7698567c2
|
# Dataset Card for "waerme_versorgen_133-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/waerme_versorgen_133-undersampled
|
[
"region:us"
] |
2023-09-20T10:36:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Beziehen", "1": "Erzeugen", "2": "Speichern", "3": "Verteilen"}}}}], "splits": [{"name": "train", "num_bytes": 104796.04173106646, "num_examples": 532}, {"name": "test", "num_bytes": 447086, "num_examples": 2265}, {"name": "valid", "num_bytes": 447086, "num_examples": 2265}], "download_size": 362118, "dataset_size": 998968.0417310664}}
|
2023-09-20T10:37:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "waerme_versorgen_133-undersampled"
More Information needed
|
[
"# Dataset Card for \"waerme_versorgen_133-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"waerme_versorgen_133-undersampled\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"waerme_versorgen_133-undersampled\"\n\nMore Information needed"
] |
e7e2110745a9be6acba733af6c833f7c3347bc34
|
# This dataset consists of pig farming images captured from a side-view perspective.
# After downloading the dataset, place the images and labels in the 'JPEGImages' and 'Annotations' folders under 'VOCdevkit/VOC2007'.
# Running 'VOC.py' will categorize the data into training, validation, and test datasets according to specified ratios in VOC format.
# Running 'voc-yolo.py' will categorize the data into training, validation, and test datasets in YOLO format with specified ratios.
# By following the aforementioned steps, you can obtain the VOC and YOLO formats for this side-view-pigs dataset.
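A simplified, self-contained sketch of the kind of train/val/test split that 'VOC.py' performs is shown below. The 80/10/10 ratios, the fixed random seed, and the output layout under 'ImageSets/Main' are illustrative assumptions; the scripts shipped with this dataset remain the authoritative way to produce the official splits.

```python
# Hypothetical sketch of a VOC-style train/val/test split (not the dataset's own script).
import os
import random

voc_root = "VOCdevkit/VOC2007"  # layout assumed from the steps above

# Collect the image IDs from the annotation files.
ids = sorted(
    f[:-4]
    for f in os.listdir(os.path.join(voc_root, "Annotations"))
    if f.endswith(".xml")
)
random.seed(0)
random.shuffle(ids)

n_train = int(0.8 * len(ids))  # hypothetical 80/10/10 ratios
n_val = int(0.1 * len(ids))
splits = {
    "train": ids[:n_train],
    "val": ids[n_train:n_train + n_val],
    "test": ids[n_train + n_val:],
}

# Write one ID list per split, as VOC tooling expects.
out_dir = os.path.join(voc_root, "ImageSets", "Main")
os.makedirs(out_dir, exist_ok=True)
for name, image_ids in splits.items():
    with open(os.path.join(out_dir, f"{name}.txt"), "w") as fh:
        fh.write("\n".join(image_ids))
```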
---
---
|
MingweiMao/Side-view-Pigs
|
[
"license:other",
"region:us"
] |
2023-09-20T10:43:48+00:00
|
{"license": "other"}
|
2023-09-20T12:23:05+00:00
|
[] |
[] |
TAGS
#license-other #region-us
|
# This dataset consists of pig farming images captured from a side-view perspective.
# After downloading the dataset, place the images and labels in the 'JPEGImages' and 'Annotations' folders under 'VOCdevkit/VOC2007'.
# Running 'URL' will categorize the data into training, validation, and test datasets according to specified ratios in VOC format.
# Running 'URL' will categorize the data into training, validation, and test datasets in YOLO format with specified ratios.
# By following the aforementioned steps, you can obtain the VOC and YOLO formats for this side-view-pigs dataset.
---
---
|
[
"# This dataset consists of pig farming images captured from a side-view perspective.",
"# After downloading the dataset, place the images and labels in the 'JPEGImages' and 'Annotations' folders under 'VOCdevkit/VOC2007'.",
"# Running 'URL' will categorize the data into training, validation, and test datasets according to specified ratios in VOC format.",
"# Running 'URL' will categorize the data into training, validation, and test datasets in YOLO format with specified ratios.",
"# By following the aforementioned steps, you can obtain the VOC and YOLO formats for this side-view-pigs dataset\n---\n---"
] |
[
"TAGS\n#license-other #region-us \n",
"# This dataset consists of pig farming images captured from a side-view perspective.",
"# After downloading the dataset, place the images and labels in the 'JPEGImages' and 'Annotations' folders under 'VOCdevkit/VOC2007'.",
"# Running 'URL' will categorize the data into training, validation, and test datasets according to specified ratios in VOC format.",
"# Running 'URL' will categorize the data into training, validation, and test datasets in YOLO format with specified ratios.",
"# By following the aforementioned steps, you can obtain the VOC and YOLO formats for this side-view-pigs dataset\n---\n---"
] |
[
11,
20,
42,
32,
32,
33
] |
[
"passage: TAGS\n#license-other #region-us \n# This dataset consists of pig farming images captured from a side-view perspective.# After downloading the dataset, place the images and labels in the 'JPEGImages' and 'Annotations' folders under 'VOCdevkit/VOC2007'.# Running 'URL' will categorize the data into training, validation, and test datasets according to specified ratios in VOC format.# Running 'URL' will categorize the data into training, validation, and test datasets in YOLO format with specified ratios.# By following the aforementioned steps, you can obtain the VOC and YOLO formats for this side-view-pigs dataset\n---\n---"
] |
b283b93fb65f79a0011b1e3c350451714079870c
|
# Dataset Card for "pubmed_subset_wiki_10p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_wiki_10p
|
[
"region:us"
] |
2023-09-20T10:43:54+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3163168567.210593, "num_examples": 1110859}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 826503443, "dataset_size": 3164192796.210593}}
|
2023-09-20T10:46:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_wiki_10p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_wiki_10p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_wiki_10p\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_wiki_10p\"\n\nMore Information needed"
] |
d8c2162043239ab8b2fe41bdce561fa9000af404
|
# Dataset Card for "oasst1_prompt_dataset_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/oasst1_prompt_en
|
[
"region:us"
] |
2023-09-20T10:45:05+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32670635, "num_examples": 20976}], "download_size": 12117771, "dataset_size": 32670635}}
|
2023-09-20T10:45:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oasst1_prompt_dataset_en"
More Information needed
|
[
"# Dataset Card for \"oasst1_prompt_dataset_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oasst1_prompt_dataset_en\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oasst1_prompt_dataset_en\"\n\nMore Information needed"
] |
492c5c003ab51baee98c0a17e1c8b19617cf6ced
|
# Dataset Card for "dolly_prompt_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/dolly_prompt_en
|
[
"region:us"
] |
2023-09-20T10:48:13+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18623377, "num_examples": 19238}], "download_size": 7835327, "dataset_size": 18623377}}
|
2023-09-20T10:48:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dolly_prompt_en"
More Information needed
|
[
"# Dataset Card for \"dolly_prompt_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dolly_prompt_en\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dolly_prompt_en\"\n\nMore Information needed"
] |
16d0dfd9983c7693ea13b949bb611e524f4359cf
|
# AutoTrain Dataset for project: finetuning
## Dataset Description
This dataset has been automatically processed by AutoTrain for project finetuning.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Chapter": "Chapter IV",
"text": "78",
"feat_Description": "Act done pursuant to the judgment or order of the court.",
"target": "Nothing which is done in pursuance of, or which is warranted by the judgment or order of, a Court of Justice, if done whilst such judgment or order remains in force, is an offence, notwithstanding the Court may have had no jurisdiction to pass such judgment or order, provided the person doing the act in good faith believes that the Court had such jurisdiction.",
"feat_Unnamed: 4": null,
"feat_Unnamed: 5": null
},
{
"feat_Chapter": "Chapter 16",
"text": "SECTION 341",
"feat_Description": "Punishment for wrongful restraint",
"target": "This section specifies the punishment for wrongful restraint. The penalty varies depending on the degree of restraint and the circumstances surrounding the offense.",
"feat_Unnamed: 4": null,
"feat_Unnamed: 5": null
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Chapter": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_Description": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_Unnamed: 4": "Value(dtype='string', id=None)",
"feat_Unnamed: 5": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 389 |
| valid | 98 |
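A minimal loading sketch is shown below; it assumes the processed files load with the default configuration and keep the split names listed in the table above.

```python
# Hedged sketch: load the AutoTrain-processed splits and report their sizes.
from datasets import load_dataset

data = load_dataset("shiva33/autotrain-data-finetuning")  # split names assumed: train, valid
print({name: len(split) for name, split in data.items()})
```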
|
shiva33/autotrain-data-finetuning
|
[
"task_categories:summarization",
"language:en",
"region:us"
] |
2023-09-20T10:54:38+00:00
|
{"language": ["en"], "task_categories": ["summarization"]}
|
2023-09-20T11:10:27+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-summarization #language-English #region-us
|
AutoTrain Dataset for project: finetuning
=========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project finetuning.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
|
[
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
"TAGS\n#task_categories-summarization #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
20,
26,
17,
23,
27
] |
[
"passage: TAGS\n#task_categories-summarization #language-English #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
6add63e205e68e6955a25d9747c5df06001ad00c
|
# Dataset Card for "58cedb88"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/58cedb88
|
[
"region:us"
] |
2023-09-20T11:03:32+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 177, "num_examples": 10}], "download_size": 1372, "dataset_size": 177}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T11:03:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "58cedb88"
More Information needed
|
[
"# Dataset Card for \"58cedb88\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"58cedb88\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"58cedb88\"\n\nMore Information needed"
] |
89b632d768ac42e6b25caec6eb627fd6e6482b19
|
# Dataset Card for "f8dcc2ec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/f8dcc2ec
|
[
"region:us"
] |
2023-09-20T11:03:34+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 177, "num_examples": 10}], "download_size": 1372, "dataset_size": 177}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T11:03:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "f8dcc2ec"
More Information needed
|
[
"# Dataset Card for \"f8dcc2ec\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"f8dcc2ec\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"f8dcc2ec\"\n\nMore Information needed"
] |
5d8e297f495579e56884277f3792439f33ef55fe
|
# Dataset Card for document_texts
## Dataset Description
* **Homepage:** [DSSGx Munich](https://sites.google.com/view/dssgx-munich-2023/startseite) organization page.
* **Repository:** [GitHub](https://github.com/DSSGxMunich/land-sealing-dataset-and-analysis).
### Dataset Summary
This dataset contains the result of the PDF parsing done with Tika. For each document, it contains the land parcel it refers to and the downloaded content.
## Dataset Structure
### Data Fields
- **filename:** Name of the parsed pdf file.
- **document_id:** Unique ID of the document; it combines the land parcel ID with the number of the document within that land parcel.
- **content:** Extracted text content.
- **land_parcel_id:** Unique ID of the land parcel for the document.
- **land_parcel_name:** Name of the land parcel for the document.
- **land_parcel_scanurl:** URL for the parsed content.
### Source Data
Comes from the module document_texts_creation.
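A minimal usage sketch is shown below; it assumes the table loads directly with the `datasets` library and exposes a default `train` split.

```python
# Hedged sketch: load the parsed PDF texts and link them to their land parcels.
from datasets import load_dataset

docs = load_dataset("DSSGxMunich/document_text", split="train")  # split name assumed

for row in docs.select(range(3)):
    # Each record pairs a parsed document with the land parcel it refers to.
    print(row["document_id"], "->", row["land_parcel_name"])
    print(row["content"][:200])  # first 200 characters of the extracted text
```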
|
DSSGxMunich/document_text
|
[
"license:mit",
"region:us"
] |
2023-09-20T11:08:50+00:00
|
{"license": "mit"}
|
2023-10-05T09:16:44+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for document_texts
## Dataset Description
* Homepage: DSSGx Munich organization page.
* Repository: GitHub.
### Dataset Summary
This dataset contains the result of the PDF parsing done with Tika. For each document, it contains the land parcel it refers to and the downloaded content.
## Dataset Structure
### Data Fields
- filename: Name of the parsed pdf file.
- document_id: Unique ID of the document; it combines the land parcel ID with the number of the document within that land parcel.
- content: Extracted text content.
- land_parcel_id: Unique ID of the land parcel for the document.
- land_parcel_name: Name of the land parcel for the document.
- land_parcel_scanurl: URL for the parsed content.
### Source Data
Comes from the module document_texts_creation.
|
[
"# Dataset Card for document_texts",
"## Dataset Description\n \n* Homepage: DSSGx Munich organization page. \n \n* Repository: GitHub.",
"### Dataset Summary\n\nThis dataset contains th result of the PDF parser done by Tika. It contains for each document, the land parcel it refers to and the content downloaded.",
"## Dataset Structure",
"### Data Fields\n\n- filename: Name of the parsed pdf file. \n- document_id: Unique ID of the document, it is the combination of the land parcel id_number of document from that land parcel. \n- content: Extracted text content. \n- land_parcel_id: Unique ID of the land parcel for the document. \n- land_parcel_name: Name of the land parcel for the document. \n- land_parcel_scanurl: URL for the parsed content.",
"### Source Data\n\nComes from the module document_texts_creation."
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for document_texts",
"## Dataset Description\n \n* Homepage: DSSGx Munich organization page. \n \n* Repository: GitHub.",
"### Dataset Summary\n\nThis dataset contains th result of the PDF parser done by Tika. It contains for each document, the land parcel it refers to and the content downloaded.",
"## Dataset Structure",
"### Data Fields\n\n- filename: Name of the parsed pdf file. \n- document_id: Unique ID of the document, it is the combination of the land parcel id_number of document from that land parcel. \n- content: Extracted text content. \n- land_parcel_id: Unique ID of the land parcel for the document. \n- land_parcel_name: Name of the land parcel for the document. \n- land_parcel_scanurl: URL for the parsed content.",
"### Source Data\n\nComes from the module document_texts_creation."
] |
[
11,
9,
23,
43,
6,
109,
17
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for document_texts## Dataset Description\n \n* Homepage: DSSGx Munich organization page. \n \n* Repository: GitHub.### Dataset Summary\n\nThis dataset contains th result of the PDF parser done by Tika. It contains for each document, the land parcel it refers to and the content downloaded.## Dataset Structure### Data Fields\n\n- filename: Name of the parsed pdf file. \n- document_id: Unique ID of the document, it is the combination of the land parcel id_number of document from that land parcel. \n- content: Extracted text content. \n- land_parcel_id: Unique ID of the land parcel for the document. \n- land_parcel_name: Name of the land parcel for the document. \n- land_parcel_scanurl: URL for the parsed content.### Source Data\n\nComes from the module document_texts_creation."
] |
d32f2729a64a8d6bba88a0402fcc534bc2bab35f
|
# Dataset Card for "department_college_time_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincenttttt/department_college_time_ForFineTune
|
[
"region:us"
] |
2023-09-20T11:09:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2728137, "num_examples": 7005}], "download_size": 532871, "dataset_size": 2728137}}
|
2023-09-20T11:09:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "department_college_time_ForFineTune"
More Information needed
|
[
"# Dataset Card for \"department_college_time_ForFineTune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"department_college_time_ForFineTune\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"department_college_time_ForFineTune\"\n\nMore Information needed"
] |
07827c9881b1129bae21c70539fb5c58f3f8c6a9
|
# Dataset Card for regional_plan_sections
## Dataset Description
**Homepage:** [DSSGx Munich](https://sites.google.com/view/dssgx-munich-2023/startseite) organization page.
**Repository:** [GitHub](https://github.com/DSSGxMunich/land-sealing-dataset-and-analysis).
### Dataset Summary
This dataset contains the parsed information from the regional plans.
Each row is one section containing goals and objectives from the documents.
For each section, we also record the appearance of relevant keywords regarding flooding.
### Data Fields
- **hq100:** relevant keyword.
- **hqhäufig:** relevant keyword.
- **hqextrem:** relevant keyword.
- **vorranggebiete:** relevant keyword.
- **vorbehaltsgebiete:** relevant keyword.
- **affected_by_flooding:** relevant keyword.
- **innenentwicklung:** relevant keyword.
- **flächensparen:** relevant keyword.
- **filename:** Name of the file that was parsed.
- **chapter:** Name of the chapter.
- **section:** Complete section text, preprocessed.
- **section_type:** Objective, principle or explanation.
- **year:** Year of the document.
- **PLR:** Type of document.
- **Name:** Regional plan name.
### Source Data
Comes from the module rplan_content_extraction.
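A minimal sketch of how the keyword columns might be used is shown below; the `train` split name and the truthiness of the keyword columns are assumptions, so adjust the filter to the actual schema.

```python
# Hedged sketch: keep only the sections whose flood-related keyword columns are set.
from datasets import load_dataset

sections = load_dataset("DSSGxMunich/regional_plan_sections", split="train")  # split assumed

flood_sections = sections.filter(
    lambda row: bool(row["hq100"]) or bool(row["affected_by_flooding"])
)
print(f"{len(flood_sections)} of {len(sections)} sections mention flood-related keywords")
```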
|
DSSGxMunich/regional_plan_sections
|
[
"license:mit",
"region:us"
] |
2023-09-20T11:11:33+00:00
|
{"license": "mit"}
|
2023-10-05T09:15:36+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for regional_plan_sections
## Dataset Description
Homepage: DSSGx Munich organization page.
Repository: GitHub.
### Dataset Summary
This dataset contains the parsed information from the regional plans.
Each row is one section containing goals and objectives from the documents.
For each section, we also record the appearance of relevant keywords regarding flooding.
### Data Fields
- hq100: relevant keyword.
- hqhäufig: relevant keyword.
- hqextrem: relevant keyword.
- vorranggebiete: relevant keyword.
- vorbehaltsgebiete: relevant keyword.
- affected_by_flooding: relevant keyword.
- innenentwicklung: relevant keyword.
- flächensparen: relevant keyword.
- filename: Name of the file that was parsed.
- chapter: Name of the chapter.
- section: Complete section text, preprocessed.
- section_type: Objective, principle or explanation.
- year: Year of the document.
- PLR: Type of document.
- Name: Regional plan name.
### Source Data
Comes from the module rplan_content_extraction.
|
[
"# Dataset Card for regional_plan_sections",
"## Dataset Description\n\nHomepage: DSSGx Munich organization page. \n\nRepository: GitHub.",
"### Dataset Summary\n\nThis dataset contains the parsed information from the regional plans. \nEach row is one section containing goals and objectives from the documents.\nFor each section, we also have the appearance of relevant keywords regarding floodings.",
"### Data Fields\n\n- hq100: relevant keyword. \n- hqhäufig: relevant keyword. \n- hqextrem: relevant keyword. \n- vorranggebiete: relevant keyword. \n- vorbehaltsgebiete: relevant keyword. \n- affected_by_flooding: relevant keyword. \n- innenentwicklung: relevant keyword. \n- flächensparen: relevant keyword. \n- filename: Name of the file that was parsed. \n- chapter: Name of the chapter. \n- section: Complete section text, preprocessed. \n- section_type: Objective, principle or explanation. \n- year: Year of the document.\n- PLR: Type of document. \n- Name: Regional plan name.",
"### Source Data\n\nComes from the module rplan_content_extraction."
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for regional_plan_sections",
"## Dataset Description\n\nHomepage: DSSGx Munich organization page. \n\nRepository: GitHub.",
"### Dataset Summary\n\nThis dataset contains the parsed information from the regional plans. \nEach row is one section containing goals and objectives from the documents.\nFor each section, we also have the appearance of relevant keywords regarding floodings.",
"### Data Fields\n\n- hq100: relevant keyword. \n- hqhäufig: relevant keyword. \n- hqextrem: relevant keyword. \n- vorranggebiete: relevant keyword. \n- vorbehaltsgebiete: relevant keyword. \n- affected_by_flooding: relevant keyword. \n- innenentwicklung: relevant keyword. \n- flächensparen: relevant keyword. \n- filename: Name of the file that was parsed. \n- chapter: Name of the chapter. \n- section: Complete section text, preprocessed. \n- section_type: Objective, principle or explanation. \n- year: Year of the document.\n- PLR: Type of document. \n- Name: Regional plan name.",
"### Source Data\n\nComes from the module rplan_content_extraction."
] |
[
11,
12,
21,
53,
147,
17
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for regional_plan_sections## Dataset Description\n\nHomepage: DSSGx Munich organization page. \n\nRepository: GitHub.### Dataset Summary\n\nThis dataset contains the parsed information from the regional plans. \nEach row is one section containing goals and objectives from the documents.\nFor each section, we also have the appearance of relevant keywords regarding floodings.### Data Fields\n\n- hq100: relevant keyword. \n- hqhäufig: relevant keyword. \n- hqextrem: relevant keyword. \n- vorranggebiete: relevant keyword. \n- vorbehaltsgebiete: relevant keyword. \n- affected_by_flooding: relevant keyword. \n- innenentwicklung: relevant keyword. \n- flächensparen: relevant keyword. \n- filename: Name of the file that was parsed. \n- chapter: Name of the chapter. \n- section: Complete section text, preprocessed. \n- section_type: Objective, principle or explanation. \n- year: Year of the document.\n- PLR: Type of document. \n- Name: Regional plan name.### Source Data\n\nComes from the module rplan_content_extraction."
] |
48b4ef56921474ed6c586563746c5ed727f95656
|
# Dataset Card for "wod8781nuo348jg5wf0832"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minoruskore/wod8781nuo348jg5wf0832
|
[
"region:us"
] |
2023-09-20T11:24:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "mark", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "year", "dtype": "int64"}, {"name": "mileage", "dtype": "int64"}, {"name": "vol_engine", "dtype": "int64"}, {"name": "fuel", "dtype": "string"}, {"name": "price", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6622964, "num_examples": 94585}, {"name": "test", "num_bytes": 1633943, "num_examples": 23342}], "download_size": 2026065, "dataset_size": 8256907}}
|
2023-09-20T11:26:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wod8781nuo348jg5wf0832"
More Information needed
|
[
"# Dataset Card for \"wod8781nuo348jg5wf0832\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wod8781nuo348jg5wf0832\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wod8781nuo348jg5wf0832\"\n\nMore Information needed"
] |
878d729939be866e085000b064f40f7239d85ede
|
# Dataset Card for "Llama2D-Pretrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
supermomo668/Llama2D-Pretrain
|
[
"region:us"
] |
2023-09-20T11:35:52+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "float32"}, {"name": "coords", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "float32"}, {"name": "attention_mask", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 14226320, "num_examples": 395}], "download_size": 834338, "dataset_size": 14226320}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T12:30:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Llama2D-Pretrain"
More Information needed
|
[
"# Dataset Card for \"Llama2D-Pretrain\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Llama2D-Pretrain\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Llama2D-Pretrain\"\n\nMore Information needed"
] |
a65f51aec84f9db7070bcad93d4e1f0641cd6fe5
|
# Dataset Card for "chip2_instruct_alpha_prompt_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/chip2_instruct_alpha_prompt_ru
|
[
"region:us"
] |
2023-09-20T11:41:15+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120371757, "num_examples": 162087}], "download_size": 58859759, "dataset_size": 120371757}}
|
2023-09-20T11:41:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chip2_instruct_alpha_prompt_ru"
More Information needed
|
[
"# Dataset Card for \"chip2_instruct_alpha_prompt_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chip2_instruct_alpha_prompt_ru\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chip2_instruct_alpha_prompt_ru\"\n\nMore Information needed"
] |
e6ed1da1616e2d7f37e00e41249079d39be63abb
|
# Dataset Card for "tiny-ultrachat-uncensored"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
timothyckl/tiny-ultrachat-uncensored
|
[
"region:us"
] |
2023-09-20T11:42:04+00:00
|
{"dataset_info": {"features": [{"name": "data", "sequence": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 880539616, "num_examples": 175245}], "download_size": 453661628, "dataset_size": 880539616}}
|
2023-09-20T12:06:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tiny-ultrachat-uncensored"
More Information needed
|
[
"# Dataset Card for \"tiny-ultrachat-uncensored\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tiny-ultrachat-uncensored\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tiny-ultrachat-uncensored\"\n\nMore Information needed"
] |
49739dba7d282292a0b2ffe1fb2b2bf2ac7256c5
|
# Dataset Card for "primary-sector-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
elenahuang/primary-sector-1k
|
[
"region:us"
] |
2023-09-20T11:44:58+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10742070, "num_examples": 1000}], "download_size": 5771489, "dataset_size": 10742070}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T11:45:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "primary-sector-1k"
More Information needed
|
[
"# Dataset Card for \"primary-sector-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"primary-sector-1k\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"primary-sector-1k\"\n\nMore Information needed"
] |
19f9b004b092ac0e44a81bf72b52d33743884738
|
# Dataset Card for "oasst1_prompt_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/oasst1_prompt_ru
|
[
"region:us"
] |
2023-09-20T11:45:40+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22467539, "num_examples": 10774}], "download_size": 7610348, "dataset_size": 22467539}}
|
2023-09-20T11:45:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oasst1_prompt_ru"
More Information needed
|
[
"# Dataset Card for \"oasst1_prompt_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oasst1_prompt_ru\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oasst1_prompt_ru\"\n\nMore Information needed"
] |
7a66a52e065830535d99d777cecc70c2b064841b
|
# Dataset Card for "llama-2-nuv-intent-big-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-big-oos
|
[
"region:us"
] |
2023-09-20T11:48:04+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 973114, "num_examples": 1803}], "download_size": 150502, "dataset_size": 973114}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T11:48:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-big-oos"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-big-oos\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-big-oos\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-big-oos\"\n\nMore Information needed"
] |
938a38553486f941e3f86a7a2387bd3cd03116b7
|
This is a table dump from Prof. Henrik van Wehrden's famous sustainability wiki. He is a sustainability professor at Leuphana University, Germany, and is passionate about digitalizing his mind. Therefore, the wiki was born.
These wiki pages are focused on sustainability and are highly subjective, reflecting his view of the world.
Link: https://sustainabilitymethods.org/index.php/Main_Page
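A minimal loading sketch is shown below; the `50_QA` configuration and its fields come from this repository's dataset configs, and the `train` split name is an assumption.

```python
# Hedged sketch: load the 50-question QA table derived from the wiki dump.
from datasets import load_dataset

qa = load_dataset("stepkurniawan/sustainability-methods-wiki", "50_QA", split="train")

sample = qa[0]
print(sample["question"])
print(sample["ground_truths"])
```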
|
stepkurniawan/sustainability-methods-wiki
|
[
"license:mit",
"region:us"
] |
2023-09-20T11:48:53+00:00
|
{"license": "mit", "configs": [{"config_name": "50_QA", "data_files": [{"split": "train", "path": "50_QA/train-*"}]}, {"config_name": "50_QA_reviewed", "data_files": [{"split": "train", "path": "50_QA_reviewed/train-*"}]}, {"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": [{"config_name": "50_QA", "features": [{"name": "contexts", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "ground_truths", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78182, "num_examples": 50}], "download_size": 57005, "dataset_size": 78182}, {"config_name": "50_QA_reviewed", "features": [{"name": "contexts", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "ground_truths", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78147, "num_examples": 50}], "download_size": 56945, "dataset_size": 78147}]}
|
2024-01-01T20:40:30+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
This is a table dump from Prof. Henrik van Wehrden's famous sustainability wiki. He is a sustainability professor at Leuphana University, Germany, and is passionate about digitalizing his mind. Therefore, the wiki was born.
These wiki pages are focused on sustainability and are highly subjective, reflecting his view of the world.
Link: URL
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
a50ee3c6c1949f8a976d26b989f043c4f37f7c61
|
# Dataset Card for "dolly_prompt_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/dolly_prompt_ru
|
[
"region:us"
] |
2023-09-20T11:51:07+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23359298, "num_examples": 15950}], "download_size": 0, "dataset_size": 23359298}}
|
2023-09-20T11:51:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dolly_prompt_ru"
More Information needed
|
[
"# Dataset Card for \"dolly_prompt_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dolly_prompt_ru\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dolly_prompt_ru\"\n\nMore Information needed"
] |
223517123358e3f3209d1a8b303e3da6171520d6
|
# Dataset Card for "mlcoban"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/mlcoban
|
[
"region:us"
] |
2023-09-20T12:05:46+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1026527, "num_examples": 50}], "download_size": 465113, "dataset_size": 1026527}}
|
2023-09-20T12:20:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mlcoban"
More Information needed
|
[
"# Dataset Card for \"mlcoban\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mlcoban\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mlcoban\"\n\nMore Information needed"
] |
2d737d04d611d8d74259861c12ba71dc06e9f380
|
# Dataset Card for "gramVaani-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TheAIchemist13/gramVaani-dataset-test
|
[
"region:us"
] |
2023-09-20T12:16:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64498564.656, "num_examples": 1032}], "download_size": 63040623, "dataset_size": 64498564.656}}
|
2023-09-20T12:16:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gramVaani-dataset-test"
More Information needed
|
[
"# Dataset Card for \"gramVaani-dataset-test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gramVaani-dataset-test\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gramVaani-dataset-test\"\n\nMore Information needed"
] |
8987757c7e5417856bb4d2afa59874258c6f0579
|
# Dataset Card for "french_podcasts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/french_podcasts
|
[
"region:us"
] |
2023-09-20T12:18:57+00:00
|
{"dataset_info": {"features": [{"name": "programme_id", "dtype": "string"}, {"name": "programme_entry_date", "dtype": "string"}, {"name": "programme_rss_link", "dtype": "string"}, {"name": "podcast_title", "dtype": "string"}, {"name": "podcast_date", "dtype": "string"}, {"name": "podcast_duration", "dtype": "string"}, {"name": "audio_podcast_link", "dtype": "string"}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7558333, "num_examples": 1401}], "download_size": 3696664, "dataset_size": 7558333}}
|
2023-09-20T12:57:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "french_podcasts"
More Information needed
|
[
"# Dataset Card for \"french_podcasts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"french_podcasts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"french_podcasts\"\n\nMore Information needed"
] |
9e67a911c2eeda01a3d23f34c96734fb416c58f6
|
# Dataset Card for "gramVaani-dataset-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TheAIchemist13/gramVaani-dataset-train
|
[
"region:us"
] |
2023-09-20T12:22:08+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 417332519.528, "num_examples": 37152}], "download_size": 1953825846, "dataset_size": 417332519.528}}
|
2023-09-20T12:26:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gramVaani-dataset-train"
More Information needed
|
[
"# Dataset Card for \"gramVaani-dataset-train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gramVaani-dataset-train\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gramVaani-dataset-train\"\n\nMore Information needed"
] |
c3122c440fd24da236199e2f46407e2c872dee1d
|
# Dataset Card for "primary-sector-100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
elenahuang/primary-sector-100
|
[
"region:us"
] |
2023-09-20T12:23:14+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 894120, "num_examples": 100}], "download_size": 489794, "dataset_size": 894120}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T12:23:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "primary-sector-100"
More Information needed
|
[
"# Dataset Card for \"primary-sector-100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"primary-sector-100\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"primary-sector-100\"\n\nMore Information needed"
] |
55715c02356832e4fab4775ee59e00f70e833653
|
# Dataset Card for "khanhdinhpham"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pphuc25/khanhdinhpham
|
[
"region:us"
] |
2023-09-20T12:25:22+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1137699, "num_examples": 58}], "download_size": 521927, "dataset_size": 1137699}}
|
2023-09-20T12:25:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "khanhdinhpham"
More Information needed
|
[
"# Dataset Card for \"khanhdinhpham\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"khanhdinhpham\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"khanhdinhpham\"\n\nMore Information needed"
] |
37be96057d12d090d7fd4229ff4b34791ef70684
|
# Touch Rugby Rules Dataset
train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf)
All text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.
For educational and non-commercial use only.
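A minimal inspection sketch is shown below; the column lookup and the whitespace-based length proxy are assumptions, since the tokenizer used for the 250-token chunking is not stated here.

```python
# Hedged sketch: load the chunked rule text and eyeball approximate chunk sizes.
from datasets import load_dataset

rules = load_dataset("Trelis/touch-rugby-rules-unsupervised", split="train")  # split assumed

text_column = rules.column_names[0]  # assumed to be the single text column
lengths = [len(row[text_column].split()) for row in rules]
print(f"{len(lengths)} chunks, ~{sum(lengths) / len(lengths):.0f} whitespace tokens per chunk on average")
```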
|
Trelis/touch-rugby-rules-unsupervised
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] |
2023-09-20T12:28:02+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["fine-tuning", "touch rugby"]}
|
2023-09-20T13:39:47+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us
|
# Touch Rugby Rules Dataset
URL is taken from the International Touch Website
All text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.
For educational and non-commercial use only.
|
[
"# Touch Rugby Rules Dataset\n\nURL is taken from the International Touch Website\n\nAll text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.\n\nFor educational and non-commercial use only."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n",
"# Touch Rugby Rules Dataset\n\nURL is taken from the International Touch Website\n\nAll text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.\n\nFor educational and non-commercial use only."
] |
[
39,
49
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n# Touch Rugby Rules Dataset\n\nURL is taken from the International Touch Website\n\nAll text is chunked to a length of 250 tokens, aiming to keep sentences whole where possible.\n\nFor educational and non-commercial use only."
] |
1ab48c92616c046e04052139c9c944ff286c49b8
|
# Dataset Card for "data_test_whisper_large_v2_peft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/data_test_whisper_large_v2_peft
|
[
"region:us"
] |
2023-09-20T12:29:43+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}, {"name": "pred_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174279434.625, "num_examples": 1299}], "download_size": 164189043, "dataset_size": 174279434.625}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T12:29:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_test_whisper_large_v2_peft"
More Information needed
|
[
"# Dataset Card for \"data_test_whisper_large_v2_peft\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_test_whisper_large_v2_peft\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_test_whisper_large_v2_peft\"\n\nMore Information needed"
] |
a2fa6206bf43839903eda1fbfe7bf497e9a2e087
|
# Dataset Card for "chitanka_raw_document"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mor40/chitanka_raw_document
|
[
"region:us"
] |
2023-09-20T12:47:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1830893781, "num_examples": 9910}], "download_size": 892507776, "dataset_size": 1830893781}}
|
2023-09-20T12:51:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chitanka_raw_document"
More Information needed
|
[
"# Dataset Card for \"chitanka_raw_document\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chitanka_raw_document\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chitanka_raw_document\"\n\nMore Information needed"
] |
1137070cfe8a2ce5aeb4328625a92f3eb35ded58
|
# Dataset Card for "ktkDataSet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amktk/ktkDataSet
|
[
"region:us"
] |
2023-09-20T13:24:24+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transctiption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71647032.0, "num_examples": 10}], "download_size": 60508649, "dataset_size": 71647032.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T13:25:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ktkDataSet"
More Information needed
|
[
"# Dataset Card for \"ktkDataSet\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ktkDataSet\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ktkDataSet\"\n\nMore Information needed"
] |
f8f884288190e84c00994347947d0be7fe611b8b
|
# Dataset Card for "headlines_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jtatman/headlines_data
|
[
"region:us"
] |
2023-09-20T14:00:29+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102075981, "num_examples": 2329709}], "download_size": 70905263, "dataset_size": 102075981}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-20T14:00:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "headlines_data"
More Information needed
|
[
"# Dataset Card for \"headlines_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"headlines_data\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"headlines_data\"\n\nMore Information needed"
] |
1e2d0aeedecc949223cb300afc16ed28cad77f19
|
# Dataset Card for "ReimuArmpit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Aotsuyu/ReimuArmpit
|
[
"region:us"
] |
2023-09-20T14:10:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4910736822.712, "num_examples": 1392}], "download_size": 4925159968, "dataset_size": 4910736822.712}}
|
2023-09-20T14:52:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ReimuArmpit"
More Information needed
|
[
"# Dataset Card for \"ReimuArmpit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ReimuArmpit\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ReimuArmpit\"\n\nMore Information needed"
] |
eeb83b9460d83c519aa3a5eda33614856f0acab6
|
# Dataset Card for Synthetic Drilling Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
jonasmaltebecker/synthetic_drilling_dataset
|
[
"task_categories:time-series-forecasting",
"language:en",
"region:us"
] |
2023-09-20T14:12:43+00:00
|
{"language": ["en"], "task_categories": ["time-series-forecasting"]}
|
2023-09-20T15:12:21+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-time-series-forecasting #language-English #region-us
|
# Dataset Card for Synthetic Drilling Dataset
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Synthetic Drilling Dataset",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-time-series-forecasting #language-English #region-us \n",
"# Dataset Card for Synthetic Drilling Dataset",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
25,
13,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-time-series-forecasting #language-English #region-us \n# Dataset Card for Synthetic Drilling Dataset## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b71f618abe63fe56021c50f42ad5bbd983e43b15
|
# Dataset Card for "new_anger"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lonestar108/anger
|
[
"region:us"
] |
2023-09-20T14:32:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validate", "path": "data/validate-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6706, "num_examples": 28}, {"name": "test", "num_bytes": 2899, "num_examples": 10}, {"name": "validate", "num_bytes": 563, "num_examples": 3}], "download_size": 12666, "dataset_size": 10168}}
|
2023-09-20T14:32:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_anger"
More Information needed
|
[
"# Dataset Card for \"new_anger\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_anger\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_anger\"\n\nMore Information needed"
] |
f68140f7772ec9ac3b6120e285162fc9c789a9fe
|
# Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
SUSHMITH/indian_food_images
|
[
"region:us"
] |
2023-09-20T14:39:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chole_bhature", "3": "fried_rice", "4": "idli", "5": "jalebi", "6": "masala_dosa", "7": "momos", "8": "pizza", "9": "samosa"}}}}], "splits": [{"name": "train", "num_bytes": 647581777.8596623, "num_examples": 2688}, {"name": "test", "num_bytes": 118283198.56433766, "num_examples": 475}], "download_size": 833112618, "dataset_size": 765864976.4239999}}
|
2023-09-20T14:41:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "indian_food_images"
More Information needed
|
[
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
] |
face904e8f7738496977a58e28e66b27695701aa
|
# Dataset Card for "new_sadness"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lonestar108/sadness
|
[
"region:us"
] |
2023-09-20T14:39:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validate", "path": "data/validate-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7274, "num_examples": 23}, {"name": "test", "num_bytes": 3112, "num_examples": 9}, {"name": "validate", "num_bytes": 733, "num_examples": 3}], "download_size": 13174, "dataset_size": 11119}}
|
2023-09-20T14:39:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_sadness"
More Information needed
|
[
"# Dataset Card for \"new_sadness\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_sadness\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_sadness\"\n\nMore Information needed"
] |
eedbfb7ab88830ed859b894778e143845f8e6889
|
# Dataset Card for "new_fear"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lonestar108/fear
|
[
"region:us"
] |
2023-09-20T14:43:05+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validate", "path": "data/validate-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6636, "num_examples": 28}, {"name": "test", "num_bytes": 3323, "num_examples": 12}, {"name": "validate", "num_bytes": 560, "num_examples": 3}], "download_size": 12635, "dataset_size": 10519}}
|
2023-09-20T14:43:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_fear"
More Information needed
|
[
"# Dataset Card for \"new_fear\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_fear\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_fear\"\n\nMore Information needed"
] |
e2f8ea3a6cda6adb51de0600d01d4f65d077a54f
|
# Dataset Card for "new_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lonestar108/chat
|
[
"region:us"
] |
2023-09-20T14:45:53+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validate", "path": "data/validate-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13594, "num_examples": 27}, {"name": "test", "num_bytes": 7433, "num_examples": 8}, {"name": "validate", "num_bytes": 942, "num_examples": 3}], "download_size": 29119, "dataset_size": 21969}}
|
2023-09-20T14:45:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "new_chat"
More Information needed
|
[
"# Dataset Card for \"new_chat\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"new_chat\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"new_chat\"\n\nMore Information needed"
] |
be2fa4437be7207dcdcf20439bdb16eb0ceb0d05
|
# Dataset Card for "databricks-dolly-15k_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/databricks_dolly_15k_en
|
[
"region:us"
] |
2023-09-20T14:47:37+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12195589, "num_examples": 15011}], "download_size": 7749182, "dataset_size": 12195589}}
|
2023-09-20T14:47:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "databricks-dolly-15k_en"
More Information needed
|
[
"# Dataset Card for \"databricks-dolly-15k_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"databricks-dolly-15k_en\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"databricks-dolly-15k_en\"\n\nMore Information needed"
] |
189db4def0fc7db0bc02c3d601932baa0172cf41
|
# Dataset Card for "databricks_dolly_15k_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/databricks_dolly_15k_ru
|
[
"region:us"
] |
2023-09-20T14:51:24+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22121608, "num_examples": 14914}], "download_size": 11365356, "dataset_size": 22121608}}
|
2023-09-20T14:51:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "databricks_dolly_15k_ru"
More Information needed
|
[
"# Dataset Card for \"databricks_dolly_15k_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"databricks_dolly_15k_ru\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"databricks_dolly_15k_ru\"\n\nMore Information needed"
] |
732b680128a236c0c8798a657600a2691024c999
|
# Dataset Card for NLUCat
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NLUCat is a dataset of NLU in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is accompanied, in addition, by the instructions received by the annotator who wrote it.
The intents taken into account are the habitual ones of a virtual home assistant (activity calendar, IOT, list management, leisure, etc.), but specific ones have also been added to take into account social and healthcare needs for vulnerable people (information on administrative procedures, menu and medication reminders, etc.).
The spans have been annotated with a tag describing the type of information they contain. They are fine-grained, but can be easily grouped to use them in robust systems.
The examples are not only written in Catalan, but they also take into account the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.)
This dataset can be used to train models for intent classification, spans identification and examples generation.
<b>This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guidelines can be found in [Zenodo](https://zenodo.org/records/10362026).</b>
This work is licensed under a [CC0 International License](https://creativecommons.org/publicdomain/zero/1.0/).
### Supported Tasks and Leaderboards
Intent classification, spans identification and examples generation.
### Languages
The dataset is in Catalan (ca-ES).
## Dataset Structure
### Data Instances
Three JSON files, one for each split.
### Data Fields
* example: `str`. Example
* annotation: `dict`. Annotation of the example
* intent: `str`. Intent tag
* slots: `list`. List of slots
* Tag: `str`. Tag assigned to the slot
* Text: `str`. Text of the slot
* Start_char: `int`. First character of the span
* End_char: `int`. Last character of the span
#### Example
An example looks as follows:
```
{
"example": "Demana una ambulància; la meva dona està de part.",
"annotation": {
"intent": "call_emergency",
"slots": [
{
"Tag": "service",
"Text": "ambulància",
"Start_char": 11,
"End_char": 21
},
{
"Tag": "situation",
"Text": "la meva dona està de part",
"Start_char": 23,
"End_char": 48
}
]
}
},
```
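To make the annotation format concrete, below is a minimal Python sketch (not part of the official release) that walks the record shown above and recovers each span from its `Start_char`/`End_char` offsets. Judging by the offsets in this example, `End_char` behaves as an exclusive index, so a plain slice reproduces the annotated text; treat that as an assumption to verify against the full data.
```
# Hedged sketch: the record below is copied from the example above; nothing here
# comes from the official NLUCat loading code.
record = {
    "example": "Demana una ambulància; la meva dona està de part.",
    "annotation": {
        "intent": "call_emergency",
        "slots": [
            {"Tag": "service", "Text": "ambulància", "Start_char": 11, "End_char": 21},
            {"Tag": "situation", "Text": "la meva dona està de part", "Start_char": 23, "End_char": 48},
        ],
    },
}

text = record["example"]
print(record["annotation"]["intent"])  # call_emergency
for slot in record["annotation"]["slots"]:
    # Assuming End_char is exclusive (consistent with the offsets above),
    # slicing the example string recovers the annotated span.
    span = text[slot["Start_char"]:slot["End_char"]]
    print(slot["Tag"], "->", span, span == slot["Text"])
```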
### Data Splits
* NLUCat.train: 9128 examples
* NLUCat.dev: 1441 examples
* NLUCat.test: 1441 examples
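For quick experimentation, the splits above can presumably be loaded with the Hugging Face `datasets` library. The snippet below is a hedged sketch: it assumes the Hub repository `projecte-aina/NLUCat` exposes these splits directly and that each record keeps the `annotation` structure shown earlier; if the split or field names differ, inspect the loaded object or fall back to reading the JSON files from Zenodo.
```
from collections import Counter
from datasets import load_dataset

# Assumption: the Hub repo id and split layout match this card; adjust if the
# actual configuration differs.
ds = load_dataset("projecte-aina/NLUCat")
print(ds)  # shows which splits and columns are actually available

# Count the most frequent intents in the training split, assuming an
# "annotation" field shaped like the example above.
intent_counts = Counter(ex["annotation"]["intent"] for ex in ds["train"])
print(intent_counts.most_common(5))
```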
### Statistics
| | test | dev | train | Total |
|-|-|-|-|-|
| alarm_query | 14 | 9 | 68 | 91 |
| alarm_remove | 10 | 12 | 68 | 90 |
| alarm_set | 11 | 10 | 69 | 90 |
| app_end | 8 | 9 | 43 | 60 |
| app_launch | 9 | 7 | 47 | 63 |
| audio_volume_down | 15 | 16 | 105 | 136 |
| audio_volume_mute | 8 | 9 | 62 | 79 |
| audio_volume_up | 14 | 16 | 101 | 131 |
| book restaurant | 31 | 27 | 182 | 240 |
| calendar_query | 34 | 38 | 227 | 299 |
| calendar_remove | 31 | 33 | 211 | 275 |
| calendar_set | 50 | 53 | 340 | 443 |
| call_emergency | 14 | 18 | 111 | 143 |
| call_medicalService | 14 | 11 | 70 | 95 |
| call_person | 23 | 18 | 116 | 157 |
| call_service | 6 | 9 | 45 | 60 |
| compare_places | 6 | 7 | 47 | 60 |
| contact_add | 20 | 22 | 138 | 180 |
| contact_query | 16 | 16 | 89 | 121 |
| cooking_query | 13 | 12 | 65 | 90 |
| cooking_recipe | 9 | 10 | 74 | 93 |
| datetime_convert | 14 | 14 | 95 | 123 |
| datetime_query | 18 | 17 | 112 | 147 |
| general_affirm | 6 | 6 | 18 | 30 |
| general_commandstop | 13 | 13 | 75 | 101 |
| general_confirm | 6 | 6 | 48 | 60 |
| general_dontcare | 8 | 6 | 46 | 60 |
| general_explain | 5 | 5 | 7 | 17 |
| general_greet | 13 | 10 | 67 | 90 |
| general_joke | 10 | 11 | 69 | 90 |
| general_negate | 12 | 9 | 69 | 90 |
| general_praise | 15 | 10 | 65 | 90 |
| general_quirky | 15 | 14 | 99 | 128 |
| general_repeat | 11 | 14 | 65 | 90 |
| generat_explain | 8 | 7 | 58 | 73 |
| iot_cleaning | 11 | 9 | 70 | 90 |
| iot_coffee | 10 | 12 | 68 | 90 |
| iot_hue_lightchange | 9 | 12 | 69 | 90 |
| iot_hue_lightdim | 14 | 12 | 64 | 90 |
| iot_hue_lightoff | 10 | 11 | 70 | 91 |
| iot_hue_lighton | 11 | 14 | 66 | 91 |
| iot_hue_lightup | 10 | 9 | 70 | 89 |
| iot_wemo_off | 11 | 13 | 65 | 89 |
| iot_wemo_on | 6 | 8 | 46 | 60 |
| lists_createoradd | 19 | 16 | 115 | 150 |
| lists_query | 15 | 15 | 92 | 122 |
| lists_remove | 14 | 14 | 91 | 119 |
| medReminder_query | 18 | 17 | 108 | 143 |
| medReminder_set | 17 | 17 | 113 | 147 |
| medicalAppointment_query | 20 | 19 | 114 | 153 |
| medicalAppointment_set | 24 | 23 | 165 | 212 |
| menu_query | 15 | 17 | 113 | 145 |
| message_query | 21 | 20 | 140 | 181 |
| message_send | 26 | 24 | 162 | 212 |
| music_dislikeness | 10 | 9 | 69 | 88 |
| music_likeness | 11 | 9 | 71 | 91 |
| music_query | 22 | 23 | 135 | 180 |
| music_settings | 9 | 9 | 63 | 81 |
| news_query | 19 | 22 | 149 | 190 |
| play_audiobook | 12 | 15 | 93 | 120 |
| play_game | 12 | 11 | 67 | 90 |
| play_music | 41 | 45 | 271 | 357 |
| play_podcasts | 20 | 19 | 121 | 160 |
| play_radio | 20 | 20 | 115 | 155 |
| play_video | 15 | 15 | 90 | 120 |
| qa_currency | 12 | 9 | 69 | 90 |
| qa_definition | 19 | 23 | 147 | 189 |
| qa_factoid | 26 | 24 | 143 | 193 |
| qa_maths | 13 | 12 | 95 | 120 |
| qa_medicalService | 20 | 21 | 117 | 158 |
| qa_procedures | 36 | 33 | 220 | 289 |
| qa_service | 16 | 18 | 112 | 146 |
| qa_sports | 9 | 9 | 72 | 90 |
| qa_stock | 13 | 10 | 67 | 90 |
| recommendation_events | 22 | 22 | 143 | 187 |
| recommendation_locations | 23 | 24 | 157 | 204 |
| recommendation_movies | 18 | 23 | 139 | 180 |
| share_currentLocation | 15 | 13 | 92 | 120 |
| social_post | 19 | 20 | 112 | 151 |
| social_query | 14 | 14 | 96 | 124 |
| takeaway_order | 20 | 25 | 135 | 180 |
| takeaway_query | 7 | 9 | 50 | 66 |
| transport_directions | 28 | 24 | 181 | 233 |
| transport_query | 31 | 31 | 185 | 247 |
| transport_taxi | 26 | 22 | 132 | 180 |
| transport_ticket | 25 | 25 | 160 | 210 |
| transport_traffic | 15 | 17 | 88 | 120 |
| weather_query | 31 | 29 | 189 | 249 |
| *Total* | *1440* | *1440* | *9117* | *11997* |
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
When creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.
### Source Data
#### Initial Data Collection and Normalization
We commissioned a company to create fictitious examples for the creation of this dataset.
#### Who are the source language producers?
We commissioned the writing of the examples to the company [m47 labs](https://www.m47labs.com/).
### Annotations
#### Annotation process
The elaboration of this dataset has been done in three steps, taking as a model the process followed by the [NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data) dataset, as explained in the [paper](https://arxiv.org/abs/1903.05566).
* First step: translation or elaboration of the instructions given to the annotators to write the examples.
* Second step: writing the examples. This step also includes the grammatical correction and normalization of the texts.
* Third step: recording the intents and the slots of each example. In this step, some modifications were made to the annotation guides to adjust them to the real situations.
#### Who are the annotators?
The drafting of the examples and their annotation was entrusted to the company [m47 labs](https://www.m47labs.com/) through a public tender process.
### Personal and Sensitive Information
No personal or sensitive information included.
The examples used for the preparation of this dataset are fictitious and, therefore, the information shown is not real.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset will help the development of virtual assistants in Catalan, a language that is often not taken into account, and that it will especially help to improve the quality of life of people with special needs.
### Discussion of Biases
When writing the examples, the annotators were asked to take into account the socio-cultural reality (geographic points, artists and cultural references, etc.) of the Catalan-speaking population.
Likewise, they were asked to be careful to avoid examples that reinforce the stereotypes that exist in this society. For example: be careful with the gender or origin of personal names that are associated with certain activities.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected])
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [CC0 International License](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
[DOI](https://doi.org/10.5281/zenodo.10362026)
### Contributions
The drafting of the examples and their annotation was entrusted to the company [m47 labs](https://www.m47labs.com/) through a public tender process.
|
projecte-aina/NLUCat
|
[
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:text-generation",
"task_ids:intent-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:ca",
"arxiv:1903.05566",
"region:us"
] |
2023-09-20T14:53:38+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ca"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": ["text-classification", "token-classification", "text-generation"], "task_ids": ["intent-classification", "named-entity-recognition", "language-modeling"], "pretty_name": "NLUCat - Natural Language Understanding in Catalan", "tags": []}
|
2023-12-12T11:56:28+00:00
|
[
"1903.05566"
] |
[
"ca"
] |
TAGS
#task_categories-text-classification #task_categories-token-classification #task_categories-text-generation #task_ids-intent-classification #task_ids-named-entity-recognition #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #language-Catalan #arxiv-1903.05566 #region-us
|
Dataset Card for NLUCat
=======================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
NLUCat is a dataset of NLU in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is accompanied, in addition, by the instructions received by the annotator who wrote it.
The intents taken into account are the habitual ones of a virtual home assistant (activity calendar, IOT, list management, leisure, etc.), but specific ones have also been added to take into account social and healthcare needs for vulnerable people (information on administrative procedures, menu and medication reminders, etc.).
The spans have been annotated with a tag describing the type of information they contain. They are fine-grained, but can be easily grouped to use them in robust systems.
The examples are not only written in Catalan, but they also take into account the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.)
This dataset can be used to train models for intent classification, spans identification and examples generation.
**This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guidelines can be found in Zenodo**
This work is licensed under a CC0 International License.
### Supported Tasks and Leaderboards
Intent classification, spans identification and examples generation.
### Languages
The dataset is in Catalan (ca-ES).
Dataset Structure
-----------------
### Data Instances
Three JSON files, one for each split.
### Data Fields
* example: 'str'. Example
* annotation: 'dict'. Annotation of the example
* intent: 'str'. Intent tag
* slots: 'list'. List of slots
* Tag: 'str'. Tag assigned to the slot
* Text: 'str'. Text of the slot
* Start\_char: 'int'. First character of the span
* End\_char: 'int'. Last character of the span
#### Example
An example looks as follows:
### Data Splits
* URL: 9128 examples
* URL: 1441 examples
* URL: 1441 examples
### Statistics
Dataset Creation
----------------
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
When creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.
### Source Data
#### Initial Data Collection and Normalization
We commissioned a company to create fictitious examples for the creation of this dataset.
#### Who are the source language producers?
We commissioned the writing of the examples to the company m47 labs.
### Annotations
#### Annotation process
The elaboration of this dataset has been done in three steps, taking as a model the process followed by the NLU-Evaluation-Data dataset, as explained in the paper.
* First step: translation or elaboration of the instructions given to the annotators to write the examples.
* Second step: writing the examples. This step also includes the grammatical correction and normalization of the texts.
* Third step: recording the intents and the slots of each example. In this step, some modifications were made to the annotation guides to adjust them to the real situations.
#### Who are the annotators?
The drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process.
### Personal and Sensitive Information
No personal or sensitive information included.
The examples used for the preparation of this dataset are fictitious and, therefore, the information shown is not real.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
We hope that this dataset will help the development of virtual assistants in Catalan, a language that is often not taken into account, and that it will especially help to improve the quality of life of people with special needs.
### Discussion of Biases
When writing the examples, the annotators were asked to take into account the socio-cultural reality (geographic points, artists and cultural references, etc.) of the Catalan-speaking population.
Likewise, they were asked to be careful to avoid examples that reinforce the stereotypes that exist in this society. For example: be careful with the gender or origin of personal names that are associated with certain activities.
### Other Known Limitations
[N/A]
Additional Information
----------------------
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
This work is licensed under a CC0 International License
DOI
### Contributions
The drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process.
|
[
"### Dataset Summary\n\n\nNLUCat is a dataset of NLU in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is accompanied, in addition, by the instructions received by the annotator who wrote it.\n\n\nThe intents taken into account are the habitual ones of a virtual home assistant (activity calendar, IOT, list management, leisure, etc.), but specific ones have also been added to take into account social and healthcare needs for vulnerable people (information on administrative procedures, menu and medication reminders, etc.).\n\n\nThe spans have been annotated with a tag describing the type of information they contain. They are fine-grained, but can be easily grouped to use them in robust systems.\n\n\nThe examples are not only written in Catalan, but they also take into account the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.)\n\n\nThis dataset can be used to train models for intent classification, spans identification and examples generation.\n\n\n**This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guideslines can be found in Zenodo**\n\n\nThis work is licensed under a CC0 International License.",
"### Supported Tasks and Leaderboards\n\n\nIntent classification, spans identification and examples generation.",
"### Languages\n\n\nThe dataset is in Catalan (ca-ES).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree JSON files, one for each split.",
"### Data Fields\n\n\n* example: 'str'. Example\n* annotation: 'dict'. Annotation of the example\n* intent: 'str'. Intent tag\n* slots: 'list'. List of slots\n* Tag:'str'. tag to the slot\n* Text:'str'. Text of the slot\n* Start\\_char: 'int'. First character of the span\n* End\\_char: 'int'. Last character of the span",
"#### Example\n\n\nAn example looks as follows:",
"### Data Splits\n\n\n* URL: 9128 examples\n* URL: 1441 examples\n* URL: 1441 examples",
"### Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.\n\n\nWhen creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nWe commissioned a company to create fictitious examples for the creation of this dataset.",
"#### Who are the source language producers?\n\n\nWe commissioned the writing of the examples to the company m47 labs.",
"### Annotations",
"#### Annotation process\n\n\nThe elaboration of this dataset has been done in three steps, taking as a model the process followed by the NLU-Evaluation-Data dataset, as explained in the paper.\n\n\n* First step: translation or elaboration of the instructions given to the annotators to write the examples.\n* Second step: writing the examples. This step also includes the grammatical correction and normalization of the texts.\n* Third step: recording the attempts and the slots of each example. In this step, some modifications were made to the annotation guides to adjust them to the real situations.",
"#### Who are the annotators?\n\n\nThe drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nThe examples used for the preparation of this dataset are fictitious and, therefore, the information shown is not real.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWe hope that this dataset will help the development of virtual assistants in Catalan, a language that is often not taken into account, and that it will especially help to improve the quality of life of people with special needs.",
"### Discussion of Biases\n\n\nWhen writing the examples, the annotators were asked to take into account the socio-cultural reality (geographic points, artists and cultural references, etc.) of the Catalan-speaking population.\nLikewise, they were asked to be careful to avoid examples that reinforce the stereotypes that exist in this society. For example: be careful with the gender or origin of personal names that are associated with certain activities.",
"### Other Known Limitations\n\n\n[N/A]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nLanguage Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n\nThis work is licensed under a CC0 International License\n\n\nDOI",
"### Contributions\n\n\nThe drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process."
] |
[
"TAGS\n#task_categories-text-classification #task_categories-token-classification #task_categories-text-generation #task_ids-intent-classification #task_ids-named-entity-recognition #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #language-Catalan #arxiv-1903.05566 #region-us \n",
"### Dataset Summary\n\n\nNLUCat is a dataset of NLU in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is accompanied, in addition, by the instructions received by the annotator who wrote it.\n\n\nThe intents taken into account are the habitual ones of a virtual home assistant (activity calendar, IOT, list management, leisure, etc.), but specific ones have also been added to take into account social and healthcare needs for vulnerable people (information on administrative procedures, menu and medication reminders, etc.).\n\n\nThe spans have been annotated with a tag describing the type of information they contain. They are fine-grained, but can be easily grouped to use them in robust systems.\n\n\nThe examples are not only written in Catalan, but they also take into account the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.)\n\n\nThis dataset can be used to train models for intent classification, spans identification and examples generation.\n\n\n**This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guideslines can be found in Zenodo**\n\n\nThis work is licensed under a CC0 International License.",
"### Supported Tasks and Leaderboards\n\n\nIntent classification, spans identification and examples generation.",
"### Languages\n\n\nThe dataset is in Catalan (ca-ES).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThree JSON files, one for each split.",
"### Data Fields\n\n\n* example: 'str'. Example\n* annotation: 'dict'. Annotation of the example\n* intent: 'str'. Intent tag\n* slots: 'list'. List of slots\n* Tag:'str'. tag to the slot\n* Text:'str'. Text of the slot\n* Start\\_char: 'int'. First character of the span\n* End\\_char: 'int'. Last character of the span",
"#### Example\n\n\nAn example looks as follows:",
"### Data Splits\n\n\n* URL: 9128 examples\n* URL: 1441 examples\n* URL: 1441 examples",
"### Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.\n\n\nWhen creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nWe commissioned a company to create fictitious examples for the creation of this dataset.",
"#### Who are the source language producers?\n\n\nWe commissioned the writing of the examples to the company m47 labs.",
"### Annotations",
"#### Annotation process\n\n\nThe elaboration of this dataset has been done in three steps, taking as a model the process followed by the NLU-Evaluation-Data dataset, as explained in the paper.\n\n\n* First step: translation or elaboration of the instructions given to the annotators to write the examples.\n* Second step: writing the examples. This step also includes the grammatical correction and normalization of the texts.\n* Third step: recording the attempts and the slots of each example. In this step, some modifications were made to the annotation guides to adjust them to the real situations.",
"#### Who are the annotators?\n\n\nThe drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process.",
"### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nThe examples used for the preparation of this dataset are fictitious and, therefore, the information shown is not real.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWe hope that this dataset will help the development of virtual assistants in Catalan, a language that is often not taken into account, and that it will especially help to improve the quality of life of people with special needs.",
"### Discussion of Biases\n\n\nWhen writing the examples, the annotators were asked to take into account the socio-cultural reality (geographic points, artists and cultural references, etc.) of the Catalan-speaking population.\nLikewise, they were asked to be careful to avoid examples that reinforce the stereotypes that exist in this society. For example: be careful with the gender or origin of personal names that are associated with certain activities.",
"### Other Known Limitations\n\n\n[N/A]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nLanguage Technologies Unit at the Barcelona Supercomputing Center (langtech@URL)\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n\nThis work is licensed under a CC0 International License\n\n\nDOI",
"### Contributions\n\n\nThe drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process."
] |
[
133,
290,
24,
22,
16,
99,
11,
26,
10,
73,
4,
31,
26,
5,
133,
36,
52,
52,
101,
19,
61,
19,
32
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-token-classification #task_categories-text-generation #task_ids-intent-classification #task_ids-named-entity-recognition #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #language-Catalan #arxiv-1903.05566 #region-us \n### Dataset Summary\n\n\nNLUCat is a dataset of NLU in Catalan. It consists of nearly 12,000 instructions annotated with the most relevant intents and spans. Each instruction is accompanied, in addition, by the instructions received by the annotator who wrote it.\n\n\nThe intents taken into account are the habitual ones of a virtual home assistant (activity calendar, IOT, list management, leisure, etc.), but specific ones have also been added to take into account social and healthcare needs for vulnerable people (information on administrative procedures, menu and medication reminders, etc.).\n\n\nThe spans have been annotated with a tag describing the type of information they contain. They are fine-grained, but can be easily grouped to use them in robust systems.\n\n\nThe examples are not only written in Catalan, but they also take into account the geographical and cultural reality of the speakers of this language (geographic points, cultural references, etc.)\n\n\nThis dataset can be used to train models for intent classification, spans identification and examples generation.\n\n\n**This is a simplified version of the dataset for training and evaluating intent classifiers. The full dataset and the annotation guideslines can be found in Zenodo**\n\n\nThis work is licensed under a CC0 International License.### Supported Tasks and Leaderboards\n\n\nIntent classification, spans identification and examples generation.### Languages\n\n\nThe dataset is in Catalan (ca-ES).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThree JSON files, one for each split.",
"passage: ### Data Fields\n\n\n* example: 'str'. Example\n* annotation: 'dict'. Annotation of the example\n* intent: 'str'. Intent tag\n* slots: 'list'. List of slots\n* Tag:'str'. tag to the slot\n* Text:'str'. Text of the slot\n* Start\\_char: 'int'. First character of the span\n* End\\_char: 'int'. Last character of the span#### Example\n\n\nAn example looks as follows:### Data Splits\n\n\n* URL: 9128 examples\n* URL: 1441 examples\n* URL: 1441 examples### Statistics\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nWe created this dataset to contribute to the development of language models in Catalan, a low-resource language.\n\n\nWhen creating this dataset, we took into account not only the language but the entire socio-cultural reality of the Catalan-speaking population. Special consideration was also given to the needs of the vulnerable population.### Source Data#### Initial Data Collection and Normalization\n\n\nWe commissioned a company to create fictitious examples for the creation of this dataset.#### Who are the source language producers?\n\n\nWe commissioned the writing of the examples to the company m47 labs.### Annotations#### Annotation process\n\n\nThe elaboration of this dataset has been done in three steps, taking as a model the process followed by the NLU-Evaluation-Data dataset, as explained in the paper.\n\n\n* First step: translation or elaboration of the instructions given to the annotators to write the examples.\n* Second step: writing the examples. This step also includes the grammatical correction and normalization of the texts.\n* Third step: recording the attempts and the slots of each example. In this step, some modifications were made to the annotation guides to adjust them to the real situations.#### Who are the annotators?\n\n\nThe drafting of the examples and their annotation was entrusted to the company m47 labs through a public tender process.### Personal and Sensitive Information\n\n\nNo personal or sensitive information included.\n\n\nThe examples used for the preparation of this dataset are fictitious and, therefore, the information shown is not real.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nWe hope that this dataset will help the development of virtual assistants in Catalan, a language that is often not taken into account, and that it will especially help to improve the quality of life of people with special needs."
] |
c5459ea18a095f97c4aadc6dc816d41fb4256f5b
|
# Dataset Card for "wiki_text_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yoandrey/wiki_text_embeddings
|
[
"region:us"
] |
2023-09-20T14:54:15+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 87067694446, "num_examples": 35167920}], "download_size": 103338111988, "dataset_size": 87067694446}}
|
2023-09-20T16:29:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wiki_text_embeddings"
More Information needed
|
[
"# Dataset Card for \"wiki_text_embeddings\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wiki_text_embeddings\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wiki_text_embeddings\"\n\nMore Information needed"
] |
279a645fa6685830d0338ef94365893a1d173d54
|
# Dataset Card for "MisaHub_WCE_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Aaryan333/MisaHub_WCE_train_val
|
[
"region:us"
] |
2023-09-20T15:00:54+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bleeding", "1": "non_bleeding"}}}}], "splits": [{"name": "train", "num_bytes": 131095275.4041589, "num_examples": 2094}, {"name": "validation", "num_bytes": 32084848.5118411, "num_examples": 524}], "download_size": 162184262, "dataset_size": 163180123.916}}
|
2023-09-20T15:01:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "MisaHub_WCE_train_val"
More Information needed
|
[
"# Dataset Card for \"MisaHub_WCE_train_val\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"MisaHub_WCE_train_val\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"MisaHub_WCE_train_val\"\n\nMore Information needed"
] |
e877a9cbfa7e4af458ab41a90745da1687a673af
|
# Dataset Card for "vi_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AnhTong/vi_dataset
|
[
"region:us"
] |
2023-09-20T15:11:02+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "astronomy", "num_bytes": 5509853, "num_examples": 1163}, {"name": "cacnuoc", "num_bytes": 1849582, "num_examples": 373}, {"name": "hocvan12", "num_bytes": 3700549, "num_examples": 584}, {"name": "marketing", "num_bytes": 1395360, "num_examples": 304}, {"name": "molympiad", "num_bytes": 11949913, "num_examples": 4488}, {"name": "sinhhocvn", "num_bytes": 1201768, "num_examples": 142}, {"name": "vansudia", "num_bytes": 85849474, "num_examples": 9045}, {"name": "kimca", "num_bytes": 2126678, "num_examples": 902}, {"name": "toidicodedao", "num_bytes": 3045055, "num_examples": 498}], "download_size": 57946392, "dataset_size": 116628232}}
|
2023-09-20T15:50:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "vi_dataset"
More Information needed
|
[
"# Dataset Card for \"vi_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"vi_dataset\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"vi_dataset\"\n\nMore Information needed"
] |
ce0c54cba5573ac46587065f8026b78ebd14d11e
|
# Bangumi Image Base of 4-nin Wa Sorezore Uso O Tsuku
This is the image base of the bangumi 4-nin wa Sorezore Uso o Tsuku. We detected 14 characters and 1462 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 272 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 82 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 11 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 325 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 23 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 12 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 285 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 15 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 12 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 22 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 294 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 11 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 89 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/4ninwasorezoreusootsuku
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-20T15:33:54+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T08:43:45+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of 4-nin Wa Sorezore Uso O Tsuku
===================================================
This is the image base of the bangumi 4-nin wa Sorezore Uso o Tsuku. We detected 14 characters and 1462 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may actually contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
08e686ffc650f8120fbcc240a6b6193f40065e5e
|
# Dataset Card for turkish-nlp-suite/beyazperde-all-movie-reviews
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/beyazPerde.png" width="20%" height="20%">
## Dataset Description
- **Repository:** [BeyazPerde All Movie Reviews](https://github.com/turkish-nlp-suite/BeyazPerde-Movie-Reviews/)
- **Paper:** [ACL link](https://aclanthology.org/2023.acl-long.768/)
- **Dataset:** BeyazPerde All Movie Reviews
- **Domain:** Social Media
### Dataset Summary
Beyazperde Movie Reviews offers Turkish sentiment analysis datasets scraped from the popular movie review website Beyazperde.com. All Movie Reviews includes audience reviews of movies from all time periods. Here's the star rating distribution:
| star rating | count |
|---|---|
| 0.5 | 3.635 |
| 1.0 | 2.325 |
| 1.5 | 1.077 |
| 2.0 | 1.902 |
| 2.5 | 4.767 |
| 3.0 |4.347 |
| 3.5 | 6.495 |
| 4.0 |9.486 |
| 4.5 | 3.652 |
| 5.0 | 7.594 |
| total | 45.280 |
The star rating distribution looks quite balanced. This dataset offers the challenge of understanding sentiment in a fine-grained way, dissecting positive sentiment into "very positive" versus "okayish positive".
### Dataset Instances
An instance of this dataset looks as follows:
```
{
"movie": "Avatar",
"text": "Açıkçası film beklentilerimi karşılayamadı. Tabi her şeyin ilki güzel ama son seride iyi olabilirdi. Filmde görsel olarak her şey güzeldi kendimi filmi izledikten sonra ıslanmış gibi hissettim :D Puan kırdığım noktalar filmin bilim kurgudan fantastiğe doğru kayması. Ardından sır kapısına döndürüp iyilik yapan iyilik bulur moduna girmesi. Çoğu sahnelerin çocuklara hitap etmesi. Neyse serinin üçüncü filmi sağlam olucak gibi...",
"rating": 3.5
}
```
### Data Split
| name |train|validation|test|
|---------|----:|---:|---:|
|BeyazPerde All Movie Reviews|35280|5000|5000|
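The snippet below is a hedged usage sketch: it assumes the dataset loads straight from the Hub with the split names above and that each record carries `text` and `rating` fields as in the instance shown earlier. The coarse polarity buckets are an illustrative choice, not part of the dataset.
```
from datasets import load_dataset

# Assumption: the Hub repo id and split names match this card, and each record
# has "text" and "rating" fields like the instance shown earlier.
ds = load_dataset("turkish-nlp-suite/beyazperde-all-movie-reviews")

def to_polarity(example):
    # Ratings may arrive as floats or comma-decimal strings ("3,5"), so
    # normalise defensively before bucketing the 0.5-5.0 star scale.
    rating = float(str(example["rating"]).replace(",", "."))
    if rating <= 2.0:
        label = "negative"
    elif rating <= 3.5:
        label = "neutral"
    else:
        label = "positive"
    return {"polarity": label}

coarse = ds["train"].map(to_polarity)
print(coarse[0]["polarity"])
```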
### Citation
This work is supported by the Google Developer Experts Program and is part of the Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu" / "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
turkish-nlp-suite/beyazperde-all-movie-reviews
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-09-20T15:36:45+00:00
|
{"language": ["tr"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "BeyazPerde All Movie Reviews"}
|
2023-09-22T15:46:22+00:00
|
[] |
[
"tr"
] |
TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #language-Turkish #license-cc-by-sa-4.0 #region-us
|
Dataset Card for turkish-nlp-suite/beyazperde-all-movie-reviews
===============================================================
<img src="URL width="20%" height="20%">
Dataset Description
-------------------
* Repository: BeyazPerde All Movie Reviews
* Paper: ACL link
* Dataset: BeyazPerde All Movie Reviews
* Domain: Social Media
### Dataset Summary
Beyazperde Movie Reviews offers Turkish sentiment analysis datasets scraped from the popular movie review website URL. All Movie Reviews includes audience reviews of movies from all time periods. Here's the star rating distribution:
The star rating distribution looks quite balanced. This dataset offers the challenge of understanding sentiment in a fine-grained way, dissecting positive sentiment into "very positive" versus "okayish positive".
### Dataset Instances
An instance of this dataset looks as follows:
### Data Split
This work is supported by the Google Developer Experts Program and is part of the Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu" / "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish:
|
[
"### Dataset Summary\n\n\nBeyazperde Movie Reviews offers Turkish sentiment analysis datasets that is scraped from popular movie reviews website URL. All Movie Reviews include audience reviews about movies of all the time. Here's the star rating distribution:\n\n\n\nThe star rating looks quite balanced. This dataset offers the challenge of understanding the sentiment in a refined way, dissecting the positive sentiment into \"very positive\" or \"okayish positive\".",
"### Dataset Instances\n\n\nAn instance of this dataset looks as follows:",
"### Data Split\n\n\n\nThis work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, \"Turkish NLP with Duygu\"/ \"Duygu'yla Türkçe NLP\". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :"
] |
[
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #language-Turkish #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nBeyazperde Movie Reviews offers Turkish sentiment analysis datasets that is scraped from popular movie reviews website URL. All Movie Reviews include audience reviews about movies of all the time. Here's the star rating distribution:\n\n\n\nThe star rating looks quite balanced. This dataset offers the challenge of understanding the sentiment in a refined way, dissecting the positive sentiment into \"very positive\" or \"okayish positive\".",
"### Dataset Instances\n\n\nAn instance of this dataset looks as follows:",
"### Data Split\n\n\n\nThis work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, \"Turkish NLP with Duygu\"/ \"Duygu'yla Türkçe NLP\". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :"
] |
[
65,
95,
18,
88
] |
[
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #language-Turkish #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nBeyazperde Movie Reviews offers Turkish sentiment analysis datasets that is scraped from popular movie reviews website URL. All Movie Reviews include audience reviews about movies of all the time. Here's the star rating distribution:\n\n\n\nThe star rating looks quite balanced. This dataset offers the challenge of understanding the sentiment in a refined way, dissecting the positive sentiment into \"very positive\" or \"okayish positive\".### Dataset Instances\n\n\nAn instance of this dataset looks as follows:### Data Split\n\n\n\nThis work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, \"Turkish NLP with Duygu\"/ \"Duygu'yla Türkçe NLP\". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :"
] |
c19d61bf548b2c7b53921504e852f8765af4c279
|
# Dataset Card for "pubmedsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/pubmedsum
|
[
"region:us"
] |
2023-09-20T15:37:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11428, "num_examples": 1}, {"name": "test", "num_bytes": 4144995, "num_examples": 200}], "download_size": 2086997, "dataset_size": 4156423}}
|
2023-09-20T15:37:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmedsum"
More Information needed
|
[
"# Dataset Card for \"pubmedsum\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmedsum\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmedsum\"\n\nMore Information needed"
] |
06e2e6335092788cc43945597e48dcb4e517e123
|
# Dataset Card for "PLOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/PLOS
|
[
"region:us"
] |
2023-09-20T15:38:06+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11157, "num_examples": 1}, {"name": "test", "num_bytes": 5608617, "num_examples": 200}], "download_size": 3016252, "dataset_size": 5619774}}
|
2023-09-20T15:38:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "PLOS"
More Information needed
|
[
"# Dataset Card for \"PLOS\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"PLOS\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"PLOS\"\n\nMore Information needed"
] |
a05b3329209ead1c3a3ac710841ab8b9b0f0db68
|
# Dataset Card for "CochranePLS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/CochranePLS
|
[
"region:us"
] |
2023-09-20T15:38:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6321, "num_examples": 1}, {"name": "test", "num_bytes": 1279406, "num_examples": 200}], "download_size": 669610, "dataset_size": 1285727}}
|
2023-09-20T15:38:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CochranePLS"
More Information needed
|
[
"# Dataset Card for \"CochranePLS\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CochranePLS\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CochranePLS\"\n\nMore Information needed"
] |
b694cec2f163237d696507ed4cbe4fe64dc194b2
|
# Dataset Card for "m2sum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hippocrates/m2sum
|
[
"region:us"
] |
2023-09-20T15:39:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10278, "num_examples": 1}, {"name": "test", "num_bytes": 4679014, "num_examples": 200}], "download_size": 2359186, "dataset_size": 4689292}}
|
2023-09-20T15:39:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "m2sum"
More Information needed
|
[
"# Dataset Card for \"m2sum\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"m2sum\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"m2sum\"\n\nMore Information needed"
] |
b7adac79d514989b2870562c905032a887828271
|
# Dataset Card for "wolof_speech_transcription"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
serge-wilson/wolof_speech_transcription
|
[
"region:us"
] |
2023-09-20T15:50:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1746401219.7180312, "num_examples": 12599}, {"name": "test", "num_bytes": 309529899.3475478, "num_examples": 2245}], "download_size": 2043272901, "dataset_size": 2055931119.065579}}
|
2023-09-20T15:52:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wolof_speech_transcription"
More Information needed
|
[
"# Dataset Card for \"wolof_speech_transcription\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wolof_speech_transcription\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wolof_speech_transcription\"\n\nMore Information needed"
] |
9527c471e92aa603cd0d3e53dee0bd98d6e53a1e
|
# Dataset Card for "limerick-topic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Yorth/limerick-topic
|
[
"region:us"
] |
2023-09-20T15:53:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "combined", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16886056, "num_examples": 52708}, {"name": "validation", "num_bytes": 2112395, "num_examples": 6588}, {"name": "test", "num_bytes": 2111865, "num_examples": 6589}], "download_size": 10216598, "dataset_size": 21110316}}
|
2023-09-21T19:32:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "limerick-topic"
More Information needed
|
[
"# Dataset Card for \"limerick-topic\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"limerick-topic\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"limerick-topic\"\n\nMore Information needed"
] |
24ba71985e7424afe1262bfe302b022ae1c76ce8
|
# 🚀 Vision-Flan Dataset
vision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.
It is constructed for visual instruction tuning and for building large-scale vision-language models.
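As a quick, hypothetical usage sketch (not part of the original card), the data can be pulled from the Hub with the `datasets` library; split and column names are not stated here, so none are hard-coded:

```python
from datasets import load_dataset

# Hypothetical loading sketch: the repo id is taken from this card, but the
# available splits and columns should be checked on the Hub dataset viewer.
vision_flan = load_dataset("Vision-Flan/vision-flan_191-task_1k")

print(vision_flan)                       # available splits and their sizes
first_split = list(vision_flan.keys())[0]
print(vision_flan[first_split][0])       # one visual instruction-tuning example
```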
## Paper or blog for more information:
https://github.com/VT-NLP/MultiInstruct/
https://vision-flan.github.io/
*Paper coming soon* 😊
## Citation
*Paper coming soon* 😊. If you use Vision-Flan, please use the following citations:
```
@misc{visionFlan2023,
      title = {Vision-Flan: Scaling Visual Instruction Tuning},
url = {https://vision-flan.github.io/},
author = {Zhiyang Xu and Trevor Ashby and Chao Feng and Rulin Shao and Ying Shen and Di Jin and Qifan Wang and Lifu Huang},
month = {Sep},
year = {2023}
}
```
```
@inproceedings{DBLP:conf/acl/XuSH23,
author = {Zhiyang Xu and Ying Shen and Lifu Huang},
editor = {Anna Rogers and Jordan L. Boyd{-}Graber and Naoaki Okazaki},
title = {MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2023, Toronto, Canada, July 9-14, 2023},
pages = {11445--11465},
publisher = {Association for Computational Linguistics},
year = {2023},
url = {https://doi.org/10.18653/v1/2023.acl-long.641},
doi = {10.18653/v1/2023.acl-long.641},
timestamp = {Thu, 10 Aug 2023 12:35:59 +0200},
biburl = {https://dblp.org/rec/conf/acl/XuSH23.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License:
Please carefully check the licenses for all the datasets on this [page](https://vision-flan.github.io/tasks.html) before use.
## Contact:
If you have any questions or concerns please contact us at [email protected] .
|
Vision-Flan/vision-flan_191-task_1k
|
[
"task_categories:visual-question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] |
2023-09-20T15:54:20+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["visual-question-answering"], "pretty_name": "Vision-Flan"}
|
2023-09-21T17:11:37+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-visual-question-answering #size_categories-100K<n<1M #language-English #region-us
|
# Vision-Flan Dataset
vision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.
It is constructed for visual instruction tuning and for building large-scale vision-language models.
## Paper or blog for more information:
URL
URL
*Paper coming soon*
*Paper coming soon* . If you use Vision-Flan, please use the following citations:
## License:
Please carefully check the licenses for all the datasets on this page before use.
## Contact:
If you have any questions or concerns please contact us at zhiyangx@URL .
|
[
"# Vision-Flan Dataset\n\nvision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.\nIt is constructed for visual instruction tuning and for building large-scale vision-language models.",
"## Paper or blog for more information:\n\nURL\n\nURL\n\n*Paper coming soon* \n\n*Paper coming soon* . If you use Vision-Flan, please use the following cites:",
"## License:\n\nPlease carefully check the licenses for all the datasets on this page before use.",
"## Contact:\nIf you have any questions or concerns please contact us at zhiyangx@URL ."
] |
[
"TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-English #region-us \n",
"# Vision-Flan Dataset\n\nvision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.\nIt is constructed for visual instruction tuning and for building large-scale vision-language models.",
"## Paper or blog for more information:\n\nURL\n\nURL\n\n*Paper coming soon* \n\n*Paper coming soon* . If you use Vision-Flan, please use the following cites:",
"## License:\n\nPlease carefully check the licenses for all the datasets on this page before use.",
"## Contact:\nIf you have any questions or concerns please contact us at zhiyangx@URL ."
] |
[
37,
67,
37,
21,
23
] |
[
"passage: TAGS\n#task_categories-visual-question-answering #size_categories-100K<n<1M #language-English #region-us \n# Vision-Flan Dataset\n\nvision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.\nIt is constructed for visual instruction tuning and for building large-scale vision-language models.## Paper or blog for more information:\n\nURL\n\nURL\n\n*Paper coming soon* \n\n*Paper coming soon* . If you use Vision-Flan, please use the following cites:## License:\n\nPlease carefully check the licenses for all the datasets on this page before use.## Contact:\nIf you have any questions or concerns please contact us at zhiyangx@URL ."
] |
b4ac30fe3298bf568264c34395d0fefed61caec1
|
# Dataset Card for "fpb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ic-fspml/fpb
|
[
"region:us"
] |
2023-09-20T16:45:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 556902, "num_examples": 3876}, {"name": "test", "num_bytes": 138843, "num_examples": 970}], "download_size": 416525, "dataset_size": 695745}}
|
2023-09-20T16:45:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fpb"
More Information needed
|
[
"# Dataset Card for \"fpb\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fpb\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fpb\"\n\nMore Information needed"
] |
d5785b930604695b99fd40cfa8e2a7496da18af5
|
# Dataset Card for "fiqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ic-fspml/fiqa
|
[
"region:us"
] |
2023-09-20T16:45:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86998, "num_examples": 938}, {"name": "test", "num_bytes": 18624, "num_examples": 235}], "download_size": 68130, "dataset_size": 105622}}
|
2023-09-20T16:45:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fiqa"
More Information needed
|
[
"# Dataset Card for \"fiqa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fiqa\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fiqa\"\n\nMore Information needed"
] |
ddc9ee9242fa65332597f70e967ecc38b9d734fa
|
# WikiCities Clustering Dataset
This dataset was created from the [Wikipedia](https://huggingface.co/datasets/wikipedia) training dataset by using a list of countries,
retrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing
removed the bottom 25th percentile of countries with the fewest city articles, and capped each country at a maximum of 200 articles.
The final set has a total of 126 countries and 3531 cities.
Below is a distribution of cities by country.

|
jinaai/cities_wiki_clustering
|
[
"language:en",
"region:us"
] |
2023-09-20T17:09:08+00:00
|
{"language": ["en"]}
|
2023-10-27T14:28:11+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #region-us
|
# WikiCities Clustering Dataset
This dataset was created from the Wikipedia training dataset (URL) by using a list of countries,
retrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing
removed the bottom 25th percentile of countries with the fewest city articles, and capped each country at a maximum of 200 articles.
The final set has a total of 126 countries and 3531 cities.
Below is a distribution of cities by country.
!image/jpeg
|
[
"# WikiCities Clustering Dataset\n\nThis dataset was created from the (Wikipedia)[URL training dataset by using a list of countries,\nretrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing\nremoved the last 25th percentile of countries with fewest city articles, and also took a maximum of 200 articles per country.\nThe final set has a total of 126 countries, and a total of 3531 cities. \n\nBelow is a distribution of cities by country.\n\n!image/jpeg"
] |
[
"TAGS\n#language-English #region-us \n",
"# WikiCities Clustering Dataset\n\nThis dataset was created from the (Wikipedia)[URL training dataset by using a list of countries,\nretrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing\nremoved the last 25th percentile of countries with fewest city articles, and also took a maximum of 200 articles per country.\nThe final set has a total of 126 countries, and a total of 3531 cities. \n\nBelow is a distribution of cities by country.\n\n!image/jpeg"
] |
[
10,
115
] |
[
"passage: TAGS\n#language-English #region-us \n# WikiCities Clustering Dataset\n\nThis dataset was created from the (Wikipedia)[URL training dataset by using a list of countries,\nretrieving all cities for each country, and then finding their corresponding Wikipedia article in the Wikipedia dataset. Postprocessing\nremoved the last 25th percentile of countries with fewest city articles, and also took a maximum of 200 articles per country.\nThe final set has a total of 126 countries, and a total of 3531 cities. \n\nBelow is a distribution of cities by country.\n\n!image/jpeg"
] |
c7b70a91e6efd515167efd959ea4987d2691605c
|
# Dataset Card for Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/wahaha1987/llama_7b_sharegpt94k_fastchat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [wahaha1987/llama_7b_sharegpt94k_fastchat](https://huggingface.co/wahaha1987/llama_7b_sharegpt94k_fastchat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-20T18:16:52.904405](https://huggingface.co/datasets/open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat/blob/main/results_2023-09-20T18-16-52.904405.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.08934563758389262,
"em_stderr": 0.0029211449908449474,
"f1": 0.14663171140939493,
"f1_stderr": 0.003084457529543832,
"acc": 0.3748038054707682,
"acc_stderr": 0.009200192405721019
},
"harness|drop|3": {
"em": 0.08934563758389262,
"em_stderr": 0.0029211449908449474,
"f1": 0.14663171140939493,
"f1_stderr": 0.003084457529543832
},
"harness|gsm8k|5": {
"acc": 0.043214556482183475,
"acc_stderr": 0.0056009875152378645
},
"harness|winogrande|5": {
"acc": 0.7063930544593529,
"acc_stderr": 0.012799397296204173
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat
|
[
"region:us"
] |
2023-09-20T17:16:57+00:00
|
{"pretty_name": "Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat", "dataset_summary": "Dataset automatically created during the evaluation run of model [wahaha1987/llama_7b_sharegpt94k_fastchat](https://huggingface.co/wahaha1987/llama_7b_sharegpt94k_fastchat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-20T18:16:52.904405](https://huggingface.co/datasets/open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat/blob/main/results_2023-09-20T18-16-52.904405.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08934563758389262,\n \"em_stderr\": 0.0029211449908449474,\n \"f1\": 0.14663171140939493,\n \"f1_stderr\": 0.003084457529543832,\n \"acc\": 0.3748038054707682,\n \"acc_stderr\": 0.009200192405721019\n },\n \"harness|drop|3\": {\n \"em\": 0.08934563758389262,\n \"em_stderr\": 0.0029211449908449474,\n \"f1\": 0.14663171140939493,\n \"f1_stderr\": 0.003084457529543832\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.043214556482183475,\n \"acc_stderr\": 0.0056009875152378645\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7063930544593529,\n \"acc_stderr\": 0.012799397296204173\n }\n}\n```", "repo_url": "https://huggingface.co/wahaha1987/llama_7b_sharegpt94k_fastchat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_20T18_16_52.904405", "path": ["**/details_harness|drop|3_2023-09-20T18-16-52.904405.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-20T18-16-52.904405.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_20T18_16_52.904405", "path": ["**/details_harness|gsm8k|5_2023-09-20T18-16-52.904405.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-20T18-16-52.904405.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_20T18_16_52.904405", "path": ["**/details_harness|winogrande|5_2023-09-20T18-16-52.904405.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-20T18-16-52.904405.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_20T18_16_52.904405", "path": ["results_2023-09-20T18-16-52.904405.parquet"]}, {"split": "latest", "path": ["results_2023-09-20T18-16-52.904405.parquet"]}]}]}
|
2023-09-20T17:17:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model wahaha1987/llama_7b_sharegpt94k_fastchat on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
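For example (this is the same snippet given in the full card above; the config name selects one of the three evaluated tasks, and "train" always points to the latest run):

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_wahaha1987__llama_7b_sharegpt94k_fastchat",
    "harness_winogrande_5",
    split="train")
```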
## Latest results
These are the latest results from run 2023-09-20T18:16:52.904405 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model wahaha1987/llama_7b_sharegpt94k_fastchat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-20T18:16:52.904405(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model wahaha1987/llama_7b_sharegpt94k_fastchat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-20T18:16:52.904405(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of wahaha1987/llama_7b_sharegpt94k_fastchat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model wahaha1987/llama_7b_sharegpt94k_fastchat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-20T18:16:52.904405(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |