| column        | type   | length   |
|---------------|--------|----------|
| sha           | string | 40       |
| text          | string | 0–13.4M  |
| id            | string | 2–117    |
| tags          | list   |          |
| created_at    | string | 25       |
| metadata      | string | 2–31.7M  |
| last_modified | string | 25       |
40f6a4b1a42c0bcb74aaf1ea3c8bba5ce140b021
# Dataset Card for CSC (Chinese Spelling Correction Dataset)

- **Repository:** https://github.com/shibing624/pycorrector

## Dataset Description

Chinese Spelling Correction (CSC) is the task of detecting and correcting misspelled characters in Chinese text. CSC is challenging because many Chinese characters are visually or phonologically similar yet have quite different meanings.

This Chinese spelling correction dataset contains about 270,000 examples, obtained by merging the original SIGHAN 2013/2014/2015 datasets with the Wang271k dataset. It is in JSON format and includes the positions of the erroneous characters.

### Original Dataset Summary

- test.json and dev.json are the **SIGHAN datasets** (SIGHAN13/14/15), from the [official csc.html page](http://nlp.ee.ncu.edu.tw/resource/csc.html); file size 339 KB, about 4,000 examples.
- train.json is the **Wang271k dataset**, from [Automatic-Corpus-Generation, provided by dimmywang](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml); file size 93 MB, about 270,000 examples.

If you only want the SIGHAN data, you can load it like this:

```python
from datasets import load_dataset

dev_ds = load_dataset('shibing624/CSC', split='validation')
print(dev_ds)
print(dev_ds[0])

test_ds = load_dataset('shibing624/CSC', split='test')
print(test_ds)
print(test_ds[0])
```

### Supported Tasks and Leaderboards

Chinese spelling correction. The dataset is designed for training pretrained language models on the CSC task.

### Languages

The data in CSC are in Chinese.

## Dataset Structure

### Data Instances

An example from "train" looks as follows:

```json
{
  "id": "B2-4029-3",
  "original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
  "wrong_ids": [5, 31],
  "correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
}
```

### Data Fields

- id: unique identifier (carries no meaning)
- original_text: the original text containing errors
- wrong_ids: positions of the erroneous characters, 0-based
- correct_text: the corrected text

### Data Splits

|     | train   | dev    | test  |
|-----|--------:|-------:|------:|
| CSC | 251,835 | 27,981 | 1,100 |

### Licensing Information

The dataset is available under the Apache 2.0 license.
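Since `original_text` and `correct_text` always have the same length, the `wrong_ids` field can be recovered by a character-by-character comparison. A minimal sketch using the sample record from the Data Instances section above; `find_wrong_ids` is an illustrative helper, not part of the dataset tooling:

```python
def find_wrong_ids(original: str, correct: str) -> list[int]:
    """Return the 0-based positions where two equal-length texts differ."""
    assert len(original) == len(correct)
    return [i for i, (o, c) in enumerate(zip(original, correct)) if o != c]

# Sample record from the card: 嗓 (position 5 and 31) should be 噪.
original = "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。"
correct = "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
print(find_wrong_ids(original, correct))  # → [5, 31], matching the record's wrong_ids
```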
### Citation Information

```latex
@misc{Xu_Pycorrector_Text_error,
  title={Pycorrector: Text error correction tool},
  author={Xu Ming},
  year={2021},
  howpublished={\url{https://github.com/shibing624/pycorrector}},
}
```

### Contributions

[shibing624](https://github.com/shibing624) compiled and uploaded this dataset.
shibing624/CSC
[ "task_categories:text-generation", "language:zh", "license:apache-2.0", "text-correction", "region:us" ]
2023-03-28T01:59:33+00:00
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "CSC", "tags": ["text-correction"]}
2023-05-12T06:30:59+00:00
f3c07edd90dbe47045ee8dc9288c90131fcb7454
hou222/coco2023
[ "license:bigscience-openrail-m", "region:us" ]
2023-03-28T02:20:39+00:00
{"license": "bigscience-openrail-m"}
2023-03-28T02:20:39+00:00
abf67bfc5459959b056546b42941b7913e7c84e9
tnewaz/kd
[ "license:unknown", "region:us" ]
2023-03-28T02:20:53+00:00
{"license": "unknown"}
2023-03-28T02:20:53+00:00
9156fc5c22217d9437d69bebf92701540f68240a
# Dataset Card for "gpt2-chitchat-learn-small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wujohns/gpt2-chitchat-learn-small
[ "region:us" ]
2023-03-28T03:13:51+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 3121359, "num_examples": 9800}, {"name": "valid", "num_bytes": 69613, "num_examples": 200}], "download_size": 1765732, "dataset_size": 3190972}}
2023-03-28T03:21:45+00:00
f65b81a79cd6ddeb2d6e96dfb5fd3dd1a2b9dbaa
dothanhduy/junvu
[ "license:openrail", "region:us" ]
2023-03-28T03:36:59+00:00
{"license": "openrail"}
2023-03-28T03:38:36+00:00
2da02792e13b3f8fbe17903db9b90ffb963147d5
Furuhata-du/alpaca-classify-dataset
[ "region:us" ]
2023-03-28T04:35:18+00:00
{}
2023-03-28T04:36:33+00:00
8ac9eb259a49005c1c030d359b6009d631dcb116
myscale/recommendation-examples
[ "license:mit", "region:us" ]
2023-03-28T04:52:55+00:00
{"license": "mit"}
2023-03-28T07:30:37+00:00
7f78e69cad50ad3e75539a96fc80d538631f86ed
# Dataset Card for "face-eye-double-eyelids2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
youngdicey/face-eye-double-eyelids2
[ "region:us" ]
2023-03-28T04:53:09+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 890510.0, "num_examples": 97}], "download_size": 872745, "dataset_size": 890510.0}}
2023-03-28T04:59:08+00:00
503dc184e2664ecdc8c798808538565434922518
# Dataset Card for Wikipedia

## Table of Contents

- [Dataset Card for "wikipedia"](#dataset-card-for-wikipedia)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
    - [20220301.de](#20220301de)
    - [20220301.en](#20220301en)
    - [20220301.fr](#20220301fr)
    - [20220301.frr](#20220301frr)
    - [20220301.it](#20220301it)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Annotations](#annotations)
    - [Annotation process](#annotation-process)
    - [Who are the annotators?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

Wikipedia dataset containing cleaned articles in all languages. The datasets are built from the Wikipedia dumps (https://dumps.wikimedia.org/), with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markup and unwanted sections (references, etc.). The articles are parsed using the `mwparserfromhell` tool.

To load this dataset you need to install Apache Beam and `mwparserfromhell` first:

```
pip install apache_beam mwparserfromhell
```

Then you can load any subset of Wikipedia per language and per date this way:

```python
from datasets import load_dataset

load_dataset("wikipedia", language="sw", date="20220120", beam_runner=...)
```

where you can pass as `beam_runner` any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)); pass "DirectRunner" to run it on your machine. You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).

Some subsets of Wikipedia have already been processed by Hugging Face, and you can load them directly:

```python
from datasets import load_dataset

load_dataset("wikipedia", "20220301.en")
```

The list of pre-processed subsets is:

- "20220301.de"
- "20220301.en"
- "20220301.fr"
- "20220301.frr"
- "20220301.it"
- "20220301.simple"

### Supported Tasks and Leaderboards

The dataset is generally used for language modeling.

### Languages

You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure

### Data Instances

An example looks as follows:

```
{'id': '1',
 'url': 'https://simple.wikipedia.org/wiki/April',
 'title': 'April',
 'text': 'April is the fourth month...'}
```

Some subsets of Wikipedia have already been processed by Hugging Face, as you can see below:

#### 20220301.de

- **Size of downloaded dataset files:** 6523.22 MB
- **Size of the generated dataset:** 8905.28 MB
- **Total amount of disk used:** 15428.50 MB

#### 20220301.en

- **Size of downloaded dataset files:** 20598.31 MB
- **Size of the generated dataset:** 20275.52 MB
- **Total amount of disk used:** 40873.83 MB

#### 20220301.fr

- **Size of downloaded dataset files:** 5602.57 MB
- **Size of the generated dataset:** 7375.92 MB
- **Total amount of disk used:** 12978.49 MB

#### 20220301.frr

- **Size of downloaded dataset files:** 12.44 MB
- **Size of the generated dataset:** 9.13 MB
- **Total amount of disk used:** 21.57 MB

#### 20220301.it

- **Size of downloaded dataset files:** 3516.44 MB
- **Size of the generated dataset:** 4539.94 MB
- **Total amount of disk used:** 8056.39 MB

#### 20220301.simple

- **Size of downloaded dataset files:** 239.68 MB
- **Size of the generated dataset:** 235.07 MB
- **Total amount of disk used:** 474.76 MB

### Data Fields

The data fields are the same among all configurations:

- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
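Each record is therefore a flat dict of four strings. A minimal offline sketch checking that layout against the example instance above (no download required):

```python
# Example record copied from the "Data Instances" section;
# the four field names are identical across all configurations.
record = {
    "id": "1",
    "url": "https://simple.wikipedia.org/wiki/April",
    "title": "April",
    "text": "April is the fourth month...",
}

# Every field is a plain string, as listed under "Data Fields".
assert set(record) == {"id", "url", "title", "text"}
assert all(isinstance(value, str) for value in record.values())
print(record["title"])  # → April
```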
### Data Splits

Here are the numbers of examples for several configurations:

| name            |   train |
|-----------------|--------:|
| 20220301.de     | 2665357 |
| 20220301.en     | 6458670 |
| 20220301.fr     | 2402095 |
| 20220301.frr    |   15199 |
| 20220301.it     | 1743035 |
| 20220301.simple |  205328 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

Most of Wikipedia's text and many of its images are co-licensed under the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License) (CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License) (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Some text has been imported only under CC BY-SA and CC BY-SA-compatible licenses and cannot be reused under the GFDL; such text is identified on the page footer, in the page history, or on the discussion page of the article that uses it.
### Citation Information

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
livinNector/wikipedia
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:original", "language:aa", "language:ab", "language:ace", "language:af", "language:ak", "language:als", "language:am", "language:an", "language:ang", "language:ar", "language:arc", "language:arz", "language:as", "language:ast", "language:atj", "language:av", "language:ay", "language:az", "language:azb", "language:ba", "language:bar", "language:bcl", "language:be", "language:bg", "language:bh", "language:bi", "language:bjn", "language:bm", "language:bn", "language:bo", "language:bpy", "language:br", "language:bs", "language:bug", "language:bxr", "language:ca", "language:cbk", "language:cdo", "language:ce", "language:ceb", "language:ch", "language:cho", "language:chr", "language:chy", "language:ckb", "language:co", "language:cr", "language:crh", "language:cs", "language:csb", "language:cu", "language:cv", "language:cy", "language:da", "language:de", "language:din", "language:diq", "language:dsb", "language:dty", "language:dv", "language:dz", "language:ee", "language:el", "language:eml", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fa", "language:ff", "language:fi", "language:fj", "language:fo", "language:fr", "language:frp", "language:frr", "language:fur", "language:fy", "language:ga", "language:gag", "language:gan", "language:gd", "language:gl", "language:glk", "language:gn", "language:gom", "language:gor", "language:got", "language:gu", "language:gv", "language:ha", "language:hak", "language:haw", "language:he", "language:hi", "language:hif", "language:ho", "language:hr", "language:hsb", "language:ht", "language:hu", 
"language:hy", "language:ia", "language:id", "language:ie", "language:ig", "language:ii", "language:ik", "language:ilo", "language:inh", "language:io", "language:is", "language:it", "language:iu", "language:ja", "language:jam", "language:jbo", "language:jv", "language:ka", "language:kaa", "language:kab", "language:kbd", "language:kbp", "language:kg", "language:ki", "language:kj", "language:kk", "language:kl", "language:km", "language:kn", "language:ko", "language:koi", "language:krc", "language:ks", "language:ksh", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lad", "language:lb", "language:lbe", "language:lez", "language:lfn", "language:lg", "language:li", "language:lij", "language:lmo", "language:ln", "language:lo", "language:lrc", "language:lt", "language:ltg", "language:lv", "language:lzh", "language:mai", "language:mdf", "language:mg", "language:mh", "language:mhr", "language:mi", "language:min", "language:mk", "language:ml", "language:mn", "language:mr", "language:mrj", "language:ms", "language:mt", "language:mus", "language:mwl", "language:my", "language:myv", "language:mzn", "language:na", "language:nah", "language:nan", "language:nap", "language:nds", "language:ne", "language:new", "language:ng", "language:nl", "language:nn", "language:no", "language:nov", "language:nrf", "language:nso", "language:nv", "language:ny", "language:oc", "language:olo", "language:om", "language:or", "language:os", "language:pa", "language:pag", "language:pam", "language:pap", "language:pcd", "language:pdc", "language:pfl", "language:pi", "language:pih", "language:pl", "language:pms", "language:pnb", "language:pnt", "language:ps", "language:pt", "language:qu", "language:rm", "language:rmy", "language:rn", "language:ro", "language:ru", "language:rue", "language:rup", "language:rw", "language:sa", "language:sah", "language:sat", "language:sc", "language:scn", "language:sco", "language:sd", "language:se", "language:sg", "language:sgs", 
"language:sh", "language:si", "language:sk", "language:sl", "language:sm", "language:sn", "language:so", "language:sq", "language:sr", "language:srn", "language:ss", "language:st", "language:stq", "language:su", "language:sv", "language:sw", "language:szl", "language:ta", "language:tcy", "language:tdt", "language:te", "language:tg", "language:th", "language:ti", "language:tk", "language:tl", "language:tn", "language:to", "language:tpi", "language:tr", "language:ts", "language:tt", "language:tum", "language:tw", "language:ty", "language:tyv", "language:udm", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vec", "language:vep", "language:vi", "language:vls", "language:vo", "language:vro", "language:wa", "language:war", "language:wo", "language:wuu", "language:xal", "language:xh", "language:xmf", "language:yi", "language:yo", "language:yue", "language:za", "language:zea", "language:zh", "language:zu", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2023-03-28T05:19:01+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["aa", "ab", "ace", "af", "ak", "als", "am", "an", "ang", "ar", "arc", "arz", "as", "ast", "atj", "av", "ay", "az", "azb", "ba", "bar", "bcl", "be", "bg", "bh", "bi", "bjn", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "cho", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "de", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "ff", "fi", "fj", "fo", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gu", "gv", "ha", "hak", "haw", "he", "hi", "hif", "ho", "hr", "hsb", "ht", "hu", "hy", "ia", "id", "ie", "ig", "ii", "ik", "ilo", "inh", "io", "is", "it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kg", "ki", "kj", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lmo", "ln", "lo", "lrc", "lt", "ltg", "lv", "lzh", "mai", "mdf", "mg", "mh", "mhr", "mi", "min", "mk", "ml", "mn", "mr", "mrj", "ms", "mt", "mus", "mwl", "my", "myv", "mzn", "na", "nah", "nan", "nap", "nds", "ne", "new", "ng", "nl", "nn", "no", "nov", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "sh", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "ta", "tcy", "tdt", "te", "tg", "th", "ti", "tk", "tl", "tn", "to", "tpi", "tr", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zh", "zu"], "license": 
["cc-by-sa-3.0", "gfdl"], "multilinguality": ["multilingual"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Wikipedia", "language_bcp47": ["nds-nl"], "configs": ["20220301.aa", "20220301.ab", "20220301.ace", "20220301.ady", "20220301.af", "20220301.ak", "20220301.als", "20220301.am", "20220301.an", "20220301.ang", "20220301.ar", "20220301.arc", "20220301.arz", "20220301.as", "20220301.ast", "20220301.atj", "20220301.av", "20220301.ay", "20220301.az", "20220301.azb", "20220301.ba", "20220301.bar", "20220301.bat-smg", "20220301.bcl", "20220301.be", "20220301.be-x-old", "20220301.bg", "20220301.bh", "20220301.bi", "20220301.bjn", "20220301.bm", "20220301.bn", "20220301.bo", "20220301.bpy", "20220301.br", "20220301.bs", "20220301.bug", "20220301.bxr", "20220301.ca", "20220301.cbk-zam", "20220301.cdo", "20220301.ce", "20220301.ceb", "20220301.ch", "20220301.cho", "20220301.chr", "20220301.chy", "20220301.ckb", "20220301.co", "20220301.cr", "20220301.crh", "20220301.cs", "20220301.csb", "20220301.cu", "20220301.cv", "20220301.cy", "20220301.da", "20220301.de", "20220301.din", "20220301.diq", "20220301.dsb", "20220301.dty", "20220301.dv", "20220301.dz", "20220301.ee", "20220301.el", "20220301.eml", "20220301.en", "20220301.eo", "20220301.es", "20220301.et", "20220301.eu", "20220301.ext", "20220301.fa", "20220301.ff", "20220301.fi", "20220301.fiu-vro", "20220301.fj", "20220301.fo", "20220301.fr", "20220301.frp", "20220301.frr", "20220301.fur", "20220301.fy", "20220301.ga", "20220301.gag", "20220301.gan", "20220301.gd", "20220301.gl", "20220301.glk", "20220301.gn", "20220301.gom", "20220301.gor", "20220301.got", "20220301.gu", "20220301.gv", "20220301.ha", "20220301.hak", "20220301.haw", "20220301.he", "20220301.hi", "20220301.hif", "20220301.ho", "20220301.hr", "20220301.hsb", 
"20220301.ht", "20220301.hu", "20220301.hy", "20220301.ia", "20220301.id", "20220301.ie", "20220301.ig", "20220301.ii", "20220301.ik", "20220301.ilo", "20220301.inh", "20220301.io", "20220301.is", "20220301.it", "20220301.iu", "20220301.ja", "20220301.jam", "20220301.jbo", "20220301.jv", "20220301.ka", "20220301.kaa", "20220301.kab", "20220301.kbd", "20220301.kbp", "20220301.kg", "20220301.ki", "20220301.kj", "20220301.kk", "20220301.kl", "20220301.km", "20220301.kn", "20220301.ko", "20220301.koi", "20220301.krc", "20220301.ks", "20220301.ksh", "20220301.ku", "20220301.kv", "20220301.kw", "20220301.ky", "20220301.la", "20220301.lad", "20220301.lb", "20220301.lbe", "20220301.lez", "20220301.lfn", "20220301.lg", "20220301.li", "20220301.lij", "20220301.lmo", "20220301.ln", "20220301.lo", "20220301.lrc", "20220301.lt", "20220301.ltg", "20220301.lv", "20220301.mai", "20220301.map-bms", "20220301.mdf", "20220301.mg", "20220301.mh", "20220301.mhr", "20220301.mi", "20220301.min", "20220301.mk", "20220301.ml", "20220301.mn", "20220301.mr", "20220301.mrj", "20220301.ms", "20220301.mt", "20220301.mus", "20220301.mwl", "20220301.my", "20220301.myv", "20220301.mzn", "20220301.na", "20220301.nah", "20220301.nap", "20220301.nds", "20220301.nds-nl", "20220301.ne", "20220301.new", "20220301.ng", "20220301.nl", "20220301.nn", "20220301.no", "20220301.nov", "20220301.nrm", "20220301.nso", "20220301.nv", "20220301.ny", "20220301.oc", "20220301.olo", "20220301.om", "20220301.or", "20220301.os", "20220301.pa", "20220301.pag", "20220301.pam", "20220301.pap", "20220301.pcd", "20220301.pdc", "20220301.pfl", "20220301.pi", "20220301.pih", "20220301.pl", "20220301.pms", "20220301.pnb", "20220301.pnt", "20220301.ps", "20220301.pt", "20220301.qu", "20220301.rm", "20220301.rmy", "20220301.rn", "20220301.ro", "20220301.roa-rup", "20220301.roa-tara", "20220301.ru", "20220301.rue", "20220301.rw", "20220301.sa", "20220301.sah", "20220301.sat", "20220301.sc", "20220301.scn", "20220301.sco", 
"20220301.sd", "20220301.se", "20220301.sg", "20220301.sh", "20220301.si", "20220301.simple", "20220301.sk", "20220301.sl", "20220301.sm", "20220301.sn", "20220301.so", "20220301.sq", "20220301.sr", "20220301.srn", "20220301.ss", "20220301.st", "20220301.stq", "20220301.su", "20220301.sv", "20220301.sw", "20220301.szl", "20220301.ta", "20220301.tcy", "20220301.te", "20220301.tet", "20220301.tg", "20220301.th", "20220301.ti", "20220301.tk", "20220301.tl", "20220301.tn", "20220301.to", "20220301.tpi", "20220301.tr", "20220301.ts", "20220301.tt", "20220301.tum", "20220301.tw", "20220301.ty", "20220301.tyv", "20220301.udm", "20220301.ug", "20220301.uk", "20220301.ur", "20220301.uz", "20220301.ve", "20220301.vec", "20220301.vep", "20220301.vi", "20220301.vls", "20220301.vo", "20220301.wa", "20220301.war", "20220301.wo", "20220301.wuu", "20220301.xal", "20220301.xh", "20220301.xmf", "20220301.yi", "20220301.yo", "20220301.za", "20220301.zea", "20220301.zh", "20220301.zh-classical", "20220301.zh-min-nan", "20220301.zh-yue", "20220301.zu"], "dataset_info": [{"config_name": "20220301.de", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8905282792, "num_examples": 2665357}], "download_size": 6523215105, "dataset_size": 8905282792}, {"config_name": "20220301.en", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20275516160, "num_examples": 6458670}], "download_size": 20598313936, "dataset_size": 20275516160}, {"config_name": "20220301.fr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7375920768, "num_examples": 
2402095}], "download_size": 5602565274, "dataset_size": 7375920768}, {"config_name": "20220301.frr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9129760, "num_examples": 15199}], "download_size": 12438017, "dataset_size": 9129760}, {"config_name": "20220301.it", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4539944448, "num_examples": 1743035}], "download_size": 3516441239, "dataset_size": 4539944448}, {"config_name": "20220301.simple", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 235072360, "num_examples": 205328}], "download_size": 239682796, "dataset_size": 235072360}]}
2023-03-28T05:32:24+00:00
3cdd8237f11477f046293e9934539a1fd36c1053
# Dataset Card for "alpaca-es-autoclean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mserras/alpaca-es-autoclean
[ "region:us" ]
2023-03-28T05:44:45+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "dtype": "null"}, {"name": "prediction_agent", "dtype": "null"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "en_index", "dtype": "int64"}, {"name": "tr-flag-1-instruction", "dtype": "bool"}, {"name": "tr-flag-2-input", "dtype": "bool"}, {"name": "tr-flag-3-output", "dtype": "bool"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 14035334, "num_examples": 746}], "download_size": 10244494, "dataset_size": 14035334}}
2023-04-06T05:47:08+00:00
e179d67478d009f2ccbbe0104591b6726c88b7cb
1024khandsom/data
[ "license:openrail", "region:us" ]
2023-03-28T05:55:24+00:00
{"license": "openrail"}
2023-03-28T05:55:24+00:00
17b4686e7d0db1df69186a099e94e5dfc889c401
Joe02/minamoto_refs
[ "license:other", "region:us" ]
2023-03-28T06:23:04+00:00
{"license": "other"}
2023-03-28T06:23:20+00:00
6522fd336316d0e522de302892a3a926c2d8504d
This ChatDoctor-5K dataset is collected from the paper https://arxiv.org/pdf/2303.14070.pdf. Alternatively, you can download the original dataset from this link: https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing
LinhDuong/chatdoctor-5k
[ "license:apache-2.0", "arxiv:2303.14070", "region:us" ]
2023-03-28T06:23:57+00:00
{"license": "apache-2.0"}
2023-03-28T06:32:21+00:00
50a905b7c7a669947d99c6559fc9d93ce98c00d5
This ChatDoctor-200K dataset is collected from the paper https://arxiv.org/pdf/2303.14070.pdf. Alternatively, you can download the original dataset from this link: https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing
LinhDuong/chatdoctor-200k
[ "license:apache-2.0", "arxiv:2303.14070", "region:us" ]
2023-03-28T06:33:20+00:00
{"license": "apache-2.0"}
2023-03-28T06:58:46+00:00
bdad53d6613ba6c05b9d1fd496d5226d804ca3a3
# Dataset Card for "SQuAD-V1-in-SQuAD-format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TimoImhof/SQuAD-V1-in-SQuAD-format
[ "region:us" ]
2023-03-28T07:16:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "unmodified", "num_bytes": 9570059, "num_examples": 10552}, {"name": "modified_30_percent", "num_bytes": 9577354, "num_examples": 10552}, {"name": "modified_100_percent", "num_bytes": 9594310, "num_examples": 10552}], "download_size": 9334653, "dataset_size": 28741723}}
2023-04-01T12:25:06+00:00
5747a2076518fb1a66d39f2af1fb5011e4be4327
fathyshalab/google-presto
[ "license:cc-by-4.0", "region:us" ]
2023-03-28T07:21:06+00:00
{"license": "cc-by-4.0"}
2023-03-28T08:34:50+00:00
387c7336f7aba97721ff2b3939b6dd11bb7cfd40
# Dataset Card for "energieag-gpt3-synthetic-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/miketake
[ "region:us" ]
2023-03-28T07:31:02+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "intent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36645, "num_examples": 390}], "download_size": 15353, "dataset_size": 36645}}
2023-03-28T10:14:20+00:00
12fec9a89c75d79422584f17f35400d81987cf88
# Dataset Card for "vqa-with-coco-img-3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ryanramos/vqa-with-coco-img-3
[ "region:us" ]
2023-03-28T07:43:57+00:00
{"dataset_info": {"features": [{"name": "license", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "captions", "list": [{"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questions", "list": [{"name": "answer_type", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_type", "dtype": "string"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 889819603.5, "num_examples": 16500}], "download_size": 860459417, "dataset_size": 889819603.5}}
2023-03-28T07:44:21+00:00
7b20721ee8154850637993d8516bd9bdc5195910
kjchoi/news_summ-data
[ "region:us" ]
2023-03-28T07:44:50+00:00
{}
2023-03-28T08:22:04+00:00
13493bbced2afde11adf3f108d92492e5d2e83bf
# Dataset Card for "HotpotQA-in-SQuAD-format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TimoImhof/HotpotQA-in-SQuAD-format
[ "region:us" ]
2023-03-28T07:45:20+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "unmodified", "num_bytes": 7657753, "num_examples": 6113}, {"name": "modified_30_percent", "num_bytes": 7662336, "num_examples": 6113}, {"name": "modified_100_percent", "num_bytes": 7673192, "num_examples": 6113}], "download_size": 12541785, "dataset_size": 22993281}}
2023-04-01T12:50:00+00:00
59bbdfd526efde5e660295896eafdc4c349355b3
# Dataset Card for "TriviaQA-in-SQuAD-format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TimoImhof/TriviaQA-in-SQuAD-format
[ "region:us" ]
2023-03-28T07:48:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "unmodified", "num_bytes": 22886661, "num_examples": 15368}, {"name": "modified_30_percent", "num_bytes": 22899894, "num_examples": 15368}, {"name": "modified_100_percent", "num_bytes": 22929228, "num_examples": 15368}], "download_size": 40760032, "dataset_size": 68715783}}
2023-04-01T12:43:14+00:00
a49b8461b1649f73f713c4679d7def6ece58a2fd
# Dataset Card for "vqa-with-coco-img-0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ryanramos/vqa-with-coco-img-0
[ "region:us" ]
2023-03-28T07:50:38+00:00
{"dataset_info": {"features": [{"name": "license", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "captions", "list": [{"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questions", "list": [{"name": "answer_type", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_type", "dtype": "string"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 879162652.5, "num_examples": 16500}], "download_size": 849463217, "dataset_size": 879162652.5}}
2023-03-28T07:51:20+00:00
1354cac3e6477e937ee98d0cc42c199bab495384
# Dataset Card for "vqa-with-coco-img-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ryanramos/vqa-with-coco-img-2
[ "region:us" ]
2023-03-28T07:51:56+00:00
{"dataset_info": {"features": [{"name": "license", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "captions", "list": [{"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questions", "list": [{"name": "answer_type", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_type", "dtype": "string"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 883496810.5, "num_examples": 16500}], "download_size": 854737087, "dataset_size": 883496810.5}}
2023-03-28T07:52:36+00:00
4981699b1e45744922c54aaffba75adffb47ad26
# Usage

```
from datasets import load_dataset

dataset = load_dataset('TeamSODA/LibriTTS', streaming=True)
```
TeamSODA/LibriTTS
[ "region:us" ]
2023-03-28T08:03:34+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 8027118681.616, "num_examples": 33236}], "download_size": 9205367507, "dataset_size": 8027118681.616}}
2023-03-28T11:31:28+00:00
faf28cff15ed7d3a068e1c5ae6a388e14fb893a8
Temk/test
[ "license:openrail", "region:us" ]
2023-03-28T08:18:07+00:00
{"license": "openrail"}
2023-03-28T08:24:00+00:00
eab176b193ef1988973ff59e97db2ded5eb7997d
WindCity/CityBrain
[ "license:openrail", "region:us" ]
2023-03-28T08:31:15+00:00
{"license": "openrail"}
2023-03-28T08:50:48+00:00
0578a679dcd504a4a7f1c860ae9ca8dc32345a3b
MrHoang/HoiVaTuVancuaBacSy
[ "license:bigscience-openrail-m", "region:us" ]
2023-03-28T08:44:12+00:00
{"license": "bigscience-openrail-m"}
2023-03-28T08:46:39+00:00
68acecd112131982547cfe09d46bc4db3c89b7a6
# Dataset Card for "vqa-with-coco-img-4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ryanramos/vqa-with-coco-img-4
[ "region:us" ]
2023-03-28T08:55:00+00:00
{"dataset_info": {"features": [{"name": "license", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "captions", "list": [{"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questions", "list": [{"name": "answer_type", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_type", "dtype": "string"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 915855341.125, "num_examples": 16783}], "download_size": 885712220, "dataset_size": 915855341.125}}
2023-03-28T08:55:49+00:00
0c309c5d062688a417e15fdae4d038f69c262467
# Dataset Card for "google-presto-german" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/google-presto-german
[ "region:us" ]
2023-03-28T09:09:32+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label_name", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3217962, "num_examples": 41756}, {"name": "test", "num_bytes": 2263704, "num_examples": 29356}, {"name": "validation", "num_bytes": 962391, "num_examples": 12472}], "download_size": 2163028, "dataset_size": 6444057}}
2023-03-29T10:34:40+00:00
07aca27b058adc81cfbdffc26650d25db2ca0699
Oakh/oakh2
[ "license:other", "region:us" ]
2023-03-28T09:22:29+00:00
{"license": "other"}
2023-03-28T09:22:29+00:00
0fee41680f5030d55a5d403a791dd504c6a65aae
# Dataset Card for "raw_parts_of_kbuhist2_v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Riksarkivet/Diachronic_swe
[ "region:us" ]
2023-03-28T09:48:34+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "ID", "dtype": "string"}, {"name": "H1_sv", "dtype": "string"}, {"name": "corpus", "dtype": "string"}, {"name": "H3_corpus_sv", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subtitle", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "meta_year", "dtype": "string"}, {"name": "originDate", "dtype": "string"}, {"name": "retrieveDate", "dtype": "string"}, {"name": "printedDate", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "subgenre", "dtype": "string"}, {"name": "digitisationMethod", "dtype": "string"}, {"name": "annotationMethod", "dtype": "string"}, {"name": "sentenceOrder", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4613839319.867484, "num_examples": 13540111}], "download_size": 889888861, "dataset_size": 4613839319.867484}}
2023-03-28T10:25:30+00:00
9b0dac90d467869184a81108e07cf2603648fd28
# Dataset Card for JuICe (A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation)

## Dataset Description

- **Homepage: [GitHub](https://github.com/rajasagashe/juice)**
- **Paper: [JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation](https://arxiv.org/abs/1910.02216)**

### Dataset Summary

The JuICe dataset was developed to study code generation conditioned on a long context history. For that purpose, the authors collected data from interactive coding environments (ICE) in Jupyter notebooks (JuICE). Since these notebooks contain interleaved code cells and natural-language markdown cells, they are particularly useful for this task.

While the original [dataset](https://github.com/rajasagashe/juice) also contains a corpus of 1.5 million Jupyter notebook examples, this version (redistributed on the Hub for easier access) contains only the curated test set of 3.7K instances based on online programming assignments.

### Supported Tasks and Leaderboards

This dataset can be used for natural-language-to-code generation tasks.

### Languages

Python, English

### Data Instances

```python
from datasets import load_dataset

dataset = load_dataset("koutch/JuICe")
# DatasetDict({
#     validation: Dataset({
#         features: ['question', 'answer', 'notebook'],
#         num_rows: 1831
#     })
#     test: Dataset({
#         features: ['question', 'answer', 'notebook'],
#         num_rows: 2115
#     })
# })
```

### Data Fields

In short, each data row contains a programming `question` and a code `answer` to that question; the answer might require contextualized information from previous cells in the `notebook`.

- `question`: contextualized programming exercise/question to be answered in the last cell of the Jupyter notebook
- `notebook`: the ordered sequence of Jupyter notebook cells which forms the full exercise context
  - `text`: the raw content of the cell
  - `cell_type`: code, markdown, or raw
- `answer`: the code implementation which answers the question

### Data Splits

* validation: the dev split in the original paper
* test: the test split in the original paper

## Additional Information

### Citation Information

If you use the dataset or the code in your research, please cite the following paper:

```
@article{DBLP:journals/corr/abs-1910-02216,
  author     = {Rajas Agashe and Srinivasan Iyer and Luke Zettlemoyer},
  title      = {JuICe: {A} Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation},
  journal    = {CoRR},
  volume     = {abs/1910.02216},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.02216},
  eprinttype = {arXiv},
  eprint     = {1910.02216},
  timestamp  = {Wed, 09 Oct 2019 14:07:58 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-02216.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
koutch/JuICe
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "code", "arxiv:1910.02216", "region:us" ]
2023-03-28T11:01:43+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "juice", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "notebook", "sequence": [{"name": "text", "dtype": "string"}, {"name": "cell_type", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 19578995, "num_examples": 1831}, {"name": "test", "num_bytes": 21651420, "num_examples": 2115}], "download_size": 155457826, "dataset_size": 41230415}, "tags": ["code"]}
2023-03-29T06:34:03+00:00
fdd2d23cc4803bc7e1a09578678ebf83342f88b1
### Dataset

Alpaca and Belle 1M data for training in the [InstructGLM](https://github.com/yuekaizhang/InstructGLM) GitHub project.
yuekai/belle_1M_and_alpaca_cleaned
[ "region:us" ]
2023-03-28T11:06:35+00:00
{}
2023-03-28T12:12:50+00:00
ac34becb1c86d2b018b2b76b66c6ca442f776eb2
# Dataset Card for "boc" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cta2106/boc
[ "region:us" ]
2023-03-28T11:12:13+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "country_iso_2", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "filetype", "dtype": "string"}, {"name": "is_manual", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "time_ingested", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 662649, "num_examples": 224}], "download_size": 0, "dataset_size": 662649}}
2023-03-29T14:37:12+00:00
4a69f2127c9196dff082adff12da3bcf3196e51e
goigoilab/demotjuck
[ "license:unknown", "region:us" ]
2023-03-28T11:16:39+00:00
{"license": "unknown"}
2023-03-28T11:38:17+00:00
40bd5a07cc06db6c5de8765cfebcc00ed70fe077
# Dataset Card for "UA_speech_noisereduced_test-0.9_train-0.1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AravindVadlapudi02/UA_speech_noisereduced_test-0.9_train-0.1
[ "region:us" ]
2023-03-28T11:51:52+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "healthy control", "1": "pathology"}}}}, {"name": "input_features", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 535865256, "num_examples": 558}, {"name": "test", "num_bytes": 4831430292, "num_examples": 5031}], "download_size": 620175139, "dataset_size": 5367295548}}
2023-03-28T11:53:03+00:00
d41246ca09ebc5bd7aad33ed50ffa62d27460b42
tsdocode/vi_alpaca_clean
[ "task_categories:text-generation", "language:vi", "license:cc-by-4.0", "instruction-finetuning", "region:us" ]
2023-03-28T12:12:00+00:00
{"language": ["vi"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "Vietnamese Alpaca", "tags": ["instruction-finetuning"]}
2023-03-28T12:14:52+00:00
1288dba25d21b0752e1678111543cfc6c25a13ee
# Dataset Card for "ada_key_merge_subset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/ada_key_merge_subset
[ "region:us" ]
2023-03-28T12:41:09+00:00
{"dataset_info": {"features": [{"name": "entities", "list": [{"name": "context", "dtype": "string"}, {"name": "end", "dtype": "int64"}, {"name": "score", "dtype": "float32"}, {"name": "start", "dtype": "int64"}, {"name": "tag", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "content", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "new_content", "dtype": "string"}, {"name": "modified", "dtype": "bool"}, {"name": "references", "dtype": "string"}, {"name": "fixed_content", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 54890027, "num_examples": 580}], "download_size": 7819078, "dataset_size": 54890027}}
2023-03-28T12:41:13+00:00
dfb7bc41da7f3c5b03d80b82a47ac17094347eee
# Dataset Card for "petfinder-dogs"

## Dataset Description

- **Homepage:** https://www.petfinder.com/
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.

### Dataset Summary

Contains 700k+ 300px-wide images of 150k+ distinct dogs extracted from the PetFinder API in March 2023. Only subjects with at least 4 photos are present: each subject has between 4 and 12 photos.

This dataset aims to simplify AI work based on dog images and to avoid rescraping thousands of them from the PetFinder API again and again.
drzraf/petfinder-dogs
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "license:unknown", "pets", "dogs", "animals", "photos", "region:us" ]
2023-03-28T12:51:31+00:00
{"annotations_creators": [], "language_creators": ["crowdsourced"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "300px dogs photos from Petfinder", "tags": ["pets", "dogs", "animals", "photos"]}
2023-03-31T17:47:42+00:00
bc2679757ae58afce4794de43be356560baebea2
badokorach/Test_QA
[ "size_categories:n<1K", "language:en", "license:openrail", "region:us" ]
2023-03-28T12:58:43+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "pretty_name": "Brenda"}
2023-03-31T20:07:43+00:00
ab37731f21431a793618cba9dfd9c1f11445a594
# Dataset Card for "SBU_caption" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
guangyil/SBU_caption
[ "region:us" ]
2023-03-28T13:19:53+00:00
{"dataset_info": {"features": [{"name": "bert_token", "sequence": "int64"}, {"name": "gpt2_token", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 287684206.968832, "num_examples": 954428}, {"name": "test", "num_bytes": 288158.040064, "num_examples": 956}], "download_size": 76486474, "dataset_size": 287972365.008896}}
2023-03-28T13:20:43+00:00
43afcb1aa5e2b041c751a4c6a27fc72f5372d26b
# Dataset Card for "librispeech_tiny" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Isma/librispeech_tiny
[ "region:us" ]
2023-03-28T13:25:57+00:00
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "10mn", "num_bytes": 11602736.683065278, "num_examples": 50}, {"name": "1h", "num_bytes": 65903566.59981079, "num_examples": 284}, {"name": "2h", "num_bytes": 133663590.94891201, "num_examples": 576}], "download_size": 202468853, "dataset_size": 211169894.23178807}}
2023-03-28T13:31:24+00:00
06f1388ad8f646aea313f0a34b2ea0787c04c284
NaturalStupidlty/FinBERT-Twitter-BTC
[ "license:apache-2.0", "region:us" ]
2023-03-28T14:10:55+00:00
{"license": "apache-2.0"}
2023-03-28T14:12:34+00:00
c1ae2bada807877856fa13da6c918cf4c4e55c7f
# Dataset Card for "word_aligned_translation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/word_aligned_translation
[ "region:us" ]
2023-03-28T14:16:14+00:00
{"dataset_info": {"features": [{"name": "target_language", "dtype": "string"}, {"name": "source_language", "dtype": "string"}, {"name": "source_words", "sequence": "string"}, {"name": "target_lines", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 22748376, "num_examples": 48560}], "download_size": 15270181, "dataset_size": 22748376}}
2023-03-28T14:16:19+00:00
4da766cc98b0dbded6a6d8b9b0072c1b81c78819
# Dataset Card for "somos-alpaca-es-intro" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dvilasuero/somos-alpaca-es-intro
[ "region:us" ]
2023-03-28T14:16:52+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "1-instruction", "dtype": "string"}, {"name": "2-input", "dtype": "string"}, {"name": "3-output", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "struct": [{"name": "input", "sequence": "float64"}, {"name": "instruction", "sequence": "float64"}, {"name": "output", "sequence": "float64"}]}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 11389691, "num_examples": 606}], "download_size": 0, "dataset_size": 11389691}}
2023-04-09T08:21:06+00:00
9bd6cdf31373f1e86e3c47b254114418bb35dedc
# AutoTrain Dataset for project: sweet-potato-classification

## Dataset Description

This dataset has been automatically processed by AutoTrain for project sweet-potato-classification.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<256x192 RGB PIL image>",
    "target": 0
  },
  {
    "image": "<256x192 RGB PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(names=['Leaf rust', 'Root rot', 'alternaria_sweet_potato_leaf_spot'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 46          |
| valid      | 13          |
bazudde/Sweetpotato_images
[ "task_categories:image-classification", "region:us" ]
2023-03-28T14:39:38+00:00
{"task_categories": ["image-classification"]}
2023-03-28T14:40:52+00:00
bcb97b65353557de050d999830c9ef1c6bc624ae
mistobaan/t0_zsopt
[ "license:apache-2.0", "region:us" ]
2023-03-28T14:40:36+00:00
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32409537993, "num_examples": 42881000}], "download_size": 17798927725, "dataset_size": 32409537993}}
2023-03-28T15:27:44+00:00
cc20d8c2c52e81963cd76f34c91f55b47ff4b8e9
MihaiIonascu/dreadit-train
[ "license:apache-2.0", "region:us" ]
2023-03-28T15:01:33+00:00
{"license": "apache-2.0"}
2023-03-28T15:01:59+00:00
a297b7bb833e3e7d35936665dde05eae98e3ed3f
MihaiIonascu/dreadit-validation
[ "license:apache-2.0", "region:us" ]
2023-03-28T15:02:52+00:00
{"license": "apache-2.0"}
2023-03-28T15:03:03+00:00
4f604cdd780d135dbdfe2b55008b2ad6e63c4d21
# Dataset Card for "processed4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/processed4
[ "region:us" ]
2023-03-28T15:23:26+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 52509660, "num_examples": 58560}], "download_size": 17379477, "dataset_size": 52509660}}
2023-03-28T16:20:57+00:00
bb3a697e95c102fcf62479012f3992393f76af06
# Dataset Card for "disc_cla_plenaria-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sleoruiz/disc_cla_plenaria-2
[ "region:us" ]
2023-03-28T15:34:44+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "comision", "dtype": "string"}, {"name": "fecha_gaceta", "dtype": "string"}, {"name": "gaceta_numero", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "sequence": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "vectors", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 162072571, "num_examples": 42666}], "download_size": 65858974, "dataset_size": 162072571}}
2023-03-28T15:35:35+00:00
9336f2f0e2b5fffb381709b468447bd6a8d08205
mk10/Anna
[ "license:creativeml-openrail-m", "region:us" ]
2023-03-28T15:54:33+00:00
{"license": "creativeml-openrail-m"}
2023-03-28T15:54:33+00:00
798c567f69c8f4b12fc191015e59ee34e9afe00d
This dataset splits the original [CodeAlpaca dataset](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) into train and test splits.
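A minimal sketch of how such a train/test split could be produced. This is an assumption for illustration, not the actual script used for this dataset; the `split_records` helper, the record shapes, and the seed are hypothetical:

```python
import random

def split_records(records, test_fraction=0.1, seed=42):
    """Deterministically shuffle and split a list of records.

    Hypothetical helper: the real split may use different logic,
    fraction, or seed.
    """
    shuffled = records[:]                    # copy to avoid mutating input
    random.Random(seed).shuffle(shuffled)    # seeded -> reproducible
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Toy CodeAlpaca-style records, just to exercise the helper.
records = [{"instruction": f"task {i}"} for i in range(100)]
train, test = split_records(records)
print(len(train), len(test))  # -> 90 10
```

Seeding the shuffle keeps the split reproducible across runs, which matters when the two splits are published as separate dataset splits.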
HuggingFaceH4/CodeAlpaca_20K
[ "task_categories:text-generation", "license:cc", "region:us" ]
2023-03-28T16:18:25+00:00
{"license": "cc", "task_categories": ["text-generation"]}
2023-03-28T16:26:28+00:00
0b5b719a24477994a96cff9d9f403363fb65652b
The data is exactly like the original GSM8k (https://huggingface.co/datasets/gsm8k), but with the label consisting of the correct answer (one number) only.
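In the original GSM8k, each answer string ends with a `#### <number>` line holding the final result. A hedged sketch of how a number-only label like this one could be derived from that convention (an assumption about the preprocessing, not the author's actual script):

```python
def final_answer(gsm8k_answer: str) -> str:
    """Extract the final numeric answer after GSM8k's '####' marker."""
    return gsm8k_answer.split("####")[-1].strip()

# Illustrative GSM8k-style answer string (not a real dataset row).
example = "Natalia sold 48/2 = 24 clips in May.\n48 + 24 = 72.\n#### 72"
print(final_answer(example))  # -> 72
```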
skrishna/gsm8k_only_answer
[ "license:mit", "region:us" ]
2023-03-28T16:24:21+00:00
{"license": "mit"}
2023-03-28T17:09:01+00:00
2f1f0d3b5f4c3662d0ce1c105a029d636a5e33ce
# Dataset Card for "processed5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/processed5
[ "region:us" ]
2023-03-28T16:37:00+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 45720965, "num_examples": 48560}], "download_size": 14638764, "dataset_size": 45720965}}
2023-03-28T16:37:03+00:00
f90b68394bef857638298beb5056f724d613c114
# Dataset Card for "processed_trans" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/processed_trans
[ "region:us" ]
2023-03-28T16:47:30+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 183120649.48606578, "num_examples": 151998}], "download_size": 40922404, "dataset_size": 183120649.48606578}}
2023-03-30T18:00:31+00:00
5342ae26e4d41b78854f58a1cc50114581638962
# Dataset Card for "processed_word" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/processed_word
[ "region:us" ]
2023-03-28T16:47:59+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 118733211.26505354, "num_examples": 48517}], "download_size": 21466771, "dataset_size": 118733211.26505354}}
2023-03-30T18:13:28+00:00
9fae08be3e409bb2fa5af45d0d5fe77cffb86094
# Dataset Card for "zb4xnuMlahk" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
keithhon/zb4xnuMlahk
[ "region:us" ]
2023-03-28T17:29:50+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "youtube_video_id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 401505573.0, "num_examples": 385}], "download_size": 79562986, "dataset_size": 401505573.0}}
2023-03-28T17:29:58+00:00
0be99cef83a992515a3d535507779b8fdbe77ac4
zeke/fuego-20230328-205425-ea1ecc
[ "fuego", "region:us" ]
2023-03-28T17:54:26+00:00
{"tags": ["fuego"], "fuego": {"id": "20230328-205425-ea1ecc", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "zeke/fuego-20230328-205425-ea1ecc", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "54f4572509891883a947411fd7239237dd2a39c3"}}
2023-03-28T18:05:12+00:00
f43218ab2007ae30e4c243f3d4422490199c8d47
# Dataset Card for "seabream-freshness_v0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TheSleepyJo/seabream-freshness_v0
[ "region:us" ]
2023-03-28T18:01:33+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 783931699.0, "num_examples": 16}], "download_size": 51974320, "dataset_size": 783931699.0}}
2023-03-28T18:07:44+00:00
4ee5c971eb6c2f582d251587e9dfb193b9045530
soodoku/archive-news
[ "license:apache-2.0", "region:us" ]
2023-03-28T18:28:26+00:00
{"license": "apache-2.0"}
2023-03-28T18:28:26+00:00
677fbe26adab0f5b07711c29a1adb3ac45c1287b
# Dataset

Validated from:

- https://huggingface.co/spaces/dariolopez/argilla-elena-reddit-c-ssrs-suicide-dataset-es
- https://dariolopez-argilla-elena-reddit-c-ssrs-suic-00dc6af.hf.space
dariolopez/argilla-elena-reddit-c-ssrs-suicide-dataset-es
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:es", "region:us" ]
2023-03-28T20:12:40+00:00
{"language": ["es"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
2023-03-29T15:17:58+00:00
91a2f3a43534ce5095292c1290660a76aa3dbee7
# Dataset Card for "city-council-gpt3-silver-standard-summaries" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alex2awesome/city-council-gpt3-silver-standard-summaries
[ "region:us" ]
2023-03-28T20:16:13+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21349237, "num_examples": 3250}], "download_size": 9490470, "dataset_size": 21349237}}
2023-03-28T20:20:18+00:00
986cd4b3ed266060d64106360d69b9f890e2fd97
# Dataset Card for "logiclm_bookcorpus_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chanind/logiclm_bookcorpus_dataset
[ "region:us" ]
2023-03-28T20:43:21+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 9635097081.0, "num_examples": 72888253}], "download_size": 3280936965, "dataset_size": 9635097081.0}}
2023-04-17T19:11:24+00:00
e23f974bb7c44882c334f4a98a2dadf9b0e57e22
hen8001/cotton_crop_project_data
[ "license:other", "region:us" ]
2023-03-28T20:48:50+00:00
{"license": "other"}
2023-03-28T20:48:50+00:00
a6f9a327ef25c4e0c0757d0140bf31e849230780
# Dataset Card for "tokenized_udtree"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/tokenized_udtree
[ "region:us" ]
2023-03-28T20:50:04+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "cs_0", "num_bytes": 73985244, "num_examples": 102133}, {"name": "cs_1", "num_bytes": 95459594, "num_examples": 102133}, {"name": "cs_2", "num_bytes": 95354064, "num_examples": 102133}, {"name": "cs_3", "num_bytes": 128817619, "num_examples": 102133}, {"name": "cs_4", "num_bytes": 236925044, "num_examples": 102133}, {"name": "cs_5", "num_bytes": 115688159, "num_examples": 102133}, {"name": "cs_6", "num_bytes": 132404489, "num_examples": 102133}, {"name": "tr_0", "num_bytes": 28666902, "num_examples": 60089}, {"name": "tr_1", "num_bytes": 31887742, "num_examples": 60089}, {"name": "tr_2", "num_bytes": 31749302, "num_examples": 60089}, {"name": "tr_3", "num_bytes": 28498032, "num_examples": 60089}, {"name": "tr_4", "num_bytes": 57177672, "num_examples": 60089}, {"name": "tr_5", "num_bytes": 37804587, "num_examples": 60089}, {"name": "tr_6", "num_bytes": 28280762, "num_examples": 60089}, {"name": "ar_0", "num_bytes": 32848442, "num_examples": 21864}, {"name": "ar_1", "num_bytes": 49955197, "num_examples": 21864}, {"name": "ar_2", "num_bytes": 49285292, "num_examples": 21864}, {"name": "ar_3", "num_bytes": 69585617, "num_examples": 21864}, {"name": "ar_4", "num_bytes": 91649737, "num_examples": 21864}, {"name": "ar_5", "num_bytes": 59303592, "num_examples": 21864}, {"name": "ar_6", "num_bytes": 50935047, "num_examples": 21864}, {"name": "de_0", "num_bytes": 112997417, "num_examples": 166849}, {"name": "de_1", "num_bytes": 149332477, "num_examples": 166849}, {"name": "de_2", "num_bytes": 157628127, "num_examples": 166849}, {"name": "de_3", "num_bytes": 155444887, "num_examples": 166849}, {"name": "de_4", "num_bytes": 309419752, "num_examples": 166849}, {"name": "de_5", "num_bytes": 191783977, "num_examples": 166849}, {"name": "de_6", "num_bytes": 138689312, "num_examples": 166849}, {"name": "fr_0", "num_bytes": 27905013, 
"num_examples": 34921}, {"name": "fr_1", "num_bytes": 41237113, "num_examples": 34921}, {"name": "fr_2", "num_bytes": 45655098, "num_examples": 34921}, {"name": "fr_3", "num_bytes": 39973853, "num_examples": 34921}, {"name": "fr_4", "num_bytes": 76420558, "num_examples": 34921}, {"name": "fr_5", "num_bytes": 56197173, "num_examples": 34921}, {"name": "fr_6", "num_bytes": 39938223, "num_examples": 34921}, {"name": "no_0", "num_bytes": 19584526, "num_examples": 33282}, {"name": "no_1", "num_bytes": 25823376, "num_examples": 33282}, {"name": "no_2", "num_bytes": 26954416, "num_examples": 33282}, {"name": "no_3", "num_bytes": 23459636, "num_examples": 33282}, {"name": "no_4", "num_bytes": 43762856, "num_examples": 33282}, {"name": "no_5", "num_bytes": 32578281, "num_examples": 33282}, {"name": "no_6", "num_bytes": 23459636, "num_examples": 33282}, {"name": "pt_0", "num_bytes": 12627085, "num_examples": 30720}, {"name": "pt_1", "num_bytes": 16475005, "num_examples": 30720}, {"name": "pt_2", "num_bytes": 17295815, "num_examples": 30720}, {"name": "pt_3", "num_bytes": 16917200, "num_examples": 30720}, {"name": "pt_4", "num_bytes": 24168495, "num_examples": 30720}, {"name": "pt_5", "num_bytes": 20520155, "num_examples": 30720}, {"name": "pt_6", "num_bytes": 15115165, "num_examples": 30720}, {"name": "es_0", "num_bytes": 27551907, "num_examples": 28474}, {"name": "es_1", "num_bytes": 39391152, "num_examples": 28474}, {"name": "es_2", "num_bytes": 42349787, "num_examples": 28474}, {"name": "es_3", "num_bytes": 43743597, "num_examples": 28474}, {"name": "es_4", "num_bytes": 69878787, "num_examples": 28474}, {"name": "es_5", "num_bytes": 51203677, "num_examples": 28474}, {"name": "es_6", "num_bytes": 46914367, "num_examples": 28474}, {"name": "ru_0", "num_bytes": 57566900, "num_examples": 89525}, {"name": "ru_1", "num_bytes": 74853550, "num_examples": 89525}, {"name": "ru_2", "num_bytes": 76555950, "num_examples": 89525}, {"name": "ru_3", "num_bytes": 67072565, "num_examples": 
89525}, {"name": "ru_4", "num_bytes": 155012405, "num_examples": 89525}, {"name": "ru_5", "num_bytes": 92396515, "num_examples": 89525}, {"name": "ru_6", "num_bytes": 98333345, "num_examples": 89525}, {"name": "en_0", "num_bytes": 14945668, "num_examples": 28686}, {"name": "en_1", "num_bytes": 20836733, "num_examples": 28686}, {"name": "en_2", "num_bytes": 23313373, "num_examples": 28686}, {"name": "en_3", "num_bytes": 21978133, "num_examples": 28686}, {"name": "en_4", "num_bytes": 32732303, "num_examples": 28686}, {"name": "en_5", "num_bytes": 28539183, "num_examples": 28686}, {"name": "en_6", "num_bytes": 28399343, "num_examples": 28686}, {"name": "fi_0", "num_bytes": 14729969, "num_examples": 27198}, {"name": "fi_1", "num_bytes": 17656509, "num_examples": 27198}, {"name": "fi_2", "num_bytes": 16915489, "num_examples": 27198}, {"name": "fi_3", "num_bytes": 18732354, "num_examples": 27198}, {"name": "fi_4", "num_bytes": 29894674, "num_examples": 27198}, {"name": "fi_5", "num_bytes": 20079089, "num_examples": 27198}, {"name": "fi_6", "num_bytes": 18874279, "num_examples": 27198}, {"name": "gd_0", "num_bytes": 2829948, "num_examples": 3541}, {"name": "gd_1", "num_bytes": 3700318, "num_examples": 3541}, {"name": "gd_2", "num_bytes": 3798313, "num_examples": 3541}, {"name": "gd_3", "num_bytes": 3907648, "num_examples": 3541}, {"name": "gd_4", "num_bytes": 5359963, "num_examples": 3541}, {"name": "gd_5", "num_bytes": 4693368, "num_examples": 3541}, {"name": "gd_6", "num_bytes": 3383253, "num_examples": 3541}, {"name": "gv_0", "num_bytes": 456221, "num_examples": 1172}, {"name": "gv_1", "num_bytes": 597391, "num_examples": 1172}, {"name": "gv_2", "num_bytes": 609501, "num_examples": 1172}, {"name": "gv_3", "num_bytes": 542486, "num_examples": 1172}, {"name": "gv_4", "num_bytes": 785231, "num_examples": 1172}, {"name": "gv_5", "num_bytes": 729026, "num_examples": 1172}, {"name": "gv_6", "num_bytes": 542486, "num_examples": 1172}, {"name": "ga_0", "num_bytes": 3928820, 
"num_examples": 4005}, {"name": "ga_1", "num_bytes": 5021230, "num_examples": 4005}, {"name": "ga_2", "num_bytes": 5059580, "num_examples": 4005}, {"name": "ga_3", "num_bytes": 4843745, "num_examples": 4005}, {"name": "ga_4", "num_bytes": 9085760, "num_examples": 4005}, {"name": "ga_5", "num_bytes": 6197075, "num_examples": 4005}, {"name": "ga_6", "num_bytes": 4483365, "num_examples": 4005}, {"name": "cop_0", "num_bytes": 4660032, "num_examples": 1379}, {"name": "cop_1", "num_bytes": 5726842, "num_examples": 1379}, {"name": "cop_2", "num_bytes": 4508942, "num_examples": 1379}, {"name": "cop_3", "num_bytes": 4496787, "num_examples": 1379}, {"name": "cop_4", "num_bytes": 5425137, "num_examples": 1379}, {"name": "cop_5", "num_bytes": 4907442, "num_examples": 1379}, {"name": "cop_6", "num_bytes": 4284382, "num_examples": 1379}, {"name": "it_0", "num_bytes": 17989232, "num_examples": 21724}, {"name": "it_1", "num_bytes": 25839627, "num_examples": 21724}, {"name": "it_2", "num_bytes": 27448052, "num_examples": 21724}, {"name": "it_3", "num_bytes": 24875027, "num_examples": 21724}, {"name": "it_4", "num_bytes": 43731272, "num_examples": 21724}, {"name": "it_5", "num_bytes": 33091747, "num_examples": 21724}, {"name": "it_6", "num_bytes": 30955017, "num_examples": 21724}, {"name": "cy_0", "num_bytes": 907518, "num_examples": 1111}, {"name": "cy_1", "num_bytes": 1180383, "num_examples": 1111}, {"name": "cy_2", "num_bytes": 1192068, "num_examples": 1111}, {"name": "cy_3", "num_bytes": 1123428, "num_examples": 1111}, {"name": "cy_4", "num_bytes": 1834888, "num_examples": 1111}, {"name": "cy_5", "num_bytes": 1439843, "num_examples": 1111}, {"name": "cy_6", "num_bytes": 1055223, "num_examples": 1111}, {"name": "hu_0", "num_bytes": 858340, "num_examples": 910}, {"name": "hu_1", "num_bytes": 1088085, "num_examples": 910}, {"name": "hu_2", "num_bytes": 1086220, "num_examples": 910}, {"name": "hu_3", "num_bytes": 957490, "num_examples": 910}, {"name": "hu_4", "num_bytes": 1964920, 
"num_examples": 910}, {"name": "hu_5", "num_bytes": 1370660, "num_examples": 910}, {"name": "hu_6", "num_bytes": 957490, "num_examples": 910}, {"name": "zh_0", "num_bytes": 9051347, "num_examples": 7994}, {"name": "zh_1", "num_bytes": 12537582, "num_examples": 7994}, {"name": "zh_2", "num_bytes": 11419717, "num_examples": 7994}, {"name": "zh_3", "num_bytes": 10888407, "num_examples": 7994}, {"name": "zh_4", "num_bytes": 10558847, "num_examples": 7994}, {"name": "zh_5", "num_bytes": 13867342, "num_examples": 7994}, {"name": "zh_6", "num_bytes": 10167967, "num_examples": 7994}, {"name": "hy_0", "num_bytes": 5120790, "num_examples": 3200}, {"name": "hy_1", "num_bytes": 5762195, "num_examples": 3200}, {"name": "hy_2", "num_bytes": 4712195, "num_examples": 3200}, {"name": "hy_3", "num_bytes": 4260805, "num_examples": 3200}, {"name": "hy_4", "num_bytes": 8546900, "num_examples": 3200}, {"name": "hy_5", "num_bytes": 5442440, "num_examples": 3200}, {"name": "hy_6", "num_bytes": 4260805, "num_examples": 3200}, {"name": "ro_0", "num_bytes": 6894274, "num_examples": 8043}, {"name": "ro_1", "num_bytes": 9156564, "num_examples": 8043}, {"name": "ro_2", "num_bytes": 9493574, "num_examples": 8043}, {"name": "ro_3", "num_bytes": 10830604, "num_examples": 8043}, {"name": "ro_4", "num_bytes": 20320209, "num_examples": 8043}, {"name": "ro_5", "num_bytes": 11507314, "num_examples": 8043}, {"name": "ro_6", "num_bytes": 8300564, "num_examples": 8043}, {"name": "da_0", "num_bytes": 2963139, "num_examples": 4383}, {"name": "da_1", "num_bytes": 3945104, "num_examples": 4383}, {"name": "da_2", "num_bytes": 4115634, "num_examples": 4383}, {"name": "da_3", "num_bytes": 3583269, "num_examples": 4383}, {"name": "da_4", "num_bytes": 7089004, "num_examples": 4383}, {"name": "da_5", "num_bytes": 4981724, "num_examples": 4383}, {"name": "da_6", "num_bytes": 3583269, "num_examples": 4383}, {"name": "nl_0", "num_bytes": 6741817, "num_examples": 12289}, {"name": "nl_1", "num_bytes": 8989392, 
"num_examples": 12289}, {"name": "nl_2", "num_bytes": 9389757, "num_examples": 12289}, {"name": "nl_3", "num_bytes": 16004832, "num_examples": 12289}, {"name": "nl_4", "num_bytes": 12089687, "num_examples": 12289}, {"name": "nl_5", "num_bytes": 11410547, "num_examples": 12289}, {"name": "nl_6", "num_bytes": 12631912, "num_examples": 12289}], "download_size": 934434422, "dataset_size": 5264208717}}
2023-03-28T20:56:12+00:00
6d9ab0255988ab69b3c8fcc7b5ac547a32926411
# Dataset Card for "tokenized_udtrees_trunc"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
spdenisov/tokenized_udtrees_trunc
[ "region:us" ]
2023-03-28T21:21:32+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "length", "dtype": "int64"}], "splits": [{"name": "fr_0", "num_bytes": 72813504, "num_examples": 34912}, {"name": "fr_1", "num_bytes": 106992505, "num_examples": 34884}, {"name": "fr_2", "num_bytes": 118066880, "num_examples": 34858}, {"name": "fr_3", "num_bytes": 103747628, "num_examples": 34886}, {"name": "fr_4", "num_bytes": 179954204, "num_examples": 33724}, {"name": "fr_5", "num_bytes": 142682805, "num_examples": 34681}, {"name": "fr_6", "num_bytes": 103669700, "num_examples": 34887}, {"name": "ar_0", "num_bytes": 76392970, "num_examples": 21341}, {"name": "ar_1", "num_bytes": 99682724, "num_examples": 20211}, {"name": "ar_2", "num_bytes": 104828728, "num_examples": 20561}, {"name": "ar_3", "num_bytes": 120387755, "num_examples": 18591}, {"name": "ar_4", "num_bytes": 110845444, "num_examples": 15239}, {"name": "ar_5", "num_bytes": 113333216, "num_examples": 19622}, {"name": "ar_6", "num_bytes": 97966198, "num_examples": 20004}, {"name": "nl_0", "num_bytes": 17678650, "num_examples": 12289}, {"name": "nl_1", "num_bytes": 23522345, "num_examples": 12289}, {"name": "nl_2", "num_bytes": 24563294, "num_examples": 12289}, {"name": "nl_3", "num_bytes": 41551823, "num_examples": 12274}, {"name": "nl_4", "num_bytes": 31583112, "num_examples": 12289}, {"name": "nl_5", "num_bytes": 29817348, "num_examples": 12289}, {"name": "nl_6", "num_bytes": 32965583, "num_examples": 12287}, {"name": "de_0", "num_bytes": 295802185, "num_examples": 166848}, {"name": "de_1", "num_bytes": 390229614, "num_examples": 166845}, {"name": "de_2", "num_bytes": 411788885, "num_examples": 166844}, {"name": "de_3", "num_bytes": 406127223, "num_examples": 166845}, {"name": "de_4", "num_bytes": 794559733, "num_examples": 166061}, {"name": "de_5", "num_bytes": 500383319, "num_examples": 166830}, {"name": "de_6", "num_bytes": 
362580545, "num_examples": 166846}, {"name": "ru_0", "num_bytes": 150571543, "num_examples": 89515}, {"name": "ru_1", "num_bytes": 195170653, "num_examples": 89496}, {"name": "ru_2", "num_bytes": 199557398, "num_examples": 89494}, {"name": "ru_3", "num_bytes": 175089824, "num_examples": 89505}, {"name": "ru_4", "num_bytes": 385862504, "num_examples": 88402}, {"name": "ru_5", "num_bytes": 239909307, "num_examples": 89442}, {"name": "ru_6", "num_bytes": 254396827, "num_examples": 89380}, {"name": "pt_0", "num_bytes": 33205205, "num_examples": 30720}, {"name": "pt_1", "num_bytes": 43209797, "num_examples": 30720}, {"name": "pt_2", "num_bytes": 45343903, "num_examples": 30720}, {"name": "pt_3", "num_bytes": 44359504, "num_examples": 30720}, {"name": "pt_4", "num_bytes": 63212871, "num_examples": 30720}, {"name": "pt_5", "num_bytes": 53727187, "num_examples": 30720}, {"name": "pt_6", "num_bytes": 39674213, "num_examples": 30720}, {"name": "ro_0", "num_bytes": 17993349, "num_examples": 8041}, {"name": "ro_1", "num_bytes": 23770442, "num_examples": 8035}, {"name": "ro_2", "num_bytes": 24600913, "num_examples": 8032}, {"name": "ro_3", "num_bytes": 27929669, "num_examples": 8023}, {"name": "ro_4", "num_bytes": 48677219, "num_examples": 7799}, {"name": "ro_5", "num_bytes": 29549023, "num_examples": 8015}, {"name": "ro_6", "num_bytes": 21594484, "num_examples": 8038}, {"name": "hy_0", "num_bytes": 12162343, "num_examples": 3129}, {"name": "hy_1", "num_bytes": 13197354, "num_examples": 3096}, {"name": "hy_2", "num_bytes": 11443297, "num_examples": 3149}, {"name": "hy_3", "num_bytes": 10501791, "num_examples": 3161}, {"name": "hy_4", "num_bytes": 16496323, "num_examples": 2884}, {"name": "hy_5", "num_bytes": 12602551, "num_examples": 3107}, {"name": "hy_6", "num_bytes": 10501791, "num_examples": 3161}, {"name": "en_0", "num_bytes": 39190941, "num_examples": 28685}, {"name": "en_1", "num_bytes": 54446758, "num_examples": 28682}, {"name": "en_2", "num_bytes": 60866411, 
"num_examples": 28681}, {"name": "en_3", "num_bytes": 57413241, "num_examples": 28682}, {"name": "en_4", "num_bytes": 84543655, "num_examples": 28628}, {"name": "en_5", "num_bytes": 73953982, "num_examples": 28648}, {"name": "en_6", "num_bytes": 73215142, "num_examples": 28626}, {"name": "hu_0", "num_bytes": 2242786, "num_examples": 910}, {"name": "hu_1", "num_bytes": 2840123, "num_examples": 910}, {"name": "hu_2", "num_bytes": 2835274, "num_examples": 910}, {"name": "hu_3", "num_bytes": 2500576, "num_examples": 910}, {"name": "hu_4", "num_bytes": 4799115, "num_examples": 889}, {"name": "hu_5", "num_bytes": 3547088, "num_examples": 908}, {"name": "hu_6", "num_bytes": 2500576, "num_examples": 910}, {"name": "tr_0", "num_bytes": 75249383, "num_examples": 60088}, {"name": "tr_1", "num_bytes": 83604892, "num_examples": 60087}, {"name": "tr_2", "num_bytes": 83243895, "num_examples": 60087}, {"name": "tr_3", "num_bytes": 74806746, "num_examples": 60088}, {"name": "tr_4", "num_bytes": 148074211, "num_examples": 60006}, {"name": "tr_5", "num_bytes": 98925962, "num_examples": 60083}, {"name": "tr_6", "num_bytes": 74242806, "num_examples": 60088}, {"name": "it_0", "num_bytes": 46804518, "num_examples": 21711}, {"name": "it_1", "num_bytes": 66265256, "num_examples": 21655}, {"name": "it_2", "num_bytes": 70151753, "num_examples": 21637}, {"name": "it_3", "num_bytes": 63960323, "num_examples": 21667}, {"name": "it_4", "num_bytes": 100412869, "num_examples": 20900}, {"name": "it_5", "num_bytes": 82319403, "num_examples": 21483}, {"name": "it_6", "num_bytes": 77655835, "num_examples": 21535}, {"name": "fi_0", "num_bytes": 38406525, "num_examples": 27185}, {"name": "fi_1", "num_bytes": 45852915, "num_examples": 27178}, {"name": "fi_2", "num_bytes": 43964919, "num_examples": 27179}, {"name": "fi_3", "num_bytes": 48780830, "num_examples": 27184}, {"name": "fi_4", "num_bytes": 76447425, "num_examples": 27109}, {"name": "fi_5", "num_bytes": 51991381, "num_examples": 27170}, {"name": 
"fi_6", "num_bytes": 48559262, "num_examples": 27153}, {"name": "fa_0", "num_bytes": 96243585, "num_examples": 30906}, {"name": "fa_1", "num_bytes": 113502571, "num_examples": 30784}, {"name": "fa_2", "num_bytes": 97058237, "num_examples": 30894}, {"name": "fa_3", "num_bytes": 107038686, "num_examples": 30851}, {"name": "fa_4", "num_bytes": 112125942, "num_examples": 30822}, {"name": "fa_5", "num_bytes": 113077898, "num_examples": 30767}, {"name": "fa_6", "num_bytes": 88091064, "num_examples": 30932}, {"name": "gd_0", "num_bytes": 7335465, "num_examples": 3537}, {"name": "gd_1", "num_bytes": 9467949, "num_examples": 3530}, {"name": "gd_2", "num_bytes": 9689767, "num_examples": 3528}, {"name": "gd_3", "num_bytes": 9926268, "num_examples": 3525}, {"name": "gd_4", "num_bytes": 12713464, "num_examples": 3465}, {"name": "gd_5", "num_bytes": 11546562, "num_examples": 3499}, {"name": "gd_6", "num_bytes": 8709089, "num_examples": 3534}, {"name": "cy_0", "num_bytes": 2373101, "num_examples": 1111}, {"name": "cy_1", "num_bytes": 3082550, "num_examples": 1111}, {"name": "cy_2", "num_bytes": 3112931, "num_examples": 1111}, {"name": "cy_3", "num_bytes": 2934467, "num_examples": 1111}, {"name": "cy_4", "num_bytes": 4784263, "num_examples": 1111}, {"name": "cy_5", "num_bytes": 3757146, "num_examples": 1111}, {"name": "cy_6", "num_bytes": 2757134, "num_examples": 1111}, {"name": "cs_0", "num_bytes": 193204789, "num_examples": 102111}, {"name": "cs_1", "num_bytes": 248532815, "num_examples": 102085}, {"name": "cs_2", "num_bytes": 248265366, "num_examples": 102085}, {"name": "cs_3", "num_bytes": 332530755, "num_examples": 101916}, {"name": "cs_4", "num_bytes": 537663964, "num_examples": 97317}, {"name": "cs_5", "num_bytes": 299610164, "num_examples": 101990}, {"name": "cs_6", "num_bytes": 339589731, "num_examples": 101777}, {"name": "es_0", "num_bytes": 71968866, "num_examples": 28473}, {"name": "es_1", "num_bytes": 102260411, "num_examples": 28443}, {"name": "es_2", "num_bytes": 
109651662, "num_examples": 28424}, {"name": "es_3", "num_bytes": 112979119, "num_examples": 28404}, {"name": "es_4", "num_bytes": 163186080, "num_examples": 27271}, {"name": "es_5", "num_bytes": 130959590, "num_examples": 28317}, {"name": "es_6", "num_bytes": 119790214, "num_examples": 28310}, {"name": "zh_0", "num_bytes": 23617606, "num_examples": 7993}, {"name": "zh_1", "num_bytes": 32483372, "num_examples": 7980}, {"name": "zh_2", "num_bytes": 29697463, "num_examples": 7988}, {"name": "zh_3", "num_bytes": 28332743, "num_examples": 7989}, {"name": "zh_4", "num_bytes": 27491845, "num_examples": 7990}, {"name": "zh_5", "num_bytes": 35551944, "num_examples": 7954}, {"name": "zh_6", "num_bytes": 26490384, "num_examples": 7991}, {"name": "no_0", "num_bytes": 51325808, "num_examples": 33282}, {"name": "no_1", "num_bytes": 67531367, "num_examples": 33281}, {"name": "no_2", "num_bytes": 70471135, "num_examples": 33281}, {"name": "no_3", "num_bytes": 61386787, "num_examples": 33281}, {"name": "no_4", "num_bytes": 113337815, "num_examples": 33227}, {"name": "no_5", "num_bytes": 84988095, "num_examples": 33274}, {"name": "no_6", "num_bytes": 61386787, "num_examples": 33281}, {"name": "ga_0", "num_bytes": 10164126, "num_examples": 4000}, {"name": "ga_1", "num_bytes": 12904387, "num_examples": 3995}, {"name": "ga_2", "num_bytes": 13000600, "num_examples": 3995}, {"name": "ga_3", "num_bytes": 12458429, "num_examples": 3996}, {"name": "ga_4", "num_bytes": 22263032, "num_examples": 3924}, {"name": "ga_5", "num_bytes": 15711892, "num_examples": 3980}, {"name": "ga_6", "num_bytes": 11531217, "num_examples": 3996}, {"name": "da_0", "num_bytes": 7757634, "num_examples": 4383}, {"name": "da_1", "num_bytes": 10310743, "num_examples": 4383}, {"name": "da_2", "num_bytes": 10754121, "num_examples": 4383}, {"name": "da_3", "num_bytes": 9369972, "num_examples": 4383}, {"name": "da_4", "num_bytes": 17982417, "num_examples": 4351}, {"name": "da_5", "num_bytes": 12936123, "num_examples": 
4378}, {"name": "da_6", "num_bytes": 9369972, "num_examples": 4383}, {"name": "cop_0", "num_bytes": 7622435, "num_examples": 1122}, {"name": "cop_1", "num_bytes": 7185677, "num_examples": 972}, {"name": "cop_2", "num_bytes": 7618669, "num_examples": 1143}, {"name": "cop_3", "num_bytes": 7622440, "num_examples": 1145}, {"name": "cop_4", "num_bytes": 7298153, "num_examples": 1011}, {"name": "cop_5", "num_bytes": 7482224, "num_examples": 1084}, {"name": "cop_6", "num_bytes": 7630235, "num_examples": 1174}, {"name": "gv_0", "num_bytes": 1200473, "num_examples": 1172}, {"name": "gv_1", "num_bytes": 1567515, "num_examples": 1172}, {"name": "gv_2", "num_bytes": 1599001, "num_examples": 1172}, {"name": "gv_3", "num_bytes": 1424762, "num_examples": 1172}, {"name": "gv_4", "num_bytes": 2042489, "num_examples": 1171}, {"name": "gv_5", "num_bytes": 1881763, "num_examples": 1170}, {"name": "gv_6", "num_bytes": 1424762, "num_examples": 1172}], "download_size": 1339506450, "dataset_size": 13867176061}}
2023-03-30T22:05:12+00:00
939a5b7acaaa6b462d7e05b3ba89fc3a13e02728
# Dataset Card for "sidewalk-imagery"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ateebak/sidewalk-imagery
[ "region:us" ]
2023-03-28T21:48:11+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138225.0, "num_examples": 10}], "download_size": 3139735, "dataset_size": 3138225.0}}
2023-03-28T21:48:16+00:00
28e70aa35c91c539893eb39b9c23c5b56a0c27bb
# Dataset Validated from https://huggingface.co/spaces/dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es

https://dariolopez-argilla-reddit-c-ssrs-suicide-da-5219f8e.hf.space
dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:es", "region:us" ]
2023-03-28T21:54:06+00:00
{"language": ["es"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
2023-03-28T21:55:33+00:00
a94626aaaaaddc55fcfca224b75e313f18860a5a
# Dataset Card for "full-wiki-segments-parquet"

Source: Downloaded from: https://zenodo.org/record/6149599
McGill-NLP/full-wiki-segments-parquet
[ "region:us" ]
2023-03-28T22:05:27+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "sub_title", "dtype": "string"}, {"name": "index", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13223584797, "num_examples": 25700592}, {"name": "collection", "num_bytes": 13223584797, "num_examples": 25700592}], "download_size": 15182013003, "dataset_size": 26447169594}}
2023-04-13T20:49:11+00:00
f26313a2032d26df39da26c16d2b02330a61e850
CrucibleAI/ControlNetLAIONFace
[ "license:cc0-1.0", "region:us" ]
2023-03-28T22:34:10+00:00
{"license": "cc0-1.0"}
2023-03-28T22:34:10+00:00
089056f66581442bea3f488acfd941ccfa116d8b
# Dataset Card for Syosetu711K

*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*

## Dataset Description

- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <[email protected]>

### Dataset Summary

Syosetu711K is a dataset composed of approximately 711,700 novels scraped from the Japanese novel self-publishing website Syosetuka ni Narou (JA: 小説家になろう, lit. "Let's Become a Novelist") between March 26 and March 27, 2023. The dataset contains most if not all novels published on the site, regardless of length or quality; however, we include metadata so users of this dataset can filter and evaluate its contents.

Syosetu711Kは、日本の小説投稿サイト「小説家になろう」から2023年3月26日から27日にかけてスクレイプされた約711,700冊の小説から構成されるデータセットです。このデータセットには、長さや品質に関係なく、サイトに掲載されているほとんどの小説が含まれています。ただし、各小説のIDも含まれているため、小説家になろうAPIを使ってその情報を検索することができます。

### Supported Tasks and Leaderboards

This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.

* text-classification
* text-generation

### Languages

* Japanese

## Dataset Structure

### Data Instances

```json
{ "text": "【小説タイトル】\n焼けて爛れる恋よりも、微睡む優しい愛が欲しい\n【Nコード】\nN5029ID\n【作者名】\n秋暁秋季\n【あらすじ】\n俺の彼女は物凄く気の多い人だった。\nお眼鏡に適う奴が居れば、瞳孔を蕩けさせる人だった。\nその癖照れ屋で、すぐに目を逸らす。\nな...", "meta": { "subset": "syosetu", "q": 0.6, "id": "N5029ID", "author": "秋暁秋季", "userid": 719797, "title": "焼けて爛れる恋よりも、微睡む優しい愛が欲しい", "length": 871, "points": 0, "lang": "ja", "chapters": 1, "keywords": ["気が多い", "浮気性", "無愛想", "照れる", "嫉妬", "好みではない", "クソデカ感情", "空気のような安心感"], "isr15": 0, "genre": 102, "biggenre": 1 } }
{ "text": "【小説タイトル】\n【能力者】\n【Nコード】\nN9864IB\n【作者名】\n夢音いちご\n【あらすじ】\n私立アビリティ学園。\n小・中・高・大が一貫となった、大規模な名門校。\nそして、ここは規模の大きさだけでなく、ある特殊な制度を設けて\nいることでも有名だ。\nそれ...", "meta": { "subset": "syosetu", "q": 0.6, "id": "N9864IB", "author": "夢音いちご", "userid": 1912777, "title": "【能力者】", "length": 2334, "points": 0, "lang": "ja", "chapters": 2, "keywords": ["ガールズラブ", "身分差", "伝奇", "日常", "青春", "ラブコメ", "女主人公", "学園", "魔法", "超能力"], "isr15": 0, "genre": 202, "biggenre": 2 } }
```

### Data Fields

* `text`: the actual novel text, all chapters
* `meta`: novel metadata
  * `subset`: dataset tag: `syosetu`
  * `lang`: dataset language: `ja` (Japanese)
  * `id`: novel ID/ncode
  * `author`: author name
  * `userid`: author user ID
  * `title`: novel title
  * `length`: novel length in words
  * `points`: global points (corresponds to `global_point` from the Syosetu API)
  * `q`: q-score (quality score) calculated based on `points`
  * `chapters`: number of chapters (corresponds to `general_all_no` from the Syosetu API)
  * `keywords`: array of novel keywords (corresponds to `keyword` from the Syosetu API, split on spaces)
  * `isr15`: whether the novel is rated R15+
  * `genre`: novel genre ID (optional, see Syosetu API documentation)
  * `biggenre`: general novel genre ID (optional, see Syosetu API documentation)
  * `isr18`: whether the novel is rated R18+
  * `nocgenre`: novel genre ID (optional, only available if `isr18` is true, see Syosetu API documentation)

*For further reference, see the Syosetuka ni Narou API documentation: <https://dev.syosetu.com/man/api/> (JA).*

#### Q-Score Distribution

```
0.00: 0
0.10: 0
0.20: 0
0.30: 0
0.40: 0
0.50: 213005
0.60: 331393
0.70: 101971
0.80: 63877
0.90: 1542
1.00: 2
```

### Data Splits

No splitting of the data was performed.

## Dataset Creation

### Curation Rationale

Syosetuka ni Narou is the most popular website in Japan for authors wishing to self-publish their novels online. Many works on the site have been picked up by large commercial publishers. Because of this, we believe that this dataset provides a large corpus of high-quality, creative content in the Japanese language.

### Source Data

#### Initial Data Collection and Normalization

*More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.*

First, metadata for all novels on the site was gathered into a JSON lines (JSONL) file. The Syosetuka ni Narou API was used to obtain this information.

Second, this listing was used to create a secondary text file containing a list of only the novel "ncodes," or IDs. This secondary file was distributed to downloader nodes.

Third, the sister site <https://pdfnovels.net> was queried with each novel ID, and the resulting PDF was saved for later processing.

Fourth, the `pdftotext` tool was used to convert the PDF files to text documents. A few other scripts were then used to clean up the resulting text files.

Finally, the text files and other metadata were converted into the specified data field schema above, and the resulting JSON entries were concatenated into the Syosetu711K dataset. The version uploaded to this repository, however, is split into multiple files, numbered 00 through 20 inclusive.

#### Who are the source language producers?

The authors of each novel.

### Annotations

#### Annotation process

Titles and general genre were collected alongside the novel text and IDs.

#### Who are the annotators?

There were no human annotators.

### Personal and Sensitive Information

The dataset contains only works of fiction, and we do not believe it contains any PII.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Japanese. It may also be useful for other languages depending on your language model.

### Discussion of Biases

This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**

### Other Known Limitations

N/A

## Additional Information

### Dataset Curators

Ronsor Labs

### Licensing Information

Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is distributed under fair use principles.

### Citation Information

```
@misc{ryokoai2023-bigknow2022,
  title        = {BigKnow2022: Bringing Language Models Up to Speed},
  author       = {Ronsor},
  year         = {2023},
  howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```

### Contributions

Thanks to @ronsor (GH) for gathering this dataset.
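The `meta` fields described in the card above make per-record filtering straightforward. Below is a minimal sketch in plain Python over records shaped like the card's JSON examples; the sample entries, the `keep` helper, and the thresholds are illustrative assumptions, not part of the dataset itself.

```python
# Filter Syosetu711K-style records by q-score and content rating.
# The `meta` schema follows the dataset card; sample records are made up.

def keep(entry, min_q=0.7, allow_r18=False):
    """Return True if the record passes the (hypothetical) quality filter."""
    meta = entry["meta"]
    # `isr18` is only present on R18+ entries, so default to 0 when absent.
    if meta.get("isr18", 0) and not allow_r18:
        return False
    return meta["q"] >= min_q

records = [
    {"text": "...", "meta": {"id": "N5029ID", "q": 0.6, "isr15": 0}},
    {"text": "...", "meta": {"id": "N9864IB", "q": 0.8, "isr15": 0}},
    {"text": "...", "meta": {"id": "N0000XX", "q": 0.9, "isr18": 1}},
]

filtered = [r for r in records if keep(r)]
print([r["meta"]["id"] for r in filtered])  # only the 0.8 non-R18 entry remains
```

Given the q-score distribution in the card, a `min_q` of 0.7 would retain roughly 167,000 of the ~711,700 novels; the same predicate could be passed to `datasets.Dataset.filter` after loading.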
RyokoAI/Syosetu711K
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:ja", "license:apache-2.0", "novel", "training", "region:us" ]
2023-03-28T22:57:10+00:00
{"language": ["ja"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification", "text-generation"], "pretty_name": "Syosetuka ni Narou 711K", "tags": ["novel", "training"]}
2023-04-05T00:13:44+00:00
f18e34a72d4e7d216a00f60431617bcc1295ad0a
peterpull/MediatorBot
[ "license:creativeml-openrail-m", "region:us" ]
2023-03-28T22:57:31+00:00
{"license": "creativeml-openrail-m"}
2023-07-05T00:23:53+00:00
666c51fac4966b14293f818d926db57d305fe257
# Dataset Card for "tomatoesCWSI"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattyhatch/tomatoesCWSI
[ "region:us" ]
2023-03-28T23:35:16+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1034721.0, "num_examples": 6}], "download_size": 134150, "dataset_size": 1034721.0}}
2023-03-28T23:35:19+00:00
d37d2754d2b6fa872303c5569aa5d874c0d7d8d8
# Dataset Card for "t0_zsnoopt"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mistobaan/t0_zsnoopt
[ "region:us" ]
2023-03-28T23:37:24+00:00
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32033020523, "num_examples": 42881000}], "download_size": 17532883525, "dataset_size": 32033020523}}
2023-03-28T23:48:50+00:00
2602be50411bf7c658121c63da0624b4b17b1e65
# Dataset Card for "tomatoes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattyhatch/tomatoes
[ "region:us" ]
2023-03-28T23:47:51+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 174243.0, "num_examples": 1}], "download_size": 23312, "dataset_size": 174243.0}}
2023-03-28T23:47:55+00:00
09d1dea8fab040e61ae84d3b2a8bc8d73ff616dd
coyotespike/mydataset
[ "license:mit", "region:us" ]
2023-03-29T00:43:09+00:00
{"license": "mit"}
2023-03-29T00:43:09+00:00
61d4ab4ec9214c312b649cce057d9ca50acaaa02
zz990906/garbage_detection
[ "task_categories:image-classification", "language:en", "region:us" ]
2023-03-29T00:47:33+00:00
{"language": ["en"], "task_categories": ["image-classification"]}
2023-03-29T01:14:30+00:00
c548f65065d74270ddf4b0bcd17eff319235159e
# Dataset Card for "processed_bert_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
swadesh7/processed_bert_dataset
[ "region:us" ]
2023-03-29T00:51:03+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 3600, "num_examples": 1}], "download_size": 4997, "dataset_size": 3600}}
2023-03-30T00:00:44+00:00
2ec396c9153265c1e0e22fb77ed207c3733d8f7c
# Dataset Card for "tomatoesTest1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattyhatch/tomatoesTest1
[ "region:us" ]
2023-03-29T00:51:12+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 346819.0, "num_examples": 2}], "download_size": 43706, "dataset_size": 346819.0}}
2023-03-29T01:37:37+00:00
91db457a1f21b45c68912bfbca2e911b565a292d
# Dataset Card for "logo-splitted" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Babypotatotang/logo-splitted
[ "region:us" ]
2023-03-29T01:08:38+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 321359468.84, "num_examples": 24080}, {"name": "test", "num_bytes": 82173680.498, "num_examples": 6021}], "download_size": 266044858, "dataset_size": 403533149.33799994}}
2023-03-29T01:14:46+00:00
e7170659b7635f20ff9c68de9d93bc6bf66897e3
# Dataset Card for "logo-weighted-name" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Babypotatotang/logo-weighted-name
[ "region:us" ]
2023-03-29T01:08:48+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 321407628.84, "num_examples": 24080}, {"name": "test", "num_bytes": 82185722.498, "num_examples": 6021}], "download_size": 266068093, "dataset_size": 403593351.33799994}}
2023-03-29T01:19:41+00:00
d10ce30771ad1d3d25944c7f68b01bee9ac5c269
## Mission: Open and Good ML

In our mission to democratize good machine learning (ML), we examine how supporting ML community work also empowers us to examine and prevent possible harms. Open development and science decentralizes power so that many people can collectively work on AI that reflects their needs and values. While [openness enables broader perspectives to contribute to research and AI overall, it faces the tension of less risk control](https://arxiv.org/abs/2302.04844).

Moderating ML artifacts presents unique challenges due to the dynamic and rapidly evolving nature of these systems. In fact, as ML models become more advanced and capable of producing increasingly diverse content, the potential for harmful or unintended outputs grows, necessitating the development of robust moderation and evaluation strategies. Moreover, the complexity of ML models and the vast amounts of data they process exacerbate the challenge of identifying and addressing potential biases and ethical concerns.

As hosts, we recognize the responsibility that comes with potentially amplifying harm to our users and the world more broadly. Often these harms disparately impact minority communities in a context-dependent manner. We have taken the approach of analyzing the tensions in play for each context, open to discussion across the company and the Hugging Face community. While many models can amplify harm, especially discriminatory content, we are taking a series of steps to identify the highest-risk models and decide what action to take. Importantly, active perspectives from many backgrounds are key to understanding, measuring, and mitigating potential harms that affect different groups of people.

We are crafting tools and safeguards in addition to improving our documentation practices to ensure open source science empowers individuals and continues to minimize potential harms.
## Ethical Categories

The first major aspect of our work to foster good open ML consists of promoting the tools and positive examples of ML development that prioritize values and consideration for its stakeholders. This helps users take concrete steps to address outstanding issues, and presents plausible alternatives to de facto damaging practices in ML development.

To help our users discover and engage with ethics-related ML work, we have compiled a set of tags. These six high-level categories are based on our analysis of Spaces that community members had contributed. They are designed to give you a jargon-free way of thinking about ethical technology:

- Rigorous work pays special attention to developing with best practices in mind. In ML, this can mean examining failure cases (including conducting bias and fairness audits), protecting privacy through security measures, and ensuring that potential users (technical and non-technical) are informed about the project's limitations.
- Consentful work [supports](https://www.consentfultech.io/) the self-determination of people who use and are affected by these technologies.
- Socially Conscious work shows us how technology can support social, environmental, and scientific efforts.
- Sustainable work highlights and explores techniques for making machine learning ecologically sustainable.
- Inclusive work broadens the scope of who builds and benefits in the machine learning world.
- Inquisitive work shines a light on inequities and power structures which challenge the community to rethink its relationship to technology.

Read more at https://huggingface.co/ethics

Look for these terms as we’ll be using these tags, and updating them based on community contributions, across some new projects on the Hub!

## Safeguards

Taking an “all-or-nothing” view of open releases ignores the wide variety of contexts that determine an ML artifact’s positive or negative impacts.
Having more levers of control over how ML systems are shared and re-used supports collaborative development and analysis with less risk of promoting harmful uses or misuses, allowing for more openness and participation in innovation for shared benefits.

We engage directly with contributors and have addressed pressing issues. To bring this to the next level, we are building community-based processes. This approach empowers both Hugging Face contributors, and those affected by contributions, to inform the limitations, sharing, and additional mechanisms necessary for models and data made available on our platform. The three main aspects we will pay attention to are: the origin of the artifact, how the artifact is handled by its developers, and how the artifact has been used. In that respect we:

- launched a [flagging feature](https://twitter.com/GiadaPistilli/status/1571865167092396033) for our community to determine whether ML artifacts or community content (model, dataset, space, or discussion) violate our [content guidelines](https://huggingface.co/content-guidelines),
- monitor our community discussion boards to ensure Hub users abide by the [code of conduct](https://huggingface.co/code-of-conduct),
- robustly document our most-downloaded models with model cards that detail social impacts, biases, and intended and out-of-scope use cases,
- create audience-guiding tags, such as the “Not For All Audiences” tag that can be added to the repository’s card metadata to avoid un-requested violent and sexual content,
- promote use of [Open Responsible AI Licenses (RAIL)](https://huggingface.co/blog/open_rail) for [models](https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license), such as with LLMs ([BLOOM](https://huggingface.co/spaces/bigscience/license), [BigCode](https://huggingface.co/spaces/bigcode/license)),
- conduct research that [analyzes](https://arxiv.org/abs/2302.04844) which models and datasets have the highest potential for, or track record of,
misuse and malicious use.

**How to use the flagging function:**

Click on the flag icon on any Model, Dataset, Space, or Discussion:

<p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/flag2.jpg" alt="screenshot pointing to the flag icon to Report this model" /> </p>

Share why you flagged this item:

<p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/flag1.jpg" alt="screenshot showing the text window where you describe why you flagged this item" /> </p>

In prioritizing open science, we examine potential harm on a case-by-case basis. When users flag a system, developers can directly and transparently respond to concerns. Moderators are able to disengage from discussions should behavior become hateful and/or abusive (see [code of conduct](https://huggingface.co/code-of-conduct)). Should a specific model be flagged as high risk by our community, we consider:

- Downgrading the ML artifact’s visibility across the Hub in the trending tab and in feeds,
- Requesting that the models be made private,
- Gating access to ML artifacts (see documentation for [models](https://huggingface.co/docs/hub/models-gated) and [datasets](https://huggingface.co/docs/hub/datasets-gated)),
- Disabling access.

**How to add the “Not For All Audiences” tag:**

Edit the model/data card → add `not-for-all-audiences` in the tags section → open the PR and wait for the authors to merge it.
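In practice, the tag-adding step above amounts to one extra entry in the card's YAML front matter. A minimal sketch, assuming a card that already carries a `license` and a task tag (both illustrative — only the `not-for-all-audiences` entry is the actual change):

```yaml
# Front matter of README.md in the model/dataset repository.
# `license` and `text-generation` are placeholder values for illustration.
license: apache-2.0
tags:
  - text-generation
  - not-for-all-audiences
```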
Once merged, the following tag will be displayed on the repository:

<p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa_tag.png" alt="screenshot showing the Not For All Audiences tag on a repository" /> </p>

Any repository tagged `not-for-all-audiences` will display the following popup when visited:

<p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa2.png" alt="screenshot showing the content warning popup" /> </p>

Clicking "View Content" will allow you to view the repository as normal. If you wish to always view `not-for-all-audiences`-tagged repositories without the popup, you can change this setting in your [Content Preferences](https://huggingface.co/settings/content-preferences):

<p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfaa1.png" alt="screenshot showing the Content Preferences setting" /> </p>

Open science requires safeguards, and one of our goals is to create an environment informed by tradeoffs with different values. Hosting and providing access to models in addition to cultivating community and discussion empowers diverse groups to assess social implications and guide what is good machine learning.

## Are you working on safeguards? Share them on Hugging Face Hub!

The most important part of Hugging Face is our community. If you’re a researcher working on making ML safer to use, especially for open science, we want to support and showcase your work!
Here are some recent demos and tools from researchers in the Hugging Face community:

- [A Watermark for LLMs](https://huggingface.co/spaces/tomg-group-umd/lm-watermarking) by John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein ([paper](https://arxiv.org/abs/2301.10226))
- [Generate Model Cards Tool](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) by the Hugging Face team
- [Photoguard](https://huggingface.co/spaces/RamAnanth1/photoguard) to safeguard images against manipulation by Ram Ananth

Thanks for reading! 🤗

~ Irene, Nima, Giada, Yacine, and Elizabeth, on behalf of the Ethics and Society regulars

If you want to cite this blog post, please use the following:

```
@misc{hf_ethics_soc_blog_3,
  author    = {Irene Solaiman and Giada Pistilli and Nima Boscarino and Yacine Jernite and Elizabeth Allendorf and Margaret Mitchell and Carlos Muñoz Ferrandis and Nathan Lambert and Alexandra Sasha Luccioni},
  title     = {Hugging Face Ethics and Society Newsletter 3: Ethical Openness at Hugging Face},
  booktitle = {Hugging Face Blog},
  year      = {2023},
  url       = {https://doi.org/10.57967/hf/0487},
  doi       = {10.57967/hf/0487}
}
```
society-ethics/BlogPostOpenness
[ "license:cc-by-4.0", "arxiv:2302.04844", "arxiv:2301.10226", "doi:10.57967/hf/0487", "region:us" ]
2023-03-29T01:46:01+00:00
{"license": "cc-by-4.0"}
2023-03-29T22:03:15+00:00
c504b3fb1bd408507949c50cadc83c580f4ec202
# A large instruct dataset

This dataset is a combination of multiple sources, including the GPT4All dataset, the Alpaca dataset from Stanford, custom generation using AllenAI augmentation, and some dataset augmentation from open-source Meta datasets. The dataset is split into 70% for training, 20% for validation, and 10% for testing.

## Description

The Swype.com dataset contains prompt and completion pairs for various tasks. It's an augmented version of the following datasets:

- [GPT4All](https://github.com/nomic-ai/gpt4all): A dataset containing a wide range of tasks for training and evaluating general-purpose language models.
- [Alpaca dataset from Stanford](https://github.com/tatsu-lab/stanford_alpaca): A dataset containing prompts, completions, and annotations for controllable text generation.
- Custom generation using [AllenAI augmentation](https://allenai.org): Augmentation performed using the advanced NLP tools provided by AllenAI.
- Some dataset augmentation from open-source Meta datasets: Additional augmentation from various open-source Meta datasets.

The dataset is designed for training and evaluating language models on diverse tasks, with a focus on controllable and instruction-based text generation.

## Dataset Structure

The dataset contains the following columns:

- `prompt`: The input prompt string, representing a task or question.
- `completion`: The output completion string, representing the answer or generated text based on the prompt.

## Citation

If you use this dataset in your research or work, please cite it as follows:

```
@misc{srikanth2023swypedataset,
  author       = {Srikanth Srinivas},
  title        = {Swype.com Dataset},
  year         = {2023},
  publisher    = {Swype.com},
  howpublished = {\url{https://swype.com}},
  email        = {[email protected]}
}
```
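The card does not say how the 70/20/10 split was produced; as an illustration only, a seeded shuffle-and-slice over the card's `prompt`/`completion` records could look like this (the `split_dataset` helper and the toy records are invented for the sketch):

```python
import random

def split_dataset(pairs, seed=42):
    """Shuffle prompt/completion pairs and slice them 70/20/10
    into train/validation/test, matching the ratios stated in the card."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # seeded for reproducibility
    n = len(pairs)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    return {
        "train": pairs[:n_train],
        "validation": pairs[n_train:n_train + n_val],
        "test": pairs[n_train + n_val:],
    }

# Toy records using the card's two columns
records = [{"prompt": f"q{i}", "completion": f"a{i}"} for i in range(100)]
splits = split_dataset(records)
print(len(splits["train"]), len(splits["validation"]), len(splits["test"]))  # 70 20 10
```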
swype/instruct
[ "license:mit", "region:us" ]
2023-03-29T01:48:16+00:00
{"license": "mit"}
2023-04-05T22:14:28+00:00
6fdd2b51f47321b4c51ee217cb5240678661725a
fjd/scannet-processed-test
[ "license:cc-by-nc-4.0", "region:us" ]
2023-03-29T02:27:18+00:00
{"license": "cc-by-nc-4.0"}
2023-03-29T03:13:39+00:00
78a68f58c1c25f5d4a1fdc3381fc0ae098c25d65
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Jokotiya/ShoppingAssistant
[ "region:us" ]
2023-03-29T02:27:50+00:00
{}
2023-03-30T12:49:51+00:00
e47d732b6389285cd0021ac60f90f52babbbb89d
zz545906747/blue_eye_dragon
[ "license:apache-2.0", "region:us" ]
2023-03-29T02:36:07+00:00
{"license": "apache-2.0"}
2023-03-29T02:36:07+00:00
51d16b628c75a7101b5233e6ec385c1eac366f83
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), with another 1.3k pairs custom generated using GPT-3.5.

Script for fine-tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora

GitHub repo with performance analyses, training and data generation scripts, and inference notebooks: https://github.com/gaurangbharti1/wealth-alpaca

Cleaner dataset: https://huggingface.co/datasets/gbharti/wealth-alpaca_lora (no major changes, just cleaned up)

CSV format: https://huggingface.co/datasets/gbharti/finance-alpaca-csv
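Alpaca-derived datasets are usually rendered into a single prompt string before LoRA fine-tuning. A sketch of the prompt template popularized by Stanford Alpaca, assuming this dataset follows the same `instruction`/`input`/`output` schema (verify against the actual columns before use; the example record is invented):

```python
def format_alpaca_prompt(example):
    """Render one record into the Stanford Alpaca prompt template.
    Schema (instruction/input/output) is assumed, not confirmed by the card."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )

# Invented example record
record = {"instruction": "What is a stock?", "input": "",
          "output": "A share of ownership in a company."}
print(format_alpaca_prompt(record))
```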
gbharti/finance-alpaca
[ "language:en", "region:us" ]
2023-03-29T02:37:58+00:00
{"language": ["en"]}
2023-09-26T03:13:35+00:00
e8263a61e5436778e9a6e06052f5b5468565ca82
# Dataset Card for "vqa-with-coco-img-1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ryanramos/vqa-with-coco-img-1
[ "region:us" ]
2023-03-29T03:09:35+00:00
{"dataset_info": {"features": [{"name": "license", "dtype": "int64"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "captions", "list": [{"name": "caption", "dtype": "string"}, {"name": "id", "dtype": "int64"}]}, {"name": "questions", "list": [{"name": "answer_type", "dtype": "string"}, {"name": "answers", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question_type", "dtype": "string"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 877684351.5, "num_examples": 16500}], "download_size": 848535039, "dataset_size": 877684351.5}}
2023-03-29T03:10:16+00:00
3fcc17d54c4f43a7ce958b74e59128bc91c02693
p1atdev/cosppi
[ "license:cc0-1.0", "region:us" ]
2023-03-29T03:15:47+00:00
{"license": "cc0-1.0"}
2023-03-29T03:30:29+00:00
6ee4c288915f4f23c3370be19d48c7a3f7f01f21
# Dataset Card for "hoodies_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akadhim-ai/hoodies_dataset
[ "region:us" ]
2023-03-29T03:26:15+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82715.0, "num_examples": 10}], "download_size": 82187, "dataset_size": 82715.0}}
2023-03-29T03:37:31+00:00
32f0efe0f32032255f8d6e38bdeacc7031d3e333
# Dataset Card for "slurp_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yhfang/slurp_dataset
[ "region:us" ]
2023-03-29T04:06:54+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "slurp_id", "dtype": "int64"}, {"name": "intent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2367182266.757, "num_examples": 50627}], "download_size": 2741594496, "dataset_size": 2367182266.757}}
2023-03-29T04:20:31+00:00
01ce82001787b0a23b1509f54702722716249f44
# Dataset Card for "libri360_1s_augmented" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mazkobot/libri360_1s_augmented
[ "region:us" ]
2023-03-29T04:08:15+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 40276728937.024, "num_examples": 1256744}], "download_size": 38107431571, "dataset_size": 40276728937.024}}
2023-03-29T06:41:19+00:00
0d72626946e02037318643cdd9ea9b101184247d
# Dataset Card for "martin_valen_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
akadhim-ai/martin_valen_dataset
[ "region:us" ]
2023-03-29T04:15:43+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82775.0, "num_examples": 10}], "download_size": 82229, "dataset_size": 82775.0}}
2023-03-29T04:15:50+00:00
242de76765dba012827504956620abc8a15ba927
thepoweroframesh/distilbert-base-uncased-finetuned-squad
[ "license:openrail", "region:us" ]
2023-03-29T04:39:22+00:00
{"license": "openrail"}
2023-03-29T04:39:22+00:00
dc750d2b09a7cfe1b1a0ab3a3eef4ddcdd1c0959
sazzad14/roadquality
[ "license:cc", "region:us" ]
2023-03-29T05:08:11+00:00
{"license": "cc"}
2023-03-29T05:08:11+00:00
4a982f040e62228c0facd3fa439f1b3d04f89b58
tatsu23/spanishtwitch
[ "license:openrail", "region:us" ]
2023-03-29T05:24:34+00:00
{"license": "openrail"}
2023-03-29T05:24:34+00:00
8c6ec32c340031124236862abfed7be1583d1172
# Dataset Card for Common Voice Corpus 13.0

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])

### Dataset Summary

The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines.

The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards

The results for models trained on the Common Voice datasets are available via the [🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)

### Languages

```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```

## How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):

```python
from datasets import load_dataset

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```

Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)

print(next(iter(cv_13)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_13, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.

```python
{
  'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
  'path': 'et/clips/common_voice_et_18318995.mp3',
  'audio': {
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 48000
  },
  'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
  'up_votes': 2,
  'down_votes': 0,
  'age': 'twenties',
  'gender': 'male',
  'accent': '',
  'locale': 'et',
  'segment': ''
}
```

### Data Fields

`client_id` (`string`): An id for which client (voice) made the recording

`path` (`string`): The path to the audio file

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence` (`string`): The sentence the user was prompted to speak

`up_votes` (`int64`): How many upvotes the audio file has received from reviewers

`down_votes` (`int64`): How many downvotes the audio file has received from reviewers

`age` (`string`): The age of the speaker (e.g.
`teens`, `twenties`, `fifties`)

`gender` (`string`): The gender of the speaker

`accent` (`string`): Accent of the speaker

`locale` (`string`): The locale of the speaker

`segment` (`string`): Usually an empty field

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.

The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.

The reported data is data that has been reported, for different reasons.

The other data is data that has not yet been reviewed.

The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.

## Data Preprocessing Recommended by Hugging Face

The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.

Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.

In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```
mozilla-foundation/common_voice_13_0
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "source_datasets:extended|common_voice", "license:cc0-1.0", "arxiv:1912.06670", "region:us" ]
2023-03-29T06:43:24+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["1K<n<10K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["1M<n<10M"], "bg": ["10K<n<100K"], "bn": ["1M<n<10M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["100K<n<1M"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["10K<n<100K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "dyu": ["n<1K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["1M<n<10M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["100K<n<1M"], "ga-IE": ["10K<n<100K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["10K<n<100K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "is": ["n<1K"], "it": ["100K<n<1M"], "ja": ["100K<n<1M"], "ka": ["10K<n<100K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ko": ["1K<n<10K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lo": ["n<1K"], "lt": ["10K<n<100K"], "lv": ["10K<n<100K"], "mdf": ["n<1K"], "mhr": ["100K<n<1M"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mrj": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "oc": ["1K<n<10K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "quy": ["n<1K"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["1K<n<10K"], "sk": ["10K<n<100K"], "skr": ["1K<n<10K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": 
["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "ti": ["n<1K"], "tig": ["n<1K"], "tk": ["1K<n<10K"], "tok": ["10K<n<100K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "tw": ["n<1K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yo": ["1K<n<10K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 13.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lo", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "oc", "or", "pa-IN", "pl", "pt", "quy", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sr", "sv-SE", "sw", "ta", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yo", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
2023-06-26T14:23:12+00:00
d451b3cb56a7420538993d2ac828e2f6b5f8d95d
hello2mao/yoneyama-mai
[ "license:apache-2.0", "region:us" ]
2023-03-29T06:44:55+00:00
{"license": "apache-2.0"}
2023-03-29T06:44:55+00:00
f9526313a90c3b6b4ad3ef8925a349b40a5821be
# Dataset Card for "test"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
daisr/test
[ "region:us" ]
2023-03-29T06:51:49+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5530314.0, "num_examples": 5}], "download_size": 545067, "dataset_size": 5530314.0}}
2023-03-29T07:39:48+00:00