| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 0 | 13.4M |
| id | string | 2 | 117 |
| tags | list | - | - |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 31.7M |
| last_modified | string | 25 | 25 |
06e070a122039872dee59a46c9b137a4abec7340
Sidddd/Admissions
[ "license:unknown", "region:us" ]
2023-06-11T11:47:01+00:00
{"license": "unknown"}
2023-06-11T11:47:59+00:00
07936e834690ff78f272c0bbb179c4b3eefdb764
# Dataset Card for "cancer_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RoopamSadh/cancer_new
[ "region:us" ]
2023-06-11T12:08:38+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20376.0, "num_examples": 5}], "download_size": 31187, "dataset_size": 20376.0}}
2023-06-11T12:08:44+00:00
1f601cd8e1b67803cdad36a560df452796c7a4cc
## Malyuk [mɐˈlʲuk]

Combined corpus: [UberText 2.0](https://lang.org.ua/en/ubertext/), [Oscar](https://huggingface.co/datasets/oscar), [Ukrainian News](https://huggingface.co/datasets/zeusfsx/ukrainian-news)

This is not an official release by any means. It is just a compilation I made to simplify training of a Ukrainian LLM. Nothing is guaranteed; no support requests; nothing.

* 113 GB of texts in JSONL.
* 38,941,863 articles.

![alt text](https://huggingface.co/datasets/lang-uk/malyuk/resolve/main/eyes.png "Watching ya")
lang-uk/malyuk
[ "size_categories:10B<n<100B", "language:uk", "region:us" ]
2023-06-11T12:15:26+00:00
{"language": ["uk"], "size_categories": ["10B<n<100B"]}
2023-10-02T08:40:25+00:00
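A corpus of this size (113 GB of JSONL) is usually streamed line by line rather than loaded whole. A minimal sketch, assuming each line is a JSON object with a `text` field (the actual field names in lang-uk/malyuk may differ):

```python
import json

def stream_jsonl(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)

# Usage: count articles without holding the corpus in memory.
# total = sum(1 for _ in stream_jsonl("malyuk.jsonl"))
```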
d878e3e4e42ad0fe1d1acd1da3f9c22e94c2f675
---
license: apache-2.0
---

# Dataset Card for "salestech_sales_qualification_framework_bant"

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

BANT is a sales qualification framework that weighs a prospect's Budget, Authority (internal influence/ability to buy), Need for the product, and Timeline for making a purchase when deciding whether to pursue a sale. BANT plays a vital role in the sales process because it aids lead qualification during the discovery call: rather than waiting days or weeks for leads to be qualified using a score derived from the prospect's behaviour and engagement with marketing and sales material, the sales team can immediately obtain precise information from the prospect about their budget, stakeholders, need, and timescale.

- Budget: the prospect's financial capacity to invest in your solution.
- Authority: who has the final say in this transaction? Who gets to make the final call?
- Need: does the potential customer really need the product? Do all members of the team require it?
- Timeline: how long will it be before the potential customer makes a decision?

### Supported Tasks and Leaderboards

N.A.

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

There are 2 columns:

- text: the text
- label: the label (one of the four BANT categories)

### Data Splits

N.A.

## Dataset Creation

### Curation Rationale

Prospect text mining

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

This will help sales teams better qualify leads.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

### Contributions

Made by author [Scholarly360](https://github.com/Scholarly360).
scholarly360/salestech_sales_qualification_framework_bant
[ "salestech", "sales", "region:us" ]
2023-06-11T12:38:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7416, "num_examples": 100}, {"name": "test", "num_bytes": 1884, "num_examples": 26}], "download_size": 6977, "dataset_size": 9300}, "tags": ["salestech", "sales"]}
2023-06-11T12:50:20+00:00
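The record above describes two string columns, `text` and `label`, with one of the four BANT categories per row. A minimal sketch of encoding such labels for a classifier; the label strings below are assumptions, and the dataset's actual label values may differ:

```python
# Hypothetical BANT label set; the dataset's actual strings may differ.
LABELS = ["budget", "authority", "need", "timeline"]
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}

def encode(rows):
    """Map (text, label) pairs to (text, label_id) pairs."""
    return [(text, label2id[label]) for text, label in rows]
```

Usage: `encode([("What is your budget?", "budget")])` yields integer class ids suitable for most classification losses.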
6be4adf3ff2d3976d658c80383483adb9fdd1945
# Dataset Card for "docs_on_several_languages"

This dataset is a collection of document images in different languages. The set includes the following languages: Azerbaijani, Belarusian, Chinese, English, Estonian, Finnish, Georgian, Japanese, Korean, Kazakh, Latvian, Lithuanian, Mongolian, Norwegian, Polish, Russian, Ukrainian. Each language has a corresponding class label, and at least 100 images are allocated per class. This dataset was originally used for classifying the language of a document based on its image, but I hope it can help you in other machine learning tasks as well.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlekseyScorpi/docs_on_several_languages
[ "task_categories:text-classification", "size_categories:1K<n<10K", "code", "region:us" ]
2023-06-11T12:50:31+00:00
{"size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "az", "1": "by", "2": "cn", "3": "en", "4": "es", "5": "fn", "6": "gr", "7": "jp", "8": "ko", "9": "kz", "10": "la", "11": "li", "12": "mo", "13": "no", "14": "pl", "15": "ru", "16": "ua"}}}}], "splits": [{"name": "train", "num_bytes": 1893804579.79, "num_examples": 1987}, {"name": "test", "num_bytes": 374568135, "num_examples": 339}], "download_size": 2423302965, "dataset_size": 2268372714.79}, "tags": ["code"]}
2023-09-16T06:01:24+00:00
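The `class_label` block in the record's metadata maps integer ids to short language codes. A short sketch of decoding predicted ids back to codes, using the mapping from that metadata:

```python
# Mapping copied from the dataset_info class_label block above.
ID2CODE = {
    0: "az", 1: "by", 2: "cn", 3: "en", 4: "es", 5: "fn", 6: "gr",
    7: "jp", 8: "ko", 9: "kz", 10: "la", 11: "li", 12: "mo",
    13: "no", 14: "pl", 15: "ru", 16: "ua",
}

def decode_labels(ids):
    """Translate integer class ids into their language codes."""
    return [ID2CODE[i] for i in ids]
```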
80535bc9a4812a91c5a1315ed959dd1ef53646bd
Converted using https://github.com/jason9693/midi-neural-processor.
breadlicker45/midi-data
[ "region:us" ]
2023-06-11T12:55:52+00:00
{}
2023-06-11T13:17:25+00:00
85de5dea2d8c21814b16508af33cd4ac9071b0cb
bgspaditya/maroon100k
[ "license:mit", "region:us" ]
2023-06-11T12:56:59+00:00
{"license": "mit"}
2023-06-11T13:12:16+00:00
7ccd66f7a937505fc8c048307008d3ec2cd6d42f
# Dataset Card for "ah_openai_dialog_annotation_val_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Deojoandco/ah_openai_dialog_annotation_val_test
[ "region:us" ]
2023-06-11T13:46:44+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "int64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}, {"name": "annotation_error", "dtype": "bool"}, {"name": "annotation", "struct": [{"name": "Error", "dtype": "string"}, {"name": "success", "dtype": "bool"}, {"name": "text", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6504818, "num_examples": 585}], "download_size": 3807643, "dataset_size": 6504818}}
2023-06-11T13:47:02+00:00
fcd5c89b1f6406a191f0a2fbdb327b2b40e2e736
# Dataset Card for "ah_openai_dialog_annotation_val" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Deojoandco/ah_openai_dialog_annotation_val
[ "region:us" ]
2023-06-11T13:49:50+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "int64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}, {"name": "annotation_error", "dtype": "bool"}, {"name": "annotation", "struct": [{"name": "Error", "dtype": "string"}, {"name": "success", "dtype": "bool"}, {"name": "text", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3124235, "num_examples": 293}], "download_size": 1816005, "dataset_size": 3124235}}
2023-06-11T13:50:06+00:00
f02441f27483a0c9a54b420ea02b1edb44f2175b
# Dataset Card for "ah_openai_dialog_annotation_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Deojoandco/ah_openai_dialog_annotation_test
[ "region:us" ]
2023-06-11T13:51:03+00:00
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "over_18", "dtype": "bool"}, {"name": "created_utc", "dtype": "int64"}, {"name": "comments", "list": [{"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "float64"}, {"name": "distinguished", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "best_num_comments", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "dialog", "dtype": "string"}, {"name": "annotation_error", "dtype": "bool"}, {"name": "annotation", "struct": [{"name": "Error", "dtype": "string"}, {"name": "success", "dtype": "bool"}, {"name": "text", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3380584, "num_examples": 292}], "download_size": 2027925, "dataset_size": 3380584}}
2023-06-11T13:51:17+00:00
6282bf166439f0d233bc991d44bff0a96d43aaab
# Dataset Card for "musdb18-spec-pix2pix-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zachary-shah/musdb18-spec-pix2pix-test
[ "region:us" ]
2023-06-11T14:21:14+00:00
{"dataset_info": {"features": [{"name": "original_prompt", "dtype": "string"}, {"name": "original_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "edited_prompt", "dtype": "string"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 18297334.0, "num_examples": 196}], "download_size": 18266177, "dataset_size": 18297334.0}}
2023-06-11T14:21:15+00:00
759b4470bdd76499007b15b39a43d905d86f8761
# Oscar 2023_01 DE Deduplicated

This is a deduplicated version of the German subset of the [23.01 OSCAR Corpus](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301), a large, crawled, and processed text dataset curated by the OSCAR project (Open Super-large Crawled Aggregated coRpus). OSCAR 23.01 is the January 2023 version of the OSCAR Corpus, based on the November/December 2022 dump of Common Crawl. While quite similar to OSCAR 22.01, it contains several new features, including KenLM-based adult content detection, [...].

It was deduplicated with a MinHash implementation from the `text-dedup` library by `ChenghaoMou`, available on [GitHub](https://github.com/ChenghaoMou/text-dedup), using the following command:

```bash
python -m text_dedup.minhash \
  --path oscar-corpus/OSCAR-2301 \
  --name "de" \
  --cache_dir "../cache" \
  --split "train" \
  --column "text" \
  --batch_size 10000 \
  --output output/minhash_oscar_de_dedup
```

Find a filtered version of this dataset at [bjoernp/oscar2301_de_deduped_filtered](https://huggingface.co/datasets/bjoernp/oscar2301_de_deduped_filtered).
## Deduplication statistics

| Step | Runtime |
|---|---|
| Loading | 10.64s |
| MinHashing | 10574.02s |
| Clustering | 12187.65s |
| Filtering | 4198.70s |
| Saving | 3560.06s |
| Total | 30531.07s |

| Dataset | Number of documents |
|---|---|
| Before | 103299215 |
| After | 53172498 |

## Dataset scheme

```json
{
  "text": "English sentence\nphrase en français\n????????????", // (1)
  "meta": {
    "warc_headers": { // (2)
      "warc-identified-content-language": "fra,eng",
      "warc-target-uri": "https://fr.wikipedia.org/wiki/...",
      "warc-record-id": "<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
      "warc-type": "conversion",
      "content-length": "35298", // (3)
      "warc-refers-to": "<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
      "warc-block-digest": "sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
      "warc-date": "2022-11-26T09:45:47Z",
      "content-type": "text/plain"
    },
    "identification": { // (4)
      "label": "fr",
      "prob": 0.8938327
    },
    "harmful_pp": 4063.1814, // (5)
    "tlsh": "tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
    "quality_warnings": [ // (7)
      "short_sentences",
      "header",
      "footer"
    ],
    "categories": [ // (8)
      "examen_pix",
      "liste_bu"
    ],
    "sentence_identifications": [ // (9)
      { "label": "fr", "prob": 0.99837273 },
      { "label": "en", "prob": 0.9992377 },
      null
    ]
  }
}
```

## Licensing

(from the original OSCAR Corpus. We cannot reasonably comply with takedown requests.)

```
These data are released under this licensing scheme

We do not own any of the text from which these data has been extracted.

We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/

To the extent possible under law, the OSCAR project, Inria, the University of Mannheim and DFKI GmbH have waived all copyright and related or neighboring rights to OSCAR

This work is published from: France and Germany.
[[[ Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

We will comply with legitimate requests by removing the affected sources from the next release of the corpus. ]]]
```

## Citation

```
@ARTICLE{2022arXiv221210440J, author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro}, title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = 2022, month = dec, eid = {arXiv:2212.10440}, pages = {arXiv:2212.10440}, doi = {10.48550/arXiv.2212.10440}, archivePrefix = {arXiv}, eprint = {2212.10440}, primaryClass = {cs.CL}, adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @inproceedings{abadji-etal-2022-towards, title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus", author = "Abadji, Julien and Ortiz Suarez, Pedro and Romary, Laurent and Sagot, Beno{\^\i}t", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.463", pages = "4344--4355", abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing.
And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.", } @inproceedings{AbadjiOrtizSuarezRomaryetal.2021, author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot}, title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)}, editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Ba{\'n}ski and Adrien Barbaresi and Simon Clematide and Ines Pisetta}, publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-10468}, url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688}, pages = {1 -- 9}, year = {2021}, abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex.
In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.}, language = {en} } @article{kreutzer-etal-2022-quality, title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets", author = {Kreutzer, Julia and Caswell, Isaac and Wang, Lisa and Wahab, Ahsan and van Esch, Daan and Ulzii-Orshikh, Nasanbayar and Tapo, Allahsera and Subramani, Nishant and Sokolov, Artem and Sikasote, Claytone and Setyawan, Monang and Sarin, Supheakmungkol and Samb, Sokhar and Sagot, Beno{\^\i}t and Rivera, Clara and Rios, Annette and Papadimitriou, Isabel and Osei, Salomey and Suarez, Pedro Ortiz and Orife, Iroro and Ogueji, Kelechi and Rubungo, Andre Niyongabo and Nguyen, Toan Q. and M{\"u}ller, Mathias and M{\"u}ller, Andr{\'e} and Muhammad, Shamsuddeen Hassan and Muhammad, Nanda and Mnyakeni, Ayanda and Mirzakhalov, Jamshidbek and Matangira, Tapiwanashe and Leong, Colin and Lawson, Nze and Kudugunta, Sneha and Jernite, Yacine and Jenny, Mathias and Firat, Orhan and Dossou, Bonaventure F. P. 
and Dlamini, Sakhile and de Silva, Nisansa and {\c{C}}abuk Ball{\i}, Sakine and Biderman, Stella and Battisti, Alessia and Baruwa, Ahmed and Bapna, Ankur and Baljekar, Pallavi and Azime, Israel Abebe and Awokoya, Ayodele and Ataman, Duygu and Ahia, Orevaoghene and Ahia, Oghenefego and Agrawal, Sweta and Adeyemi, Mofetoluwa}, journal = "Transactions of the Association for Computational Linguistics", volume = "10", year = "2022", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/2022.tacl-1.4", doi = "10.1162/tacl_a_00447", pages = "50--72", abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. 
Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.", } @inproceedings{ortiz-suarez-etal-2020-monolingual, title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages", author = "Ortiz Su{\'a}rez, Pedro Javier and Romary, Laurent and Sagot, Beno{\^\i}t", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.156", pages = "1703--1714", abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.", } @inproceedings{OrtizSuarezSagotRomary2019, author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^\i}t Sagot and Laurent Romary}, title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019.
Cardiff, 22nd July 2019}, editor = {Piotr Ba{\'n}ski and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi}, publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-9021}, url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215}, pages = {9 -- 16}, year = {2019}, abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.}, language = {en} } ```
bjoernp/oscar2023_de_deduped
[ "task_categories:text-generation", "size_categories:10M<n<100M", "language:de", "arxiv:2212.10440", "region:us" ]
2023-06-11T14:40:47+00:00
{"language": ["de"], "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "warc_headers", "struct": [{"name": "warc-record-id", "dtype": "string"}, {"name": "warc-date", "dtype": "string"}, {"name": "content-type", "dtype": "string"}, {"name": "content-length", "dtype": "int32"}, {"name": "warc-type", "dtype": "string"}, {"name": "warc-identified-content-language", "dtype": "string"}, {"name": "warc-refers-to", "dtype": "string"}, {"name": "warc-target-uri", "dtype": "string"}, {"name": "warc-block-digest", "dtype": "string"}]}, {"name": "identification", "struct": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}, {"name": "harmful_pp", "dtype": "float32"}, {"name": "tlsh", "dtype": "string"}, {"name": "quality_warnings", "sequence": "string"}, {"name": "categories", "sequence": "string"}, {"name": "sentence_identifications", "list": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 382684030510, "num_examples": 53172498}], "download_size": 80368267320, "dataset_size": 382684030510}}
2023-08-16T19:22:18+00:00
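The deduplication described in the record above relies on MinHash signatures over document shingles. A toy sketch of the idea in pure Python; this is not the `text-dedup` implementation (a real run uses banded LSH over the signatures to avoid pairwise comparison), just the core estimator:

```python
import hashlib

def shingles(text, n=3):
    """Word n-gram shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash(sh, num_perm=64):
    """Signature: for each seed, the minimum hash over all shingles."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in sh)
        for seed in range(num_perm)
    )

def est_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated Jaccard similarity exceeds a threshold are clustered, and all but one representative per cluster are dropped.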
cbe6e9cc1ee6823a5f9f25bb2224cef547cb11bb
# Dataset Card for constrained_language (pre-training data for simplified English)

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Paper:** https://arxiv.org/abs/2305.17266
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset is one of the two datasets published with "Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale" (https://arxiv.org/abs/2305.17266). The dataset available here is the pre-training data constrained by vocabulary; the other published dataset, i.e. the pre-training data not constrained by vocabulary, is available at https://huggingface.co/datasets/text-machine-lab/unconstrained_language.

The vocabulary used for curating the data is constructed from the AOChildes corpus (https://www.sciencedirect.com/science/article/abs/pii/S0079742121000256), which consists of transcripts of child-directed speech. Hence, the vocabulary constructed from it consists of words spoken or heard by children of age six years or younger. The vocabulary is then used to filter the following widely used text corpora:

- C4: https://arxiv.org/abs/1910.10683
- BookCorpus: https://ieeexplore.ieee.org/document/7410368
- Wikipedia: https://huggingface.co/datasets/wikipedia
- Simplified-Wikipedia: https://simple.wikipedia.org/wiki/Main_Page
- Children's Book Test Corpus: https://arxiv.org/abs/1511.02301

From the above corpora, only those spans are included that contain words only from the predefined vocabulary. The dataset includes 44 million sentences (~6 million sequences, each with ~128 tokens) and 3 million contiguous spans (each with ~128 tokens). Refer to Table 1 of the paper for the data distribution over the different corpora.
### Languages

The dataset contains English only.

## Dataset Structure

The dataset is available in the Arrow dataset format with three splits: train, validation, and test. Every data instance has a single key, "TEXT", containing a text span of approximately 128 tokens.

### Citation Information

If this dataset is useful to you, please cite our work.

```bibtex
@article{deshpande2023honey,
  title={Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale},
  author={Deshpande, Vijeta and Pechi, Dan and Thatte, Shree and Lialin, Vladislav and Rumshisky, Anna},
  journal={arXiv preprint arXiv:2305.17266},
  year={2023}
}
```
text-machine-lab/constrained_language
[ "arxiv:2305.17266", "arxiv:1910.10683", "arxiv:1511.02301", "region:us" ]
2023-06-11T14:47:44+00:00
{"dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4537675604, "num_examples": 9081490}, {"name": "validation", "num_bytes": 50107745, "num_examples": 100000}, {"name": "test", "num_bytes": 50134861, "num_examples": 100000}], "download_size": 3052451421, "dataset_size": 4637918210}}
2023-06-13T04:32:11+00:00
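The curation step in the record above keeps only spans whose every word appears in the child-directed vocabulary. A minimal sketch of that filter; the tokenization and the toy vocabulary here are illustrative assumptions, not the paper's actual pipeline:

```python
import re

def in_vocab(sentence, vocab):
    """True if every alphabetic token of the sentence is in vocab."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return all(t in vocab for t in tokens)

def filter_corpus(sentences, vocab):
    """Keep only vocabulary-constrained sentences."""
    return [s for s in sentences if in_vocab(s, vocab)]
```

Usage: with a vocabulary like `{"the", "cat", "sat", "on", "mat"}`, "The cat sat on the mat" survives the filter while a sentence containing out-of-vocabulary words is dropped.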
008e340593fe01a5e467d52f7bb04811a5ab263a
Project Code: Instructions and data loading for this project are in the .ipynb file, which should run from top to bottom, loading code and training models. If loading pretrained models instead, please skip the training cells and run the load-model cells.

Model Output (from Generator):

**Temperature 0.5**

| Prompt | Political Party | Output |
|--------|-----------------|--------|
| think about | Democrat | think about the dangers of gun violence . i ' m happy to have the endorsement of the brady campaign for women ' s rights and the first female women ' s law center . |
| think about | Republican | think about the president ' s speech ? |
| I am | Democrat | i am proud to serve in congress . |
| I am | Republican | i am pleased to see that the department of defense will continue to support the national guard and the entire state of kansas . |
| Keystone Pipeline | Democrat | keystone pipeline and the food stamp program . i ' m happy to have the support of the farm bureau and the afl - cio . |
| Keystone Pipeline | Republican | keystone pipeline pipeline pipeline and jobs . |
| Gun control | Democrat | gun control legislation . |
| Gun control | Republican | gun control programs and the executive order . you can watch the hearing live here |

**Temperature 0.7**

| Prompt | Political Party | Output |
|--------|-----------------|--------|
| think about | Democrat | think about the conscience of the world . i hope you will join me and the other members of the great community for the community ' s congressional black caucus . |
| think about | Republican | think about the threat posed by isis ? you can watch it here . |
| I am | Democrat | i am glad that the house is taking an important step forward . with the passing of this bill i absolutely believe that this legislation will continue to improve the development of our economy and create jobs . |
| I am | Republican | i am glad to see the senate pass the bill so that we can work together to prevent the spread of this horrible disease . |
| Keystone Pipeline | Democrat | keystone pipeline and the food capacity of the big oil industry . |
| Keystone Pipeline | Republican | keystone pipeline pipeline pipeline security jobs and the economy . |
| Gun control | Democrat | gun control legislation . |
| Gun control | Republican | gun control programs and the food stamp program . i ' m happy to have my ar1 colleagues in my dc office . |

**Temperature 0.75**

| Prompt | Political Party | Output |
|--------|-----------------|--------|
| think about | Democrat | think about the conscience of the world . i so believe that a symbol of faith is not . |
| think about | Republican | think about the threat posed by isis ? you can watch it here . |
| I am | Democrat | i am glad that the house is taking an important step forward . with the passing of this bill i absolutely believe that this legislation will continue to improve the development of our economy and create jobs . |
| I am | Republican | i am glad to see the senate pass the president ' s desk . |
| Keystone Pipeline | Democrat | keystone pipeline and the food capacity of rice . so many of our neighbors are struggling to afford . |
| Keystone Pipeline | Republican | keystone pipeline pipeline pipeline security jobs and the president ' s decision to approve the keystone pipeline . |
| Gun control | Democrat | gun control legislation . |
| Gun control | Republican | gun control programs and the food stamp program . i so believe that a child is a son of a radical household and a woman of a second child of one . i ' m grateful for the work you do to help individuals and families for more than half years . |

**Temperature 0.8**

| Prompt | Political Party | Output |
|--------|-----------------|--------|
| think about | Democrat | think about the conscience of the world . i so believe that a symbol of faith is not . absolutely fact the glory of the world should be exhausted to be any given the ray ' s words on the american way . |
| think about | Republican | think about the threat posed by isis ? you can watch it here . |
| I am | Democrat | i am serious about the tactics that are you on so many . please go to |
| I am | Republican | i am serious about the security of americans . i believe it is a disgrace that the united states must absolutely develop the tools with the president to do what it means to protect the homeland and ensure that the safety and security of our nation has been addressed . |
| Keystone Pipeline | Democrat | keystone pipeline does not go to the bottom table . so it is a disgrace that the republican leadership may absolutely muster the clean power of the gop proposal to advance the house ' s bipartisan resolution . we should not be throwing a christmas break for the president ' s cabinet . if you have the opportunity to tell you the views are not really easy to do something or tell us the best way to spend the time at the grand opening of the new facility will be closed on saturday september 5th at 6 3 p . m . |
| Keystone Pipeline | Republican | keystone pipeline pipeline salvage pipeline security jobs and the president ' s decision to approve the keystone pipeline . |
| Gun control | Democrat | gun control legislation . |
| Gun control | Republican | gun control programs and the food stamp program . i so believe that a child is a son of well absolutely household and a woman . i ' m proud to be co - sponsoring a bill to ensure that the american people will actually have their choice . |

**Temperature 1**

| Prompt | Political Party | Output |
|--------|-----------------|--------|
| think about | Democrat | think about the threat made by gun violence . i so believe that reinstating the scott - led amendment to absolutely eliminate gun violence with the cases of assault weapons and other gun sales is a national disgrace and on the senate floor . as i said i am sen . brown |
| think about | Republican | think about the threat posed by isis . you can watch a q a about my efforts this week . |
| I am | Democrat | i am serious about the tactics that define you . so please end with the attacks on this campaign . absolutely great to see the community supporting campaign finance laws and our state ' s ray ' s campaign on the island . will you join me |
| I am | Republican | i am serious about implementing the tax code . i so believe that a simpler fairer tax code puts our nation and the private sector ahead of a job of our fairer tax code and investing in decreasing healthcare savings . |
read more about my bill hr | | | Keystone Pipeline | Democrat | keystone pipeline gambling . i know that you can watch a happy discussion about scott ' s amendment . | | | | Republican | keystone pipeline pipeline salvage pipeline security jobs and u . s . transportation system . | | | Gun control | Democrat | gun control legislation is made in the 114th congress . so bipartisan end act . | | | | Republican | gun control programs made by gun owners and my so bipartisan opponent . the scott debate this amendment is absolutely simple and written . it ' s the average one deal that was snuck into a bill on the senate floor . as i said i explained sen . pat toomey ' |
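The temperature settings shown for the generator control how sharply the next-token distribution is peaked: lower values make output more repetitive and conservative, higher values more diverse and less coherent. A minimal, illustrative sketch of temperature-scaled sampling — not the project's actual generator code; `sample_with_temperature` is a name made up here:

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=0):
    """Sample a token index after temperature-scaling the logits.

    Lower temperatures sharpen the distribution; higher temperatures
    flatten it. Returns (sampled_index, probabilities).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.Random(seed).choices(range(len(probs)), weights=probs)[0]
    return idx, probs

logits = [2.0, 1.0, 0.1]
_, sharp = sample_with_temperature(logits, temperature=0.5)
_, flat = sample_with_temperature(logits, temperature=1.0)
# At T=0.5 the top token takes a larger share of probability mass than at T=1.0.
print(round(sharp[0], 3), round(flat[0], 3))
```

This is why the T=0.5 outputs above tend to be short and formulaic, while the T=1.0 outputs ramble more.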
jbochenek/political
[ "region:us" ]
2023-06-11T15:29:54+00:00
{}
2023-06-14T23:15:14+00:00
2fd0c648da97c8e21336ca5f2b5b49b6f4f8611f
# New York Times Ingredient Phrase Tagger Dataset Original source: https://github.com/nytimes/ingredient-phrase-tagger From the source: > We use a conditional random field model (CRF) to extract tags from labelled training data, which was tagged by human news assistants. > We wrote about our approach on the [New York Times Open blog](http://open.blogs.nytimes.com/2015/04/09/extracting-structured-data-from-recipes-using-conditional-random-fields/). > This repo contains scripts to extract the Quantity, Unit, Name, and Comments from unstructured ingredient phrases. > We use it on Cooking to format incoming recipes. Given the following input: ``` 1 pound carrots, young ones if possible Kosher salt, to taste 2 tablespoons sherry vinegar 2 tablespoons honey 2 tablespoons extra-virgin olive oil 1 medium-size shallot, peeled and finely diced 1/2 teaspoon fresh thyme leaves, finely chopped Black pepper, to taste ```
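To illustrate the kind of structured record the tagger aims to produce from phrases like those above, here is a naive rule-based sketch — hypothetical code, not the repository's CRF pipeline; the `UNITS` set and field names are assumptions made for this example:

```python
import re

# Illustrative only: the real NYT tagger uses a CRF over token features.
UNITS = {"pound", "pounds", "tablespoon", "tablespoons", "teaspoon", "teaspoons"}

def parse_ingredient(phrase):
    """Split an ingredient phrase into quantity, unit, name, and comment."""
    quantity = unit = None
    tokens = phrase.split()
    i = 0
    if tokens and re.fullmatch(r"[\d/]+", tokens[0]):  # "2", "1/2", ...
        quantity = tokens[0]
        i = 1
    if i < len(tokens) and tokens[i].lower() in UNITS:
        unit = tokens[i]
        i += 1
    rest = " ".join(tokens[i:])
    name, _, comment = rest.partition(",")
    return {"quantity": quantity, "unit": unit,
            "name": name.strip(), "comment": comment.strip() or None}

parsed = parse_ingredient("2 tablespoons sherry vinegar")
print(parsed)
```

A CRF generalizes this idea by learning the token-to-label mapping from the human-tagged training data instead of hand-written rules.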
napsternxg/nyt_ingredients
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "recipe", "ingredients", "region:us" ]
2023-06-11T15:53:58+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "nyt_ingredients", "tags": ["recipe", "ingredients"]}
2023-10-06T23:45:48+00:00
bba7343cc5f2cafb51825bcab492fd8fe4037cfb
## General Information

**Title**: ImageNet-AB

**Description**: ImageNet-AB is an extended version of the ImageNet-1K training set, enriched with annotation byproducts (AB). In addition to the image and corresponding class labels, this dataset provides a rich history of interactions per input signal per front-end component during the annotation process. They include mouse traces, click locations, annotation times, as well as anonymised worker IDs.

**Links**:
- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [ImageNet Annotation Interface](https://github.com/naver-ai/imagenet-annotation-tool)

## Collection Process

**Collection Details**: The additional annotations for the ImageNet-AB dataset were collected using Amazon Mechanical Turk (MTurk) workers from the US region, due to the task being described in English. The task was designed as a human intelligence task (HIT), and the qualification approval rate was set at 90% to ensure the task's quality. Each HIT contained 10 pages of annotation tasks, each page having 48 candidate images. We follow the original annotation interface of ImageNet as much as possible. See the [GitHub repository](https://github.com/naver-ai/imagenet-annotation-tool) and [Paper](https://arxiv.org/abs/2303.17595) for further information.

Annotators interact with different components in the annotation interface, using input devices. This interaction results in time-series data for mouse movements (mouseTracking) and mouse clicks (selectedRecord) for every image. The dataset also records whether the image was ultimately selected by the annotator in the 'selected' field.

**Annotator Compensation**: Annotators were paid 1.5 USD per HIT. The median time taken to complete each HIT was 9.0 minutes, yielding an approximate hourly wage of 10.0 USD. This wage is above the US federal minimum hourly wage.
A total of 20,304 USD was paid to the MTurk annotators, with an additional 20% fee paid to Amazon.

**Annotation Rejection**: We rejected a HIT under the following circumstances.
- The recall rate was lower than 0.333.
- The total number of selections among 480 candidates was lower than 30.
- The annotator did not complete at least 9 out of the 10 pages of tasks.
- The annotation was not found in our database, and the secret hash code for confirming their completion was incorrect.

In total, 1,145 out of 14,681 completed HITs (7.8%) were rejected.

**Collection Time**: The entire annotation collection process took place between December 18, 2021, and December 31, 2021.

## Data Schema

```json
{
  "imageID": "n01440764/n01440764_105",
  "originalImageHeight": 375,
  "originalImageWidth": 500,
  "selected": true,
  "imageHeight": 243,
  "imageWidth": 243,
  "imagePosition": {"x": 857, "y": 1976},
  "hoveredRecord": [
    {"action": "enter", "time": 1641425051},
    {"action": "leave", "time": 1641425319}
  ],
  "selectedRecord": [
    {"x": 0.540, "y": 0.473, "time": 1641425052}
  ],
  "mouseTracking": [
    {"x": 0.003, "y": 0.629, "time": 1641425051},
    {"x": 0.441, "y": 0.600, "time": 1641425052}
  ],
  "worker_id": "47DBDD543E",
  "assignment_id": "36DSNE9QZFQKOCZGAHS6R63J6E1OJL",
  "page_idx": 3
}
```

## Usage

One could use the annotation byproducts to improve the model generalisability and robustness. This is appealing, as the annotation byproducts do not incur extra annotation costs for the annotators. For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).

## Dataset Statistics

There were two annotation rounds covering 1,281,167 ImageNet-1K training images. In the first round, annotators re-selected 71.8% of these images. The remaining 28.2% were re-packaged into a second batch of HITs, from which an additional 14.9% were selected. In total, 1,110,786 (86.7%) of ImageNet-1K training images were re-selected, with annotation byproducts available for 1,272,225 (99.3%) of the images.
Other dataset statistics are inherited from the parent dataset, ImageNet-1K.

## Ethics and Legalities

The crowdsourced annotators were fairly compensated for their time at a rate well above the U.S. federal minimum wage. In terms of data privacy, the dataset maintains the same ethical standards as the original ImageNet-1K dataset. Worker identifiers were anonymized using a non-reversible hashing function, ensuring privacy. Our data collection has obtained IRB approval from an author’s institute.

For the future collection of annotation byproducts, we note that there exist potential risks that annotation byproducts may contain annotators’ private information. Data collectors may even attempt to leverage more private information as byproducts. We urge data collectors not to collect or exploit private information from annotators. Whenever appropriate, one must ask for the annotators’ consent.

## Citation Information

Please cite our work as follows:

```bibtex
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```

## Maintenance and Updates

This section will be updated as and when there are changes or updates to the dataset.

## Known Limitations

We have not been able to acquire annotation byproducts for all original ImageNet-1K dataset samples. This is because not all ImageNet-1K samples are re-selected by the annotators, potentially because of errors in the original ImageNet-1K dataset. Given the budget constraint, we have not been able to acquire 10+ annotations per sample, as done in the original work.
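As a usage sketch, the per-image byproducts can be aggregated into simple behavioral features — for instance, how long an annotator hovered over an image and how far the mouse travelled. The helper functions below are hypothetical (not part of the released tooling) and operate on the example record from the Data Schema section:

```python
import math

# Example record copied from the data schema above (abridged to the fields used).
record = {
    "imageID": "n01440764/n01440764_105",
    "selected": True,
    "hoveredRecord": [
        {"action": "enter", "time": 1641425051},
        {"action": "leave", "time": 1641425319},
    ],
    "mouseTracking": [
        {"x": 0.003, "y": 0.629, "time": 1641425051},
        {"x": 0.441, "y": 0.600, "time": 1641425052},
    ],
}

def hover_duration(rec):
    """Seconds between the first 'enter' and the last 'leave' event."""
    enters = [e["time"] for e in rec["hoveredRecord"] if e["action"] == "enter"]
    leaves = [e["time"] for e in rec["hoveredRecord"] if e["action"] == "leave"]
    return max(leaves) - min(enters)

def trace_length(rec):
    """Total mouse path length in normalized image coordinates."""
    pts = rec["mouseTracking"]
    return sum(
        math.hypot(b["x"] - a["x"], b["y"] - a["y"])
        for a, b in zip(pts, pts[1:])
    )

print(hover_duration(record), round(trace_length(record), 4))
```

Features like these are the kind of "free" training signal the paper proposes to exploit alongside the class labels.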
coallaoh/ImageNet-AB
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:https://huggingface.co/datasets/imagenet-1k", "language:en", "license:apache-2.0", "arxiv:2303.17595", "region:us" ]
2023-06-11T15:54:47+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["https://huggingface.co/datasets/imagenet-1k"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "imagenet", "pretty_name": "ImageNet"}
2023-12-31T03:02:26+00:00
be4770eac0d6138aec946b22e192da022396f5aa
## General Information

**Title**: COCO-AB

**Description**: The COCO-AB dataset is an extension of the COCO 2014 training set, enriched with additional annotation byproducts (AB). The data includes 82,765 reannotated images from the original COCO 2014 training set. It has relevance in computer vision, specifically in object detection and location. The aim of the dataset is to provide a richer understanding of the images (without extra costs) by recording additional actions and interactions from the annotation process.

**Links**:
- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [COCO Annotation Interface](https://github.com/naver-ai/coco-annotation-tool)

## Collection Process

**Collection Details**: The additional annotations for the COCO-AB dataset were collected using Amazon Mechanical Turk (MTurk) workers from the US region, due to the task being described in English. The task was designed as a human intelligence task (HIT), and the qualification approval rate was set at 90% to ensure the task's quality. Each HIT contained 20 pages of annotation tasks, each page having a single candidate image to be tagged. We follow the original annotation interface of COCO as much as possible. See the [GitHub repository](https://github.com/naver-ai/coco-annotation-tool) and [Paper](https://arxiv.org/abs/2303.17595) for further information.

A total of 4,140 HITs were completed, with 365 HITs being rejected based on criteria such as recall rate, accuracy of icon location, task completion rate, and verification with database and secret hash code.

**Annotator Compensation**: Annotators were paid 2.0 USD per HIT. The median time taken to complete each HIT was 12.1 minutes, yielding an approximate hourly wage of 9.92 USD. This wage is above the US federal minimum hourly wage. A total of 8,280 USD was paid to the MTurk annotators, with an additional 20% fee paid to Amazon.
**Annotation Rejection**: We rejected a HIT under the following circumstances.
- The recall rate was lower than 0.333.
- The accuracy of icon location was lower than 0.75.
- The annotator did not complete at least 16 out of the 20 pages of tasks.
- The annotation was not found in our database, and the secret hash code for confirming their completion was incorrect.

In total, 365 out of 4,140 completed HITs (8.8%) were rejected.

**Collection Time**: The entire annotation collection process took place between January 9, 2022, and January 12, 2022.

## Data Schema

```json
{
  "image_id": 459214,
  "originalImageHeight": 428,
  "originalImageWidth": 640,
  "categories": ["car", "bicycle"],
  "imageHeight": 450,
  "imageWidth": 450,
  "timeSpent": 22283,
  "actionHistories": [
    {"actionType": "add", "iconType": "car", "pointTo": {"x": 0.583, "y": 0.588}, "timeAt": 16686},
    {"actionType": "add", "iconType": "bicycle", "pointTo": {"x": 0.592, "y": 0.639}, "timeAt": 16723}
  ],
  "categoryHistories": [
    {"categoryIndex": 1, "categoryName": "Animal", "timeAt": 10815, "usingKeyboard": false},
    {"categoryIndex": 10, "categoryName": "IndoorObjects", "timeAt": 19415, "usingKeyboard": false}
  ],
  "mouseTracking": [
    {"x": 0.679, "y": 0.862, "timeAt": 15725},
    {"x": 0.717, "y": 0.825, "timeAt": 15731}
  ],
  "worker_id": "00AA3B5E80",
  "assignment_id": "3AMYWKA6YLE80HK9QYYHI2YEL2YO6L",
  "page_idx": 8
}
```

## Usage

One could use the annotation byproducts to improve the model generalisability and robustness. This is appealing, as the annotation byproducts do not incur extra annotation costs for the annotators. For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).

## Dataset Statistics

Annotators have reannotated 82,765 (99.98%) of 82,783 training images from the COCO 2014 training set. For those images, we have recorded the annotation byproducts. We found that each HIT recalls 61.9% of the list of classes per image, with a standard deviation of ±0.118%p.
The average localisation accuracy for icon placement is 92.3%, with a standard deviation of ±0.057%p.

## Ethics and Legalities

The crowdsourced annotators were fairly compensated for their time at a rate well above the U.S. federal minimum wage. In terms of data privacy, the dataset maintains the same ethical standards as the original COCO dataset. Worker identifiers were anonymized using a non-reversible hashing function, ensuring privacy. Our data collection has obtained IRB approval from an author’s institute.

For the future collection of annotation byproducts, we note that there exist potential risks that annotation byproducts may contain annotators’ private information. Data collectors may even attempt to leverage more private information as byproducts. We urge data collectors not to collect or exploit private information from annotators. Whenever appropriate, one must ask for the annotators’ consent.

## Maintenance and Updates

This section will be updated as and when there are changes or updates to the dataset.

## Known Limitations

Given the budget constraint, we have not been able to acquire 8+ annotations per sample, as done in the original work.

## Citation Information

```bibtex
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```
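As a small usage sketch, the `actionHistories` byproducts can be replayed into per-category icon locations. This is illustrative code, not part of the released tooling; in particular, the `delete` action type handled below is an assumption not shown in the example record:

```python
# Example record copied from the data schema above (abridged to the fields used).
record = {
    "image_id": 459214,
    "timeSpent": 22283,
    "actionHistories": [
        {"actionType": "add", "iconType": "car",
         "pointTo": {"x": 0.583, "y": 0.588}, "timeAt": 16686},
        {"actionType": "add", "iconType": "bicycle",
         "pointTo": {"x": 0.592, "y": 0.639}, "timeAt": 16723},
    ],
}

def placed_icons(rec):
    """Replay the action history into {category: [(x, y), ...]}.

    'add' places an icon; a hypothetical 'delete' action would remove the
    most recently placed icon of that type.
    """
    icons = {}
    for a in rec["actionHistories"]:
        if a["actionType"] == "add":
            icons.setdefault(a["iconType"], []).append(
                (a["pointTo"]["x"], a["pointTo"]["y"]))
        elif a["actionType"] == "delete" and icons.get(a["iconType"]):
            icons[a["iconType"]].pop()
    return icons

print(placed_icons(record))
```

The resulting click coordinates give a weak localization signal per category at no extra annotation cost.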
coallaoh/COCO-AB
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:https://huggingface.co/datasets/HuggingFaceM4/COCO", "language:en", "license:apache-2.0", "arxiv:2303.17595", "region:us" ]
2023-06-11T15:55:34+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["https://huggingface.co/datasets/HuggingFaceM4/COCO"], "task_categories": ["image-classification"], "paperswithcode_id": "coco", "pretty_name": "COCO"}
2023-07-23T17:22:22+00:00
9e62e45a350a04a8ce324c609aab98bb169cd586
# Dataset Card for unconstrained_language

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Paper:** https://arxiv.org/abs/2305.17266
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset is one of the two datasets published by "Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale" (https://arxiv.org/abs/2305.17266). The dataset available at this link is the pre-training data **not** constrained by any predefined vocabulary. The other published dataset, i.e. the pre-training data that is constrained by vocabulary, is available at https://huggingface.co/datasets/text-machine-lab/constrained_language.

This dataset is curated by randomly sampling text spans (of an approximate length of 128 tokens) from the following corpora:
- C4: https://arxiv.org/abs/1910.10683
- BookCorpus: https://ieeexplore.ieee.org/document/7410368
- Wikipedia: https://huggingface.co/datasets/wikipedia
- Simplified-Wikipedia: https://simple.wikipedia.org/wiki/Main_Page
- Children's Book Test Corpus: https://arxiv.org/abs/1511.02301

The dataset includes ~9 million contiguous spans, each with approximately 128 tokens.

### Languages

The dataset contains the English language only.

## Dataset Structure

The dataset is available in the Arrow dataset format with three splits: train, validation, and test. Every data instance has only one key, "TEXT", which contains a text span of approximately 128 tokens.

### Citation Information

If this dataset is useful to you, please cite our work.
```bibtex
@article{deshpande2023honey,
  title={Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale},
  author={Deshpande, Vijeta and Pechi, Dan and Thatte, Shree and Lialin, Vladislav and Rumshisky, Anna},
  journal={arXiv preprint arXiv:2305.17266},
  year={2023}
}
```
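The 128-token span sampling described in the summary can be illustrated with a minimal sketch. This is not the authors' actual curation script; the function and parameter names are made up for illustration:

```python
import random

def sample_spans(tokens, span_len=128, n_spans=3, seed=0):
    """Sample contiguous token spans, mimicking the curation step above."""
    rng = random.Random(seed)
    spans = []
    for _ in range(n_spans):
        start = rng.randrange(0, max(1, len(tokens) - span_len))
        spans.append(tokens[start:start + span_len])
    return spans

# A toy whitespace-tokenized "corpus" of 1,000 tokens.
corpus = ("lorem ipsum " * 500).split()
spans = sample_spans(corpus)
print(len(spans), len(spans[0]))
```

Each sampled span is then stored as a single string under the dataset's text key.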
text-machine-lab/unconstrained_language
[ "arxiv:2305.17266", "arxiv:1910.10683", "arxiv:1511.02301", "region:us" ]
2023-06-11T16:01:19+00:00
{"dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5437652389, "num_examples": 9081490}, {"name": "validation", "num_bytes": 50107745, "num_examples": 100000}, {"name": "test", "num_bytes": 50134861, "num_examples": 100000}], "download_size": 3732550490, "dataset_size": 5537894995}}
2023-06-13T04:32:46+00:00
7f38fa20e34205c155f87a42c7b3eada55a8285b
# Dataset Card for "yugioh_images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FabioArdi/yugioh_images
[ "region:us" ]
2023-06-11T16:14:16+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "name", "dtype": "string"}, {"name": "frameType", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 660338115.545, "num_examples": 12405}], "download_size": 656146541, "dataset_size": 660338115.545}}
2023-06-11T16:16:55+00:00
7d2e673096dda2dd608963dae0794c667668b7a2
shrinath-suresh/stack_overflow_pytorch
[ "license:apache-2.0", "region:us" ]
2023-06-11T16:19:51+00:00
{"license": "apache-2.0"}
2023-06-11T16:20:39+00:00
1374b3d88c5cf9677bfcba7d4cd3ec76ce5f4887
# Dataset Card for "oscar2301_de_deduped_filtered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bjoernp/oscar2301_de_deduped_filtered
[ "region:us" ]
2023-06-11T16:46:08+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "warc_headers", "struct": [{"name": "warc-record-id", "dtype": "string"}, {"name": "warc-date", "dtype": "string"}, {"name": "content-type", "dtype": "string"}, {"name": "content-length", "dtype": "int32"}, {"name": "warc-type", "dtype": "string"}, {"name": "warc-identified-content-language", "dtype": "string"}, {"name": "warc-refers-to", "dtype": "string"}, {"name": "warc-target-uri", "dtype": "string"}, {"name": "warc-block-digest", "dtype": "string"}]}, {"name": "identification", "struct": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}, {"name": "harmful_pp", "dtype": "float32"}, {"name": "tlsh", "dtype": "string"}, {"name": "quality_warnings", "sequence": "string"}, {"name": "categories", "sequence": "string"}, {"name": "sentence_identifications", "list": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 303054722776.2827, "num_examples": 42108307}], "download_size": 211315018208, "dataset_size": 303054722776.2827}}
2023-06-11T20:57:44+00:00
37b231a6e80047567dac941c6d65df07bb3cf5c4
# Dataset Card for "b0cca9c0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b0cca9c0
[ "region:us" ]
2023-06-11T17:34:33+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1335, "dataset_size": 182}}
2023-06-11T17:34:34+00:00
59f43ffbad8cc99f990cb6cadcd0e9ce406eb438
thakurvishesh1/good_prompt
[ "license:openrail", "region:us" ]
2023-06-11T18:22:24+00:00
{"license": "openrail"}
2023-06-11T18:23:12+00:00
d2bdd7f727618ca81d982b5891901e8624ecb2e2
# Dataset Card for Dataset Name

### Dataset Summary

This is a lemmatized version of the AG News dataset.

### Languages

English

### Citation Information

```bibtex
@inproceedings{xu-etal-2023-vontss,
    title = "v{ONTSS}: v{MF} based semi-supervised neural topic modeling with optimal transport",
    author = "Xu, Weijie and Jiang, Xiaoyu and Sengamedu Hanumantha Rao, Srinivasan and Iannacci, Francis and Zhao, Jinjin",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.271",
    doi = "10.18653/v1/2023.findings-acl.271",
    pages = "4433--4457",
    abstract = "Recently, Neural Topic Models (NTM), inspired by variational autoencoders, have attracted a lot of research interest; however, these methods have limited applications in the real world due to the challenge of incorporating human knowledge. This work presents a semi-supervised neural topic modeling method, vONTSS, which uses von Mises-Fisher (vMF) based variational autoencoders and optimal transport. When a few keywords per topic are provided, vONTSS in the semi-supervised setting generates potential topics and optimizes topic-keyword quality and topic classification. Experiments show that vONTSS outperforms existing semi-supervised topic modeling methods in classification accuracy and diversity. vONTSS also supports unsupervised topic modeling. Quantitative and qualitative experiments show that vONTSS in the unsupervised setting outperforms recent NTMs on multiple aspects: vONTSS discovers highly clustered and coherent topics on benchmark datasets. It is also much faster than the state-of-the-art weakly supervised text classification method while achieving similar classification performance. We further prove the equivalence of optimal transport loss and cross-entropy loss at the global minimum.",
}
```
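The card does not say which lemmatizer was used (NLTK's WordNetLemmatizer and spaCy are common choices). The step can be illustrated with a toy dictionary lookup; the `LEMMAS` table here is a made-up illustration, not the actual preprocessing code:

```python
# Toy lemma table for illustration only; a real pipeline would use a
# POS-aware lemmatizer such as NLTK's WordNetLemmatizer or spaCy.
LEMMAS = {"stocks": "stock", "rose": "rise", "markets": "market", "traded": "trade"}

def lemmatize(text):
    """Lowercase each token and map it to its lemma if known."""
    return " ".join(LEMMAS.get(tok.lower(), tok.lower()) for tok in text.split())

print(lemmatize("Stocks rose as markets traded higher"))
```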
xwjzds/ag_news_lemma_train
[ "region:us" ]
2023-06-11T18:28:03+00:00
{}
2023-09-16T20:57:50+00:00
c7eeb1da44737258d117f5c702c88f14b430f2ad
# Dataset Card for "the_stack_repo_languages" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bjoernp/the_stack_repo_languages
[ "region:us" ]
2023-06-11T18:50:54+00:00
{"dataset_info": {"features": [{"name": "text_lang", "dtype": "string"}, {"name": "confidence", "dtype": "float64"}, {"name": "repo_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1681449, "num_examples": 35913}], "download_size": 0, "dataset_size": 1681449}}
2023-06-11T20:52:42+00:00
da79065c612da46316eec7de357f33ead60959cf
# Dataset Card for "OSCAR-2109" Num tokens: 2,884,522,212 tokens
vietgpt/OSCAR-2109
[ "region:us" ]
2023-06-11T18:53:42+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "perplexity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 16802536783.756039, "num_examples": 5098334}], "download_size": 8245526034, "dataset_size": 16802536783.756039}}
2023-06-13T03:53:37+00:00
1daf32a3e6493ea55887d42bbf6b713cad0e7d07
# Dataset Card for "librespeech_dev_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hsali/librespeech_dev_clean
[ "region:us" ]
2023-06-11T18:59:04+00:00
{"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 641600432, "num_examples": 2703}], "download_size": 566257946, "dataset_size": 641600432}}
2023-06-11T19:01:34+00:00
086ebeee20d4cc3b3e7c05ae703fcf278ae3a759
## DataComp-1B

This repository contains metadata files for DataComp-1B. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [GitHub repository](https://github.com/mlfoundations/datacomp).

We distribute the image url-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.

## Terms and Conditions

We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which cover their dataset library. Specifically, any content you download, access, or use from our index is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to liabilities related to image downloading and storage.
mlfoundations/datacomp_1b
[ "license:cc-by-4.0", "region:us" ]
2023-06-11T19:12:44+00:00
{"license": "cc-by-4.0"}
2023-08-21T20:43:05+00:00
857a340c3a65c4d4f5797e75fbab8d30756bf939
# Dataset Card for "the-stack-dedup-markdown-deu_Latn" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bjoernp/the-stack-dedup-markdown-deu_Latn
[ "region:us" ]
2023-06-11T19:40:36+00:00
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "int64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "int64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}, {"name": "text_lang", "dtype": "string"}, {"name": "confidence", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 562944473.2668307, "num_examples": 127321}], "download_size": 452771983, "dataset_size": 562944473.2668307}}
2023-06-11T19:41:32+00:00
5541473c9516db2ce0704ad21c431196610031ee
# Dataset Card for "voxelgym_5c_42x42_100000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Cubpaw/voxelgym_5c_42x42_100000
[ "region:us" ]
2023-06-11T20:05:00+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}, {"name": "rgb_label", "dtype": "image"}, {"name": "path_label", "dtype": "image"}, {"name": "path_rgb_label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 73637120.0, "num_examples": 80000}, {"name": "validation", "num_bytes": 18495820.0, "num_examples": 20000}], "download_size": 70037769, "dataset_size": 92132940.0}}
2023-06-11T20:06:32+00:00
906190eda0dc5ff11b57f277500ba0fea83ac5ed
# EasyQA: A Kindergarten-Level QA Dataset for Investigating Truthfulness. EasyQA is a GPT-3.5-turbo-generated dataset of easy kindergarten-level facts, meant to be used to prompt and evaluate large language models for "common-sense" truthful responses. This dataset was originally created to understand how different types of truthfulness may be represented in the intermediate activations of large language models. EasyQA comprises 2346 questions that span 50 categories, including art, technology, education, music, and animals. The questions are meant to be extremely simple and obvious, eliciting an obvious truth that would not be susceptible to misconceptions -- making it an excellent complement to benchmarks targeting other types of truth (e.g. TruthfulQA, which focuses on common misconceptions). Credits to Kevin Wang, Richard Ren, and Phillip Guo. ## Dataset Creation The dataset was created by prompting GPT-3.5-turbo with: "*Please generate 50 easy, obvious, common-knowledge questions that a kindergartener would learn in class about the topic prompted, as well as correct and incorrect responses. These questions should be less like trivia questions (i.e. Who is known as the Queen of Jazz?) and more like obvious facts (ie What color is the sky?). Your generations should be in the format: Question: {Your question here} Right: {Right answer} Wrong: {Wrong answer} where each question is a new line. Please follow this format verbatim (e.g. 
do not number the questions).*" The following categories were used: ``` Animals Plants Food and drink Music Movies Television shows Literature Sports Geography History Science Mathematics Art Technology Politics Business and Economy Education Health and Fitness Environment and Climate Space and Astronomy Fashion and Style Video Games Travel and Tourism Language and Literature Religion and Spirituality Famous Personalities Cultural Events/Festivals Cars and Automobiles Photography Architecture Medicine and Health Psychology Philosophy Law Social Sciences Human Rights Current Events/News Global Affairs National Landmarks Celebrities and Entertainment Nature Cooking and Baking Gardening DIY Projects Dance Comic Books and Graphic Novels Mythology and Folklore Internet and Social Media Parenting and Family Life Home Decor ```
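A minimal sketch of how generations in this `Question:`/`Right:`/`Wrong:` format could be parsed into records (the helper and sample text below are illustrative, not part of the released dataset):

```python
def parse_generation(text: str):
    """Parse 'Question: ... / Right: ... / Wrong: ...' lines into dicts."""
    records, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        for prefix, field in (("Question:", "question"),
                              ("Right:", "right"),
                              ("Wrong:", "wrong")):
            if line.startswith(prefix):
                current[field] = line[len(prefix):].strip()
        # A record is complete once all three fields have been seen.
        if len(current) == 3:
            records.append(current)
            current = {}
    return records

sample = """Question: What color is the sky?
Right: Blue
Wrong: Green
Question: How many legs does a dog have?
Right: Four
Wrong: Six"""

print(parse_generation(sample))
```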
notrichardren/easy_qa
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
2023-06-11T20:29:56+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["question-answering"], "pretty_name": "Easy Question Answer"}
2023-06-26T11:33:45+00:00
c83d94a552c26ded2ef516a1c7ebb9ef2e991556
# Dataset Card for "L3D" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pahautelman/L3D
[ "region:us" ]
2023-06-11T20:33:51+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9474836753.64, "num_examples": 769608}], "download_size": 8860171644, "dataset_size": 9474836753.64}}
2023-06-12T03:22:54+00:00
3be69e46a2f23c71752ab49cde1295467b9a96db
# Dataset Card for "the-stack-dedup-python-deu_Latn" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bjoernp/the-stack-dedup-python-deu_Latn
[ "region:us" ]
2023-06-11T20:48:41+00:00
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "int64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "int64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 267637689.56000614, "num_examples": 48262}], "download_size": 90252233, "dataset_size": 267637689.56000614}}
2023-06-11T20:53:29+00:00
94ce7cce9a85251f878080c77a90ee1e7515001b
smckay42/openai_mining_dataset_openvalidators_prepared
[ "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "region:us" ]
2023-06-11T21:43:34+00:00
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"]}
2023-06-12T00:48:08+00:00
66a58b15a0bb763a541529d0e57dfa82a6f1f226
# Dataset Card for "Sample_test_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/Sample_test_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_10
[ "region:us" ]
2023-06-11T21:53:10+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_with_openai_rices", "num_bytes": 4266, "num_examples": 10}], "download_size": 5331, "dataset_size": 4266}}
2023-06-11T21:53:13+00:00
c75aceb6bdfb2359a4ce37ed777e515924c2bb47
# Dataset Card for "unsplash_20k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wtcherr/unsplash_20k
[ "region:us" ]
2023-06-11T22:46:08+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2560499324.351, "num_examples": 19999}], "download_size": 440556200, "dataset_size": 2560499324.351}}
2023-06-11T22:49:45+00:00
2f56b17cd32cfdbc4c0929391ccbb7cf6b3456ab
killah-t-cell/multi_controlnet_dataset_22
[ "region:us" ]
2023-06-11T22:57:00+00:00
{}
2023-06-11T22:58:59+00:00
1c0d71aa6bf0d2ee11f44c1d367289221f45ced5
TankuVie/vie_sent_segment_unpunctual_text
[ "license:other", "region:us" ]
2023-06-11T23:46:25+00:00
{"license": "other"}
2023-06-16T08:01:01+00:00
33dd3bc5b4acb003ed1296ca1e3830750ce48772
# Dataset Card for "multi_controlnet_dataset_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
killah-t-cell/multi_controlnet_dataset_test
[ "region:us" ]
2023-06-12T00:12:20+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "conditioning", "1": "images"}}}}], "splits": [{"name": "test", "num_bytes": 209560.0, "num_examples": 6}], "download_size": 211793, "dataset_size": 209560.0}}
2023-06-12T00:16:20+00:00
317cd697cd7f2e88a019c7f777e5c3a09c5f791f
# Dataset Card for "articles_samples" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kamaljp/articles_samples
[ "region:us" ]
2023-06-12T00:12:32+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "tags", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 543097961.7192049, "num_examples": 100000}, {"name": "validation", "num_bytes": 5430979.617192049, "num_examples": 1000}, {"name": "test", "num_bytes": 5430979.617192049, "num_examples": 1000}], "download_size": 328367988, "dataset_size": 553959920.953589}}
2023-06-12T00:13:07+00:00
7c2b706e521857e754983f95d0ae7bba4c34de24
# Dataset Card for "article_w_table" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kamaljp/article_w_table
[ "region:us" ]
2023-06-12T00:15:25+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "tags", "dtype": "string"}, {"name": "tabled_format", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72711, "num_examples": 10}], "download_size": 54309, "dataset_size": 72711}}
2023-06-12T00:15:27+00:00
67f213509b70e83d8f12ac4304b96f0f02b555d2
Wrathless/Testing
[ "license:apache-2.0", "region:us" ]
2023-06-12T00:17:29+00:00
{"license": "apache-2.0"}
2023-06-12T00:17:29+00:00
b63715d8d19564ed69a928b01d9a6e8daeb897e1
# QuALITY: Question Answering with Long Input Texts, Yes! This is the QuALITY v1.0.1 training set converted to instruction-style prompts. All credit to the original authors. See https://github.com/nyu-mll/quality for details.
chargoddard/QuALITY-instruct
[ "language:en", "region:us" ]
2023-06-12T00:20:53+00:00
{"language": "en", "pretty_name": "https://github.com/nyu-mll/quality", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62125756, "num_examples": 2523}, {"name": "dev", "num_bytes": 50877356, "num_examples": 2086}], "download_size": 5451636, "dataset_size": 113003112}}
2023-07-13T23:29:45+00:00
1f8b9de1c88943b201d3f314a40b9db0d0632afe
orangetin/oig-chip
[ "license:apache-2.0", "region:us" ]
2023-06-12T00:24:20+00:00
{"license": "apache-2.0"}
2023-06-12T00:32:23+00:00
82ea92c6a2b422bec85f3a817482c870c5492c47
# Dataset Card for "multi_controlnet_dataset_test_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
killah-t-cell/multi_controlnet_dataset_test_2
[ "region:us" ]
2023-06-12T00:35:25+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 209858.0, "num_examples": 6}], "download_size": 212082, "dataset_size": 209858.0}}
2023-06-12T00:35:28+00:00
5c022cba54ad043da4fecccf412fee3d2d1c6669
# Dataset Card for "multi_controlnet_dataset_final_final_v2_for_real_this_time" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
killah-t-cell/multi_controlnet_dataset_final_final_v2_for_real_this_time
[ "region:us" ]
2023-06-12T01:22:36+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning_image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2896795342.864, "num_examples": 45079}], "download_size": 2750873349, "dataset_size": 2896795342.864}}
2023-06-12T02:59:34+00:00
776c1d6cb714a0842455ffaa21c64e3d8c320495
![Screenshot 2023-06-11 at 23.19.31.png](https://s3.amazonaws.com/moonup/production/uploads/6226bae1c8655fec3995a41d/cO9OKcYBO7-MbDZJopF6J.png)

## General information

The overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological groups plus 1 healthy-subject group), as described below. Furthermore, each patient comes with the following additional information: weight, height, and the diastolic and systolic phase instants.

## Tasks

The main task of this dataset is the semantic segmentation of the heart in cardiac magnetic resonance images, specifically the endocardium and myocardium. This task is very relevant for the detection of cardiovascular diseases. Manual segmentation is a very time-consuming process, so performing the segmentation automatically with artificial intelligence algorithms can be extremely beneficial: it removes a significant bottleneck and allows cardiovascular diseases to be detected in a timely manner.

## Reference

O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, et al., "Deep Learning Techniques for Automatic MRI Cardiac Multi-structures Segmentation and Diagnosis: Is the Problem Solved?", IEEE Transactions on Medical Imaging, vol. 37, no. 11, pp. 2514-2525, Nov. 2018. doi: 10.1109/TMI.2018.2837502
msepulvedagodoy/acdc
[ "task_categories:image-segmentation", "size_categories:1K<n<10K", "language:es", "language:en", "medical", "region:us" ]
2023-06-12T02:05:50+00:00
{"language": ["es", "en"], "size_categories": ["1K<n<10K"], "task_categories": ["image-segmentation"], "pretty_name": "ACDC-Automated Cardiac Diagnosis Challenge", "tags": ["medical"]}
2023-06-12T16:16:21+00:00
8e5af40e226f2f2504e74082d8830b5a834e48ad
# Dataset Card for "synthetic_marketing_emails_demo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dmitrijsk/synthetic_marketing_emails_demo
[ "region:us" ]
2023-06-12T02:19:44+00:00
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3930, "num_examples": 3}], "download_size": 12228, "dataset_size": 3930}}
2023-06-12T02:27:11+00:00
b5a2d885b8f04ecdd6caf1121a8e39a2b445e78e
# Dataset Card for "wit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
theodor1289/wit
[ "region:us" ]
2023-06-12T02:41:21+00:00
{"dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "image", "dtype": {"image": {"decode": false}}}, {"name": "text", "dtype": "string"}, {"name": "context_page_description", "dtype": "string"}, {"name": "context_section_description", "dtype": "string"}, {"name": "caption_alt_text_description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 313793832273.375, "num_examples": 3921869}, {"name": "test", "num_bytes": 34879359766.5, "num_examples": 435764}], "download_size": 992115227, "dataset_size": 348673192039.875}}
2023-06-15T07:04:59+00:00
40f0b870758d0362b0d92d59fddfab755856d721
TheAgon1sT/nedelina-kolevatest
[ "license:afl-3.0", "region:us" ]
2023-06-12T02:51:49+00:00
{"license": "afl-3.0"}
2023-06-12T02:51:49+00:00
135221d749888944672a05adf4551ad9f960448f
houck2040/rice_mba
[ "license:mit", "region:us" ]
2023-06-12T03:15:38+00:00
{"license": "mit"}
2023-06-12T03:16:01+00:00
1a2ef7364f50cbb41fbb85b3b5a4e8d92bc1222c
# Dataset Card for "promptTTS_encodec_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kuanhuggingface/promptTTS_encodec_v2_small
[ "region:us" ]
2023-06-12T04:36:48+00:00
{"dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "src_encodec_0", "sequence": "int64"}, {"name": "src_encodec_1", "sequence": "int64"}, {"name": "src_encodec_2", "sequence": "int64"}, {"name": "src_encodec_3", "sequence": "int64"}, {"name": "src_encodec_4", "sequence": "int64"}, {"name": "src_encodec_5", "sequence": "int64"}, {"name": "src_encodec_6", "sequence": "int64"}, {"name": "src_encodec_7", "sequence": "int64"}, {"name": "tgt_encodec_0", "sequence": "int64"}, {"name": "tgt_encodec_1", "sequence": "int64"}, {"name": "tgt_encodec_2", "sequence": "int64"}, {"name": "tgt_encodec_3", "sequence": "int64"}, {"name": "tgt_encodec_4", "sequence": "int64"}, {"name": "tgt_encodec_5", "sequence": "int64"}, {"name": "tgt_encodec_6", "sequence": "int64"}, {"name": "tgt_encodec_7", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2975164369, "num_examples": 47270}, {"name": "validation", "num_bytes": 97855975, "num_examples": 1349}, {"name": "test", "num_bytes": 80754157, "num_examples": 1350}], "download_size": 437609990, "dataset_size": 3153774501}}
2023-06-12T04:45:16+00:00
5c9e97f38432130057cd1eb2411171e39f4fdd18
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains data collected from GenBank. It is organized so that all genes are separated out of each DNA sequence and classified according to region and coding type; this way, users can get more detailed information about each DNA sequence. The dataset also contains a source, which is the whole DNA sequence, that can be compared against each segment to see its exact location. The dataset contains 937 files with about 200 million records and requires 300-400 GB of storage space. Users can therefore specify the number of files to use with the code below according to their needs; to download all files, pass 937 as the second argument. ```python datasets.load_dataset('wyxu/Genome_database', num_urls=number_of_files) ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances ```python {DNA id: AP013063.1 Organism: Serratia marcescens SM39 year: 2017 region type:coding specific_class: Protein Product:thr operon leader peptide sequence: ATGCGCAACATCAGCCTGAAAACCACAATTATTACCACCACCGATACCACAGGTAACGGGGCGGGCTGA gc_content:0.52173913 translation code: MRNISLKTTIITTTDTTGNGAG start_position: 207 end_position: 276} ``` ### Data Fields __DNA id__: id number for the whole DNA sequence; sequences with the same DNA id are from the same DNA __Organism__: organism of the DNA __year__: the year of the DNA sequence __region type__: determines the general type of the sequence. 
Everything typically classified as a coding region is labeled coding, while other types, including case-dependent ones, are labeled according to their own type, such as regulator, repeat_region, gap, intron, exon, etc. (__Note__: when classifying the coding type, all CDS, mRNA, tmRNA, tRNA, rRNA and related features such as propeptide, sig_peptide, mat_peptide were classified as coding. To minimize missed coding parts, all other categories that have an associated product were also classified as coding.) __specific class__: if the sequence is a coding sequence, it is classified according to its product type, such as RNA or Protein. Regulators are likewise classified by their own class, such as terminator or ribosome __Product__: if the sequence produces a protein, the product name is listed __sequence__: the actual sequence __gc_content__: the GC content of the sequence __translation code__: if the sequence produces a protein, the translation is provided as a reference __start_position__: the start position of the segment __end_position__: the end position of the segment ### Data Splits The first 80% of the files are used as the training dataset, and the last 20% as the test dataset. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The data are all collected from the most recent release of GenBank, release 255. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
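As a quick sanity check (a sketch, not part of the dataset tooling), the `gc_content` field is simply the fraction of G and C bases in `sequence`, and can be recomputed for the data instance shown above:

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Sequence from the example data instance above.
seq = "ATGCGCAACATCAGCCTGAAAACCACAATTATTACCACCACCGATACCACAGGTAACGGGGCGGGCTGA"
print(round(gc_content(seq), 8))  # 0.52173913, matching the listed gc_content
```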
wyxu/Genome_database
[ "task_categories:conversational", "task_categories:fill-mask", "size_categories:100M<n<1B", "size_categories:10M<n<100M", "language:en", "biology", "medical", "region:us" ]
2023-06-12T04:42:03+00:00
{"language": ["en"], "size_categories": ["100M<n<1B", "10M<n<100M"], "task_categories": ["conversational", "fill-mask"], "pretty_name": "genome database", "tags": ["biology", "medical"], "viewer": false}
2023-06-19T08:52:47+00:00
ba2d3864e4d9f7d2206a6f177fd4516536116c62
# Dataset Card for "books3_basic_paragraphs" the_pile books3, books with a SMOG grade difficulty estimate of 6.5 or under. Split into paragraphs, with most 'non-paragraphs' (titles, tables of contents, etc.) filtered out.
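The SMOG grade used for filtering is derived from a text's polysyllable density. A rough sketch of the standard formula, with a naive vowel-run syllable counter (illustrative only; not necessarily the exact tooling used to build this dataset):

```python
import math
import re

def count_syllables(word: str) -> int:
    """Naive syllable count: number of runs of vowels (rough heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_index(text: str) -> float:
    """SMOG grade: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

print(round(smog_index("The cat sat. The dog ran. It was fun."), 2))  # 3.13 (no polysyllables)
```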
skeskinen/books3_basic_paragraphs
[ "region:us" ]
2023-06-12T04:47:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "book", "dtype": "string"}, {"name": "pos", "dtype": "float64"}, {"name": "smog_index", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1366299770, "num_examples": 6639751}], "download_size": 676098743, "dataset_size": 1366299770}}
2023-06-14T11:55:02+00:00
4cbadc1b683171bc759be812c4e36bc3a823d55f
edwinjue/311-data-last-month
[ "license:gpl-3.0", "region:us" ]
2023-06-12T04:51:41+00:00
{"license": "gpl-3.0"}
2023-06-12T04:52:58+00:00
21ba1961ddd6c634ac37265ea17a877d2723af5e
# Dataset Card for "refinedweb-3m" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mattymchen/refinedweb-3m
[ "region:us" ]
2023-06-12T04:58:49+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7834920949, "num_examples": 3000000}], "download_size": 4904877808, "dataset_size": 7834920949}}
2023-06-12T05:01:04+00:00
0619705023ff55bca519d2f142b7ff679f6297da
# Dataset Card for "sustainability_ner" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
maren-hugg/sustainability_ner
[ "region:us" ]
2023-06-12T05:09:47+00:00
{"dataset_info": {"features": [{"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}, {"name": "tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 265782.5, "num_examples": 500}, {"name": "test", "num_bytes": 132891.25, "num_examples": 250}, {"name": "validation", "num_bytes": 132891.25, "num_examples": 250}], "download_size": 127046, "dataset_size": 531565.0}}
2023-06-12T05:09:51+00:00
280fc68ad3b156209af755eece6c677f9eb3fcf3
JennnDexter/pokemon-lora
[ "license:unknown", "region:us" ]
2023-06-12T05:26:58+00:00
{"license": "unknown"}
2023-06-12T05:26:58+00:00
b7d58d2fdd55fd749f2113bde31d39ddf6468f22
# Dataset Card for "openassistant-oasst1-flattened-filtered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
flozi00/openassistant-oasst1-flattened-filtered
[ "region:us" ]
2023-06-12T05:38:31+00:00
{"dataset_info": {"features": [{"name": "conversations", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36851090.044335596, "num_examples": 18266}], "download_size": 13653824, "dataset_size": 36851090.044335596}}
2023-07-05T18:13:07+00:00
559eb5884245ba94ca81ea5116aee7600455df22
- Please also refer to the original repository `fukanarita/newschat-with-impression` [[github]](https://github.com/fukanarita/newschat-with-impression).
fujiki/newschat-with-impression
[ "license:mit", "region:us" ]
2023-06-12T05:51:02+00:00
{"license": "mit"}
2023-06-12T07:53:20+00:00
f31c4b48432df90c28c4b2f9a2edb8016461c060
# Dataset Card for "books3_lowgrade_paragraphs" the_pile books3, books with a SMOG grade difficulty estimate between 6.6 and 7.1. Split into paragraphs, with most 'non-paragraphs' (titles, tables of contents, etc.) filtered out. For easier books, see books3_basic_paragraphs
skeskinen/books3_lowgrade_paragraphs
[ "region:us" ]
2023-06-12T05:58:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "book", "dtype": "string"}, {"name": "pos", "dtype": "float64"}, {"name": "smog_index", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 6426499179, "num_examples": 29542059}], "download_size": 3274999825, "dataset_size": 6426499179}}
2023-06-12T08:04:14+00:00
cb923dce7a874984bb42d66dd85aa7b78e2c6dc5
# Dataset Card for "reddit_casual_conversation_for_alpaca_lora" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
binhgiangnguyendanh/reddit_casual_conversation_for_alpaca_lora
[ "region:us" ]
2023-06-12T06:01:00+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7138483, "num_examples": 8686}], "download_size": 2583834, "dataset_size": 7138483}}
2023-06-26T09:20:53+00:00
32863c5ecc7d57d947a0441e795921f7cc1a8881
This version of the Multimodal Instruction Data includes diverse, high-quality downstream data. It contains about 2M samples from VQA, detection, detailed image description, and other tasks. ``` {'aokvqa_qa': 17056, 'vsr_tof': 7680, 'flickr30k_caption': 158914, 'esnlive_evil': 401717, 'nocaps_caption': 45000, 'okvqa_qg': 9009, 'okvqa_qa': 9009, 'openvqa_qa': 34602, 'minigpt4_description': 3439, 'chart2image_chart': 8305, 'minigpt4_detailed-qa': 17195, 'vqav2_qa': 443757, 'llava_detailed-qa': 356753, 'vqav2_qg': 443757, 'semart_art': 20313, 'coco_caption': 591753, 'refcoco_detector': 8540, 'visdial_QA': 1000, 'gqa_qa': 943000, 'scienceqa_scienceqa': 6218, 'iconqa_qa': 29859, 'textcaps_caption': 109765} ``` The above statistics can be used for weighted random sampling of the data while training your visual-language models. For more details about our LMEye project, please see https://github.com/YunxinLi/LingCloud We will present an LMEye variant with a new architecture, trained on a large amount of instruction data, in the next week.
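As a sketch, the per-task counts above can drive weighted random sampling in plain Python (only a subset of the tasks is shown; sampling each task in proportion to its share of the data is one possible choice, not the project's prescribed recipe):

```python
import random

# Per-task sample counts (a subset of the statistics above).
task_counts = {
    "aokvqa_qa": 17056,
    "vsr_tof": 7680,
    "flickr30k_caption": 158914,
    "coco_caption": 591753,
}

tasks = list(task_counts)
total = sum(task_counts.values())
# Each task is drawn proportionally to its share of the data.
weights = [task_counts[t] / total for t in tasks]

random.seed(0)
batch_tasks = random.choices(tasks, weights=weights, k=8)
print(batch_tasks)
```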
YunxinLi/Multimodal_Insturction_Data_V2
[ "license:apache-2.0", "region:us" ]
2023-06-12T06:08:30+00:00
{"license": "apache-2.0"}
2023-06-12T06:22:55+00:00
67f37a66b6d9f5c6cfc881f419a34bf8853b7626
This dataset contains ~508k prompt-instruction pairs with high-quality responses. It was synthetically created from a subset of UltraChat prompts. It does not contain any alignment-focused responses or NSFW content. Licensed under apache-2.0
ignmilton/ign_clean_instruct_dataset_500k
[ "task_categories:question-answering", "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "code", "doi:10.57967/hf/1576", "region:us" ]
2023-06-12T06:12:30+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "conversational"], "pretty_name": "ign_500k", "tags": ["code"]}
2023-06-13T06:45:51+00:00
4bd49c7d02b62f4fe33ecbd41a28892adf3ecd24
xiemoxiaoshaso/image
[ "license:openrail", "region:us" ]
2023-06-12T06:15:45+00:00
{"license": "openrail"}
2023-06-25T08:18:05+00:00
7d1e8ab7bf9a902488e473700bd7bbe4c05e0216
# Neuroscience Journals Dataset ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** apache-2.0 ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses Journal Classification ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. 
--> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. 
--> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
PenguinMan/ARXIV
[ "task_categories:text-classification", "task_categories:feature-extraction", "language:en", "license:apache-2.0", "medical", "region:us" ]
2023-06-12T06:15:49+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification", "feature-extraction"], "tags": ["medical"]}
2024-01-28T14:40:45+00:00
501b0b64060a811b2a705fe97882254407298474
# Dataset Card for "reddit-ah-dialogturns-annotations" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Deojoandco/reddit-ah-dialogturns-annotations
[ "region:us" ]
2023-06-12T06:17:00+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "annotation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3772164, "num_examples": 16055}, {"name": "validation", "num_bytes": 376937, "num_examples": 1641}, {"name": "test", "num_bytes": 360334, "num_examples": 1559}], "download_size": 0, "dataset_size": 4509435}}
2023-06-15T02:57:27+00:00
a37ded87211c7b78072864a9db0596a8c47395cb
# Dataset Card for "text_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RoopamSadh/text_data
[ "region:us" ]
2023-06-12T06:24:36+00:00
{"dataset_info": {"features": [{"name": "quote", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "tags", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3652, "num_examples": 5}], "download_size": 7851, "dataset_size": 3652}}
2023-06-12T06:29:51+00:00
8f4f58f2416d1a618082c7bba64d74f55cd86d2a
Coaso/test
[ "license:cc-by-sa-3.0", "region:us" ]
2023-06-12T06:59:36+00:00
{"license": "cc-by-sa-3.0"}
2023-06-12T07:02:22+00:00
d0c5240af76205492cde2ad9c690385f820d08f1
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Nio520/test_4pd_nio
[ "task_categories:question-answering", "size_categories:n<1K", "language:en", "license:apache-2.0", "region:us" ]
2023-06-12T07:12:41+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "test_nio"}
2023-06-12T08:06:19+00:00
05cd1b7d77d288d82da32518e034750c41136660
https://github.com/EducationalTestingService/sarcasm ``` @inproceedings{ghosh-etal-2020-report, title = "A Report on the 2020 Sarcasm Detection Shared Task", author = "Ghosh, Debanjan and Vajpayee, Avijit and Muresan, Smaranda", booktitle = "Proceedings of the Second Workshop on Figurative Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.figlang-1.1", doi = "10.18653/v1/2020.figlang-1.1", pages = "1--11", abstract = "Detecting sarcasm and verbal irony is critical for understanding people{'}s actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a popular research problem in natural language processing. As the community working on computational approaches for sarcasm detection is growing, it is imperative to conduct benchmarking studies to analyze the current state-of-the-art, facilitating progress in this area. We report on the shared task on sarcasm detection we conducted as a part of the 2nd Workshop on Figurative Language Processing (FigLang 2020) at ACL 2020.", } ```
tasksource/figlang2020-sarcasm
[ "language:en", "region:us" ]
2023-06-12T07:16:21+00:00
{"language": ["en"]}
2023-06-12T07:22:07+00:00
9e6e8d0e67d97c36417e5cc7bb7958d41ca2db21
# Dataset Card for COPA-ca ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Example](#example) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Website:** https://zenodo.org/record/7973926 - **Point of Contact:** [email protected] ### Dataset Summary The COPA-ca dataset (Choice of plausible alternatives in Catalan) is a professional translation of the English COPA dataset into Catalan, commissioned by BSC LangTech Unit. The dataset consists of 1000 premises, each given a question and two choices with a label encoding which of the choices is more plausible given the annotator. The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the following features: 'premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'. This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>. 
### Supported Tasks and Leaderboards

Commonsense reasoning, Language Model

### Languages

The dataset is in Catalan (`ca-ES`).

## Dataset Structure

### Data Instances

Three JSON files, one for each split.

### Example:

<pre>
{
  "premise": "El meu cos va dibuixar una ombra damunt l'herba.",
  "choice1": "El sol estava sortint.",
  "choice2": "L'herba estava tallada.",
  "question": "cause",
  "label": 0,
  "idx": 1,
  "changed": false
}
{
  "premise": "La dona va tolerar el comportament difícil de la seva amiga.",
  "choice1": "La dona sabia que la seva amiga estava passant per un moment difícil.",
  "choice2": "A la dona li va semblar que la seva amiga s'aprofitava de la seva amabilitat.",
  "question": "cause",
  "label": 0,
  "idx": 2,
  "changed": false
}
</pre>

### Data Fields

- premise: a string feature.
- choice1: a string feature.
- choice2: a string feature.
- question: a string feature.
- label: an int64 feature.
- idx: an int32 feature.
- changed: a bool feature.

### Data Splits

* copa-ca.train.jsonl: 400 examples
* copa-ca.val.jsonl: 100 examples
* copa-ca.test.jsonl: 500 examples

## Dataset Creation

### Curation Rationale

We created this dataset to contribute to the development of language models in Catalan, a low-resource language.

### Source Data

[COPA](https://people.ict.usc.edu/~gordon/copa.html).

#### Initial Data Collection and Normalization

This dataset is a professional translation of the English COPA dataset into Catalan, commissioned by the BSC LangTech Unit within Projecte AINA.

#### Who are the source language producers?

For more information on how COPA was created, refer to the paper (Roemmele et al. 2011), or visit [COPA's webpage](https://people.ict.usc.edu/~gordon/copa.html).

### Annotations

#### Annotation process

[N/A]

#### Who are the annotators?

This is a professional translation of the English COPA dataset and its annotations.

### Personal and Sensitive Information

No personal or sensitive information included.
## Considerations for Using the Data ### Social Impact of Dataset We hope this dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators Language Technologies Unit at the Barcelona Supercomputing Center ([email protected]) This work was funded by the [Departament de la VicepresidΓ¨ncia i de PolΓ­tiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>. ### Citation Information [DOI](https://doi.org/10.5281/zenodo.8124398) ### Contributions [N/A]
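As a usage sketch (the split filenames are taken from the Data Splits section above), each JSONL file can be read with the standard library alone:

```python
import json

def load_jsonl(path):
    """Load one COPA-ca split: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# e.g. records = load_jsonl("copa-ca.train.jsonl")

# The example record from this card, used here in place of a real file:
sample = json.loads(
    '{"premise": "El meu cos va dibuixar una ombra damunt l\'herba.", '
    '"choice1": "El sol estava sortint.", '
    '"choice2": "L\'herba estava tallada.", '
    '"question": "cause", "label": 0, "idx": 1, "changed": false}'
)
# `label` indexes the more plausible alternative: 0 -> choice1, 1 -> choice2.
plausible = sample["choice1"] if sample["label"] == 0 else sample["choice2"]
print(plausible)  # El sol estava sortint.
```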
projecte-aina/COPA-ca
[ "task_ids:natural-language-inference", "annotations_creators:professional translators", "multilinguality:monolingual", "language:ca", "license:cc-by-sa-4.0", "causal-reasoning", "textual-entailment", "commonsense-reasoning", "region:us" ]
2023-06-12T07:18:32+00:00
{"annotations_creators": ["professional translators"], "language": ["ca"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "task_ids": ["natural-language-inference"], "pretty_name": "copa-ca", "tags": ["causal-reasoning", "textual-entailment", "commonsense-reasoning"]}
2023-11-25T05:52:05+00:00
8b55110dfa0e53eb532686ba2110f10a45f4c114
yajun06/eee
[ "license:openrail", "region:us" ]
2023-06-12T07:30:33+00:00
{"license": "openrail"}
2023-06-12T07:30:33+00:00
1f4fbd35d29403eac6543c83bdaa83d537ecbbc7
Demo to save data from a Space to a Dataset. Goal is to provide reusable snippets of code. - Documentation: https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads - Space: https://huggingface.co/spaces/Wauplin/space_to_dataset_saver/ - JSON dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-json - Image dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image - Image (zipped) dataset: https://huggingface.co/datasets/Wauplin/example-space-to-dataset-image-zip
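A minimal sketch of the pattern those links demonstrate — persisting records to a local folder and letting `huggingface_hub`'s `CommitScheduler` push that folder to the dataset repo on a timer. The folder name, repo id, and five-minute interval are illustrative; see the scheduled-uploads guide above for the authoritative API:

```python
import json
import uuid
from pathlib import Path

DATA_DIR = Path("json_dataset")   # local folder the scheduler will sync
DATA_DIR.mkdir(exist_ok=True)

def save_json(record: dict) -> Path:
    """Write one record as its own JSON file inside the synced folder."""
    path = DATA_DIR / f"{uuid.uuid4()}.json"
    path.write_text(json.dumps(record), encoding="utf-8")
    return path

def start_scheduler():
    """Push pending files every 5 minutes (needs `huggingface_hub` and a token)."""
    from huggingface_hub import CommitScheduler
    return CommitScheduler(
        repo_id="Wauplin/example-space-to-dataset-json",  # target dataset repo
        repo_type="dataset",
        folder_path=DATA_DIR,
        every=5,
    )

saved = save_json({"demo": "hello"})
```

In a real Space you would also wrap each write in `scheduler.lock` so a half-written file is never committed.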
Wauplin/example-space-to-dataset-image
[ "region:us" ]
2023-06-12T07:38:33+00:00
{}
2024-01-21T22:28:57+00:00
299207d7b113a1e0cab44b4bfd05e349f22d4516
Thouph/animation_dataset
[ "license:mit", "region:us" ]
2023-06-12T07:52:27+00:00
{"license": "mit"}
2023-06-12T07:57:57+00:00
45be76be635fde0d76356529d6f7327e0108a2d4
heath1989/sd_prepare
[ "license:apache-2.0", "region:us" ]
2023-06-12T07:56:09+00:00
{"license": "apache-2.0"}
2023-11-02T02:00:13+00:00
fa310e5dcc4570ac4ad79169a55e5136284dbcb3
thanhnguyentung/demo-dataset
[ "license:mit", "region:us" ]
2023-06-12T08:02:40+00:00
{"license": "mit"}
2023-06-12T08:07:27+00:00
82ecc40361df021acd5036233aa4889d6ccd1ce0
lexing/kun
[ "license:openrail", "region:us" ]
2023-06-12T08:04:39+00:00
{"license": "openrail"}
2023-06-12T08:13:42+00:00
7fa455d4bf085c4e500d6d73423eb0a69dcf4e5e
# Dataset Card for "683b3b1d" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/683b3b1d
[ "region:us" ]
2023-06-12T08:16:23+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1340, "dataset_size": 184}}
2023-06-12T08:16:24+00:00
f394124aa0e596ab0e61158f07356d0d100f2d43
czczycz/QABot
[ "license:openrail", "region:us" ]
2023-06-12T08:19:38+00:00
{"license": "openrail"}
2023-06-12T08:21:10+00:00
9fb229d62e6536b808ae9011f532de6867dea82e
# Dataset Card for "Assistant" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
divi7007/Assistant
[ "region:us" ]
2023-06-12T08:26:56+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 21631104, "num_examples": 528}], "download_size": 7333858, "dataset_size": 21631104}}
2023-06-12T08:27:01+00:00
201ba8db42dc4dae9e4cc4632e2ca4aa795a0afc
# Dataset Card for "OCNLI_instruction1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hongboyang/OCNLI_instruction1
[ "region:us" ]
2023-06-12T08:47:54+00:00
{"dataset_info": {"features": [{"name": "INPUT", "dtype": "string"}, {"name": "OPTIONS", "sequence": "string"}, {"name": "TARGET", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10222102, "num_examples": 50437}], "download_size": 2346564, "dataset_size": 10222102}}
2023-06-12T09:03:48+00:00
0af26a0ddb9b6d8c661654926131ae658d0a3d2d
# Reddit US UK Subreddits Dataset

This repository contains data from Reddit, from the subreddits of the **fifty (50) US states** and the **ten (10) UK cities** listed below:

1. London
2. Manchester
3. Birmingham
4. Leeds-Bradford
5. Glasgow
6. Southampton-Portsmouth
7. Liverpool
8. Newcastle
9. Nottingham
10. Sheffield

r/CasualUK is also included in this dataset.

All data are sourced from: https://academictorrents.com/details/c398a571976c78d346c325bd75c47b82edf6124e

The data spans from the start of 2005-06 to the end of 2022-12. The suffix "submissions" denotes that the data contains posts, and the suffix "comments" denotes the comments in the various subreddits.

The data is compressed in the zst format; the uncompressed raw data is in JSON format.
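A sketch of streaming those archives, assuming the third-party `zstandard` package and one JSON object per line (the field names below are typical of Reddit dumps, not guaranteed):

```python
import io
import json

def iter_zst_jsonl(path):
    """Yield one parsed JSON object per line from a .zst-compressed dump.

    Needs `pip install zstandard`; imported lazily so the rest of this
    sketch runs without it.
    """
    import zstandard
    with open(path, "rb") as fh:
        # Large window size: these dumps are often compressed with long windows.
        reader = zstandard.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            if line.strip():
                yield json.loads(line)

# What one submission line might look like once parsed (illustrative fields):
post = json.loads(
    '{"subreddit": "CasualUK", "title": "Morning thread", "created_utc": 1669852800}'
)
print(post["subreddit"])  # CasualUK
```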
alujjdnd/Reddit-US-UK
[ "language:en", "license:mit", "region:us" ]
2023-06-12T09:01:40+00:00
{"language": ["en"], "license": "mit", "datasets": ["reddit"]}
2023-06-26T11:22:44+00:00
f601245118c619703f3a4f2597d956da05075eaf
# Dataset Card for "books3_basic_sentenses_paraphrased" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
skeskinen/books3_basic_sentenses_paraphrased
[ "region:us" ]
2023-06-12T09:03:35+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "book", "dtype": "string"}, {"name": "pos", "dtype": "float64"}, {"name": "smog_index", "dtype": "float64"}, {"name": "paraphrase", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115227959, "num_examples": 670204}], "download_size": 44941036, "dataset_size": 115227959}}
2023-06-13T19:41:11+00:00
25dc987c849007429b3b019b35bc9686dcc2821c
# Helmet Detection Dataset

The dataset consists of photographs of construction workers at work. The dataset provides helmet detection using bounding boxes and addresses public safety tasks such as ensuring compliance with safety regulations, automating the identification of rule violations, and reducing accidents during construction work.

# Get the dataset

### This is just an example of the data

Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=helmet_detection) to discuss your requirements, learn about the price and buy the dataset.

![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2Fc7a46d2407e8aa245f107524fcaecff5%2Fhelmets.png?generation=1686295342860797&alt=media)

# Dataset structure

- **img** - contains the original images of construction workers
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains coordinates of the bounding boxes and labels (helmet, no_helmet) created for the original photos

# Data Format

Each image from the `img` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes and labels for helmet detection. For each point, the x and y coordinates are provided.

# Example of XML file structure

![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2Fce2115cd583ab7bc4e1d3d2749b4d7ad%2Fcarbon%20(7).png?generation=1686295970420156&alt=media)

# A helmet detection dataset can be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=helmet_detection) provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets** TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
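The screenshot above stands in for the real `annotations.xml`, so its exact schema is not shown here. The sketch below assumes a CVAT-style layout (`box` elements with a label and corner coordinates); adjust the tag and attribute names to match the actual file:

```python
import xml.etree.ElementTree as ET

# Illustrative annotations.xml fragment (CVAT-style; an assumption,
# not the vendor's exact schema).
XML = """
<annotations>
  <image id="0" name="worker_01.jpg" width="1920" height="1080">
    <box label="helmet" xtl="512.0" ytl="128.0" xbr="640.0" ybr="256.0"/>
    <box label="no_helmet" xtl="900.0" ytl="300.0" xbr="1010.0" ybr="420.0"/>
  </image>
</annotations>
"""

def parse_boxes(xml_text):
    """Return (image_name, label, (xtl, ytl, xbr, ybr)) tuples."""
    root = ET.fromstring(xml_text)
    out = []
    for image in root.iter("image"):
        for box in image.iter("box"):
            coords = tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
            out.append((image.get("name"), box.get("label"), coords))
    return out

boxes = parse_boxes(XML)
print(boxes[0])  # ('worker_01.jpg', 'helmet', (512.0, 128.0, 640.0, 256.0))
```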
TrainingDataPro/helmet_detection
[ "task_categories:image-classification", "language:en", "license:cc-by-nc-nd-4.0", "code", "region:us" ]
2023-06-12T09:16:40+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-classification"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "bboxes", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56575701, "num_examples": 46}], "download_size": 56584366, "dataset_size": 56575701}, "tags": ["code"]}
2023-09-14T15:43:53+00:00
3577a5cccd7005447adb9643364766e988dc3ee0
sakulchai/insdataset
[ "license:mit", "region:us" ]
2023-06-12T09:28:38+00:00
{"license": "mit"}
2023-06-16T07:20:21+00:00
0ab359e5031512e917350484e7a51e921d4d3555
# Dataset Card for "ASR-Data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bvkbharadwaj/ASR-Data
[ "region:us" ]
2023-06-12T09:38:21+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 914739.0, "num_examples": 5}], "download_size": 894112, "dataset_size": 914739.0}}
2023-06-26T11:23:11+00:00
f0668a2859f1fcacc58efe76e299854a55775f4f
# Dataset Card for "test_jules_cat_2023-06-12-10-39-03" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
IDQO/test_jules_cat_2023-06-12-10-39-03
[ "region:us" ]
2023-06-12T09:39:03+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Automation & Process Control ", "1": "Batteries & Chargers ", "2": "Cable, Wire & Cable Assemblies ", "3": "Chemicals & Adhesives ", "4": "Company Fashion", "5": "Connectors ", "6": "Electrical ", "7": "Eye and face protection", "8": "Fall protection", "9": "First aid and fire protection", "10": "Foot protection", "11": "Hand protection", "12": "Head protection", "13": "Hearing protection", "14": "Hydraulics", "15": "Hygiene & maintenance", "16": "LED Lighting Components ", "17": "Lighting Products ", "18": "Passive Components ", "19": "Power & Line Protection ", "20": "Power Tools", "21": "Power Transmission", "22": "Protective Wear", "23": "Semiconductors - Discretes ", "24": "Semiconductors - ICs ", "25": "Sensors & Transducers ", "26": "Signaling", "27": "Storage and Tools", "28": "Switches & Relays ", "29": "Wireless Modules & Adaptors ", "30": "Workwear"}}}}], "splits": [{"name": "train", "num_bytes": 260560.0, "num_examples": 2400}, {"name": "test", "num_bytes": 65140.0, "num_examples": 600}], "download_size": 241386, "dataset_size": 325700.0}}
2023-06-12T09:39:08+00:00
98a877094e31e70c500990c90711a6015df0e812
# Dataset Card for "CMRC2018_instruction1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hongboyang/CMRC2018_instruction1
[ "region:us" ]
2023-06-12T09:56:22+00:00
{"dataset_info": {"features": [{"name": "INPUT", "dtype": "string"}, {"name": "TARGET", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17133521, "num_examples": 10142}], "download_size": 4142597, "dataset_size": 17133521}}
2023-06-12T09:56:28+00:00
36c4ee326a269e418d6b9310b9da4ff8d2d2a2a3
# Dataset Card for "LCSTS_instruction1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hongboyang/LCSTS_instruction1
[ "region:us" ]
2023-06-12T10:22:01+00:00
{"dataset_info": {"features": [{"name": "INPUT", "dtype": "string"}, {"name": "TARGET", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1128722053, "num_examples": 2400591}], "download_size": 693529602, "dataset_size": 1128722053}}
2023-06-12T10:34:14+00:00
f8ae96155ac234a54aaf2b70ca4e02d22cfe31af
# Dataset Card for "medicationqa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
truehealth/medicationqa
[ "region:us" ]
2023-06-12T10:28:52+00:00
{"dataset_info": {"features": [{"name": "Question", "dtype": "string"}, {"name": "Focus (Drug)", "dtype": "string"}, {"name": "Question Type", "dtype": "string"}, {"name": "Answer", "dtype": "string"}, {"name": "Section Title", "dtype": "string"}, {"name": "URL", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403030, "num_examples": 690}], "download_size": 0, "dataset_size": 403030}}
2023-06-12T13:24:14+00:00
bfa0c50bcc8f01ab8bc3555fd2fda6d0c784271b
# Dataset Card for RTE_TH

### Dataset Description

This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to assess the quality of each Thai translation.
Patt/RTE_TH
[ "task_categories:text-classification", "language:en", "language:th", "license:cc-by-sa-4.0", "arxiv:1907.04307", "region:us" ]
2023-06-12T10:40:00+00:00
{"language": ["en", "th"], "license": "cc-by-sa-4.0", "task_categories": ["text-classification"]}
2024-01-15T17:31:10+00:00
8a3f408655fe5dce6af95ee9e0360045736d69b0
lhk/test_dataset
[ "license:cc-by-2.0", "region:us" ]
2023-06-12T10:56:45+00:00
{"license": "cc-by-2.0"}
2023-06-12T10:57:51+00:00
e1abb2c6f2169c5fb0cb2f290edf78d2243bb855
# Dataset Card for Dataset Name

## Dataset Description

- **Homepage: m2sodai.jonggyu.me**
- **Repository: temporarily private**
- **Paper: under review**
- **Point of Contact: jgjang0123 [at] gmail [dot] com**

### Dataset Summary

The M<sup>2</sup>SODAI dataset is the first multi-modal, bounding-box-labeled, and synchronized aerial dataset.

Sensors used:
- Hyperspectral image
- RGB image

## Dataset Structure

```md
data
├── label.txt
├── train
│   ├── 1.jpg
│   ├── 1.mat
│   ├── 1.json
│   └── ...
├── val
│   ├── 0.jpg
│   ├── 0.mat
│   └── 0.json
└── test
    ├── 17.jpg
    ├── 17.mat
    └── 17.json
```

### Data Instances

For object detection, we annotated bounding boxes on the floating matter and ships in the RGB and HSI data. We note that the floating matter includes buoys, rescue tubes, lifeboats, etc. Since small objects are hard to recognize, we referred to the infrared visualization map of the HSI data for bounding box annotation.

### Data Splits

After the data processing, we obtained 1,257 pairs of synchronized RGB and HSI data, where the total number of instances in the dataset is 11,892. For experiments, we randomly divided the dataset into 1,007 training data, 125 validation data, and 125 test data.

## Dataset Creation

### Source Data

Our focus is to create a public dataset consisting of synchronized maritime aerial RGB and HSI data. To this end, we built a data collection system by leveraging a single-engine utility aircraft (Cessna Grand Caravan 208B). An HSI sensor (AsiaFENIX, Specim, Oulu, Finland) and an RGB sensor (DMC, Z/I Imaging, Aalen, Germany) are mounted on the bottom of the aircraft, facing downward. The raw data was acquired through 59 flight strips in 12 flight measurement campaigns, which cover a total area of 299.7 km<sup>2</sup>. During the flight strips, the aircraft maintained a speed of 260 km/h and an altitude of 1 km.
The table below shows the detailed specifications of the sensors used in the data collection. The HSI sensor (AsiaFENIX) scans the wavelength range from 400 nm to 1000 nm in steps of 4.5 nm, for a total of 127 spectral bands. This wavelength range includes the visible spectrum (VIS) and the near-infrared (NIR) spectrum, both widely used for remote sensing and machine vision tasks. The RGB sensor (DMC) captures high-resolution RGB data in three channels: Red (590-675 nm), Green (500-650 nm), and Blue (400-580 nm). RGB and HSI data are collected simultaneously; the spatial resolutions of the RGB and HSI sensors are approximately 0.1 m and 0.7 m, respectively.

| | HSI sensor | RGB sensor |
|---------------|------------------------------------------------|----------------------------------------------------|
| Name | AsiaFENIX (@Specim) | DMC (@Z/I Imaging) |
| Spectrum | 400-1000 nm, 127 channels (in steps of 4.5 nm) | Blue: 400-580 nm; Green: 500-650 nm; Red: 590-675 nm |
| Altitude | 1 km | 1 km |
| Field of View | 40 degrees | 74 degrees |
| Resolution | 0.7 m | 0.1 m |

<img src="https://s3.amazonaws.com/moonup/production/uploads/6487107c86b4bc5a09f9d62e/ihWLuijyA1j38RbSoEC7_.png" width="50%">

Illustration of the collected raw data. We collected the data at twelve spots. The first row shows the collected raw RGB data. The second and third rows show the overall HSI data and the collected raw HSI data in each flight strip. Since the sensors have different fields of view (FoV), the raw RGB data and HSI data have different coordinates.
However, the problem is that the coordinates of the collected RGB and HSI pairs are not matched. Hence, we employ an image registration method to correct pixel offsets between RGB and HSI pairs. The figure below depicts our data processing procedure.

<img src="https://s3.amazonaws.com/moonup/production/uploads/6487107c86b4bc5a09f9d62e/f4XBxM5Q1Qij7T4UkCf34.png" width="50%">

1. We transform the raw RGB and HSI data into grayscale images.
2. We apply a contrast-limited adaptive histogram equalization (CLAHE)-based contrast enhancer to the grayscale RGB data and grayscale HSI data.
3. To estimate the homography matrix between the enhanced RGB data and enhanced HSI data, we apply the oriented FAST and rotated BRIEF (ORB) feature descriptor to both, thereby extracting features from the data.
4. We use a brute-force matcher to find matches among the ORB features; the homography matrix is then computed via least-squares optimization to synchronize the matched features.
5. We crop the registered data to the same size and generate the corresponding bounding box annotation data.

### Personal and Sensitive Information

There is no personal or sensitive information in our dataset.

### Citation Information

[N/A]
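As a usage sketch, the per-sample files in the directory layout above can be paired by stem (a toy copy of the layout is built in a temporary directory here; actually reading the `.mat` HSI cubes would additionally need e.g. `scipy.io.loadmat`, which is not shown):

```python
import tempfile
from pathlib import Path

def collect_samples(split_dir: Path) -> dict:
    """Group each sample's RGB (.jpg), HSI (.mat) and label (.json) files by stem."""
    samples = {}
    for f in sorted(split_dir.iterdir()):
        samples.setdefault(f.stem, {})[f.suffix.lstrip(".")] = f
    # Keep only complete (.jpg, .mat, .json) triplets.
    return {stem: files for stem, files in samples.items()
            if {"jpg", "mat", "json"}.issubset(files)}

# Recreate a toy version of the documented layout.
root = Path(tempfile.mkdtemp()) / "data" / "train"
root.mkdir(parents=True)
for ext in ("jpg", "mat", "json"):
    (root / f"1.{ext}").touch()
(root / "2.jpg").touch()  # incomplete sample: missing .mat and .json

pairs = collect_samples(root)
print(sorted(pairs))  # ['1']
```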
jgjang0123/m2sodai
[ "task_categories:object-detection", "size_categories:1B<n<10B", "license:mit", "object detection", "doi:10.57967/hf/0986", "region:us" ]
2023-06-12T11:37:37+00:00
{"license": "mit", "size_categories": ["1B<n<10B"], "task_categories": ["object-detection"], "pretty_name": "M2SODAI", "tags": ["object detection"]}
2023-06-14T09:52:26+00:00
d8cc7c83aad1eaf4b17c72eb3fff428e20fd4541
deepghs/anime_ch_skin_color
[ "task_categories:image-classification", "size_categories:10K<n<100K", "license:mit", "art", "region:us" ]
2023-06-12T11:40:11+00:00
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "tags": ["art"]}
2023-06-17T13:14:19+00:00
7cf53d941539bdf7f6c9ea412222407f77922320
# Dataset Card for "sam-controlnet-original-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
baptistecolle/sam-controlnet-original-test
[ "region:us" ]
2023-06-12T11:43:41+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "filepath", "dtype": "string"}, {"name": "sentids", "list": "int32"}, {"name": "filename", "dtype": "string"}, {"name": "imgid", "dtype": "int32"}, {"name": "split", "dtype": "string"}, {"name": "sentences", "struct": [{"name": "tokens", "list": "string"}, {"name": "raw", "dtype": "string"}, {"name": "imgid", "dtype": "int32"}, {"name": "sentid", "dtype": "int32"}]}, {"name": "cocoid", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 33794447.590563774, "num_examples": 200}], "download_size": 33721904, "dataset_size": 33794447.590563774}}
2023-06-12T11:44:08+00:00
28cbe5b4d2eb7f05ef2d3a89e6fc8a202b653a87
# Dataset Card for "mental_health_dataset_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
quocanh34/mental_health_dataset_1
[ "region:us" ]
2023-06-12T11:46:32+00:00
{"dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "created_utc", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_comments", "dtype": "int64"}, {"name": "score", "dtype": "int64"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "upvote_ratio", "dtype": "float64"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119442477, "num_examples": 151288}], "download_size": 67226682, "dataset_size": 119442477}}
2023-06-12T11:46:48+00:00
08c1884bbbc128c06dde934c523dda5104021ca4
# Dataset Card for "sam-controlnet-test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
baptistecolle/sam-controlnet-test
[ "region:us" ]
2023-06-12T11:48:28+00:00
{"dataset_info": {"features": [{"name": "masks", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 28821400.0, "num_examples": 200}], "download_size": 0, "dataset_size": 28821400.0}}
2023-06-12T13:16:57+00:00
7ab99c93cc2beca52327e7efd66892c91b88ebf2
chuckchen/tokenizer-vocab
[ "license:creativeml-openrail-m", "region:us" ]
2023-06-12T11:50:33+00:00
{"license": "creativeml-openrail-m"}
2023-06-13T23:14:47+00:00
0da7bc28e39c765a788f745b8da476c163a15589
AaronnWolfe/TurmericRhizomes
[ "license:bigscience-openrail-m", "region:us" ]
2023-06-12T11:51:28+00:00
{"license": "bigscience-openrail-m"}
2023-06-12T11:51:28+00:00