Columns: sha (string, length 40) · text (string, 0–13.4M) · id (string, 2–117) · tags (list) · created_at (string, length 25) · metadata (string, 2–31.7M) · last_modified (string, length 25)
35183de294390f330e4c5202af7eb77269739d6b
# Dataset Card for "clinic-banking" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/clinic-banking
[ "region:us" ]
2022-11-28T13:28:09+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21001.221333333335, "num_examples": 262}, {"name": "test", "num_bytes": 9057.778666666667, "num_examples": 113}], "download_size": 16289, "dataset_size": 30059.0}}
2022-12-24T17:38:36+00:00
43a78c4b8460311443f9917b9658f2a81806117a
# Dataset Card for pcba_686978 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `pcba_686978` is a dataset included in [MoleculeNet](https://moleculenet.org/). PubChem BioAssay (PCBA) is a database consisting of biological activities of small molecules generated by high-throughput screening. We have chosen one of the larger tasks (ID 686978) as described in https://par.nsf.gov/servlets/purl/10168888. 
## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: Measured results (Active/Inactive) for bioassays ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using a random split. ### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford. ### Licensing Information This dataset was originally released under an MIT license. ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
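The 80/10/10 random split described above can be sketched in plain Python; the function name, seed, and fractions below are illustrative, not taken from the MoleculeNet/DeepChem code:

```python
import random

def random_split(items, seed=0, fracs=(0.8, 0.1, 0.1)):
    """Shuffle once with a fixed seed, then slice into train/valid/test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(fracs[0] * n)
    n_valid = int(fracs[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

train, valid, test = random_split(range(1000))
print(len(train), len(valid), len(test))  # 800 100 100
```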
zpn/pcba_686978
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-28T14:25:33+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "pcba_686978", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-12-09T20:30:45+00:00
880db59ed13fe5956e440b45b50818d1d3d2d161
# Dataset Card for "news-and-blogs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
justinian336/news-and-blogs
[ "region:us" ]
2022-11-28T14:38:06+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11785675.449565798, "num_examples": 2972}], "download_size": 7254802, "dataset_size": 11785675.449565798}}
2022-11-28T15:00:03+00:00
35752fdb4e82da3aa460e21a8f948cdd782f1c19
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models. # SAD The SAD dataset is our gold-standard dataset of tweets labelled for sarcasm. These tweets were scraped by observing the '#sarcasm' hashtag and then manually annotated by three annotators. There are a total of 1170 pairs of sarcastic and non-sarcastic tweets, each pair posted by the same user, resulting in a total of 2340 tweets annotated for sarcasm. These tweets can be accessed using the Twitter API so that they can be used for other experiments. # Data Fields - Tweet ID: The ID of the labelled tweet - Label: A label to denote if a given tweet is sarcastic # Data Splits - Train: 1638 - Valid: 351 - Test: 351
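The split sizes above correspond to a 70/15/15 partition of the 2340 tweets. A minimal sketch of such a partition (the seed and record fields are illustrative, not from the SAD release):

```python
import random

# 1170 sarcastic/non-sarcastic pairs -> 2340 labelled tweets in total.
tweets = [{"tweet_id": i, "label": i % 2} for i in range(2340)]
random.Random(42).shuffle(tweets)

train = tweets[:1638]          # 70% of 2340
valid = tweets[1638:1989]      # 15%
test = tweets[1989:]           # 15%
print(len(train), len(valid), len(test))  # 1638 351 351
```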
surrey-nlp/SAD
[ "task_categories:text-classification", "annotations_creators:Jordan Painter, Diptesh Kanojia", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-11-28T15:26:38+00:00
{"annotations_creators": ["Jordan Painter, Diptesh Kanojia"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset"}
2022-11-28T18:41:51+00:00
fb45c0b1afe4de7cb970d171eb4d19befb03fed6
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models. # S3D Summary The S3D dataset is our silver-standard dataset of 100,000 tweets labelled for sarcasm using weak supervision by our **BERTweet-sarcasm-combined** model. These tweets can be accessed using the Twitter API so that they can be used for other experiments. S3D contains 38879 tweets labelled as sarcastic and 61211 tweets labelled as not sarcastic. # Data Fields - Tweet ID: The ID of the labelled tweet - Label: A label to denote if a given tweet is sarcastic # Data Splits - Train: 70,000 - Valid: 15,000 - Test: 15,000
surrey-nlp/S3D-v1
[ "task_categories:text-classification", "annotations_creators:Jordan Painter, Diptesh Kanojia", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-11-28T15:27:35+00:00
{"annotations_creators": ["Jordan Painter, Diptesh Kanojia"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset"}
2022-11-28T18:46:48+00:00
fbdd92b26e8af4ed340bd295f23af0e7c23e40ea
KirbyShrine/bagbean
[ "license:cc-by-nd-4.0", "region:us" ]
2022-11-28T16:03:48+00:00
{"license": "cc-by-nd-4.0"}
2022-11-29T18:06:59+00:00
c06e746d78f1862a5262198efb26c4474f05ac8b
Replica dataset for vMAP, comprising 8 sequences; for each sequence we rendered two different trajectories, 00 and 01. Trajectory 00 is the same as in iMAP, while trajectory 01 is a different trajectory used for 2D novel view synthesis rendering evaluation. The habitat folder in each sequence contains 3D models of each composed object for 3D geometrical evaluation.
kxic/vMAP
[ "region:us" ]
2022-11-28T16:14:26+00:00
{}
2023-02-21T16:37:43+00:00
2fae623a15c66307ce286243782029520b569690
TEST
davanstrien/testgitupload
[ "arxiv:2211.10086", "region:us" ]
2022-11-28T16:55:47+00:00
{"tags": ["arxiv:2211.10086"]}
2023-01-18T12:10:40+00:00
eebd6592b6bd0874b2b6b72922acdf479de379bd
# DEBUG Dataset Card for "tweetyface" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [GitHub](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers) ### Dataset Summary DEBUG ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English, German ## Dataset Structure ### Data Instances #### english - **Size of downloaded dataset files:** 4.77 MB - **Size of the generated dataset:** 5.92 MB - **Total amount of disk used:** 4.77 MB #### german - **Size of downloaded dataset files:** 2.58 MB - **Size of the generated dataset:** 3.10 MB - **Total amount of disk used:** 2.59 MB An example of 'validation' looks as follows. ``` { "text": "@SpaceX @Space_Station About twice as much useful mass to orbit as rest of Earth combined", "label": "elonmusk", "idx": 1001283 } ``` ### Data Fields The data fields are the same among all splits and languages. - `text`: a `string` feature. 
- `label`: a classification label - `idx`: a `string` feature. - `ref_tweet`: a `bool` feature. - `reply_tweet`: a `bool` feature. ### Data Splits | name | train | validation | | ------- | ----: | ---------: | | english | 27857 | 6965 | | german | 10254 | 2564 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed]
ML-Projects-Kiel/tweetyface_debug
[ "task_categories:text-generation", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "language:en", "language:de", "license:apache-2.0", "region:us" ]
2022-11-28T17:01:37+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en", "de"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "tweetyface_debug", "tags": []}
2022-12-05T15:38:09+00:00
a9b01bbb313276a9bde9576048677e1b62bb4bcb
# Dataset Card for Quran Audio ## Content * 7 Imam full Quran recitations: 7×6236 wav files; a csv contains the text info for an 11k subset of short wav files * Tarteel.io user dataset: ~25k wav files; a csv contains the text info for an 18k subset of accepted-quality user recordings
ashraf-ali/quran-data
[ "task_categories:automatic-speech-recognition", "language_creators:Tarteel.io", "license:cc0-1.0", "region:us" ]
2022-11-28T17:14:02+00:00
{"language_creators": ["Tarteel.io"], "license": ["cc0-1.0"], "size_categories": {"ar": [43652]}, "task_categories": ["automatic-speech-recognition"], "task_ids": [], "paperswithcode_id": "quran-data", "pretty_name": "Quran Audio", "language_bcp47": ["ar"]}
2022-12-10T17:35:33+00:00
b84f3ba8f06cbc701168b376a27191afef981919
bfxwayne/data-docs
[ "license:apache-2.0", "region:us" ]
2022-11-28T17:37:48+00:00
{"license": "apache-2.0"}
2022-11-28T18:48:27+00:00
5cff673de826de3c9d90840c73682aa19c9308e1
# Dataset Card for "squad_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mlxen/squad_v1
[ "region:us" ]
2022-11-28T18:37:54+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 79346108, "num_examples": 87599}], "download_size": 14457366, "dataset_size": 79346108}}
2022-11-28T18:37:58+00:00
861bfb79319a7bb4fbb1a9c027eb8d3247e9fa48
AnabolAndi/Models_test
[ "license:other", "region:us" ]
2022-11-28T19:02:05+00:00
{"license": "other"}
2022-11-28T19:38:07+00:00
73077c37f5c69d24cfc8bd4b9b02960039cceafe
Textual Inversion embedding to create portraits in the style of the most famous portrait photographer ever, "Yousuf Karsh" Trigger word is "karsh" Example images generated with this prompt template: portrait photo of "character", highly detailed, by karsh ![05898-3647817921-portrait photo of The joker, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663304101-63507e5e18a4f616c9dfba19.png) ![05793-3428311090-portrait photo of wonder woman, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663318736-63507e5e18a4f616c9dfba19.png) ![05921-3536412260-portrait photo of harley quinn, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663375997-63507e5e18a4f616c9dfba19.png) ![05941-2474898187-portrait photo of Han solo, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663376996-63507e5e18a4f616c9dfba19.png) ![05960-3239527709-portrait photo of Yoda, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663377978-63507e5e18a4f616c9dfba19.png) ![05790-1118996825-portrait photo of Harley Quinn, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663368112-63507e5e18a4f616c9dfba19.png) ![05826-790578085-portrait photo of Sonic the hedgehog, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663378683-63507e5e18a4f616c9dfba19.png) ![05834-879254331-portrait photo of pikachu, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663376663-63507e5e18a4f616c9dfba19.png) ![05840-2710542385-portrait photo of indiana jones, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663377666-63507e5e18a4f616c9dfba19.png) ![05875-2065006831-portrait photo of master chief, highly detailed, by karsh.png](https://s3.amazonaws.com/moonup/production/uploads/1669663379060-63507e5e18a4f616c9dfba19.png)
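The prompt template above can be expressed as a tiny helper (the function name is hypothetical; only the template string and the "karsh" trigger word come from the card):

```python
def karsh_prompt(character: str) -> str:
    # Prompt template from the card; "karsh" is the trigger word
    # for the Textual Inversion embedding.
    return f"portrait photo of {character}, highly detailed, by karsh"

print(karsh_prompt("Yoda"))  # portrait photo of Yoda, highly detailed, by karsh
```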
PublicPrompts/Karsh
[ "license:openrail++", "region:us" ]
2022-11-28T19:18:32+00:00
{"license": "openrail++"}
2023-01-08T05:15:57+00:00
d6e7829d66a86747ef92559700cb8dd6fa7fc505
This dataset provides genealogical and typological information for the 104 languages used for pre-training the language model multilingual BERT (Devlin et al., 2019). The genealogical information covers the language family and the genus of each language. For the typological description of the pre-training languages, 36 features from WALS (Dryer & Haspelmath, 2013) were used. The information provided here can be used, among other things, to investigate how the pre-training corpus is structured from a genealogical and typological perspective and to what extent, if any, this structure is related to the performance of the language model. In addition to the table of linguistic features, a PDF file was uploaded listing all the grammars and language-descriptive materials used to compile the linguistic information.
MayaGalvez/linguistic_representation_mBERT
[ "region:us" ]
2022-11-28T19:44:50+00:00
{}
2023-01-26T10:56:38+00:00
716a2bec4f8024a8e47f2d59e17d824dd31712cf
# Contextualized Hate Speech: A dataset of comments in news outlets on Twitter ## Dataset Description - **Homepage:** - **Repository:** - **Paper**: ["Assessing the impact of contextual information in hate speech detection"](https://arxiv.org/abs/2210.00465), Juan Manuel Pérez, Franco Luque, Demian Zayat, Martín Kondratzky, Agustín Moro, Pablo Serrati, Joaquín Zajac, Paula Miguel, Natalia Debandi, Agustín Gravano, Viviana Cotik - **Point of Contact**: jmperez (at) dc uba ar ### Dataset Summary ![Graphical representation of the dataset](Dataset%20graph.png) This dataset is a collection of tweets that were posted in response to news articles from five specific Argentinean news outlets: Clarín, Infobae, La Nación, Perfil and Crónica, during the COVID-19 pandemic. The comments were analyzed for hate speech across eight different characteristics: against women, racist content, class hatred, against LGBTQ+ individuals, against physical appearance, against people with disabilities, against criminals, and for political reasons. All the data is in Spanish. Each comment is labeled with the following variables: | Label | Description | | :--------- | :---------------------------------------------------------------------- | | HATEFUL | Contains hate speech (HS)? | | CALLS | If it is hateful, is this message calling to (possibly violent) action? | | WOMEN | Is this against women? | | LGBTI | Is this against LGBTI people? | | RACISM | Is this a racist message? | | CLASS | Is this a classist message? | | POLITICS | Is this HS due to political ideology? | | DISABLED | Is this HS against disabled people? | | APPEARANCE | Is this HS against people due to their appearance? (e.g. fatshaming) | | CRIMINAL | Is this HS against criminals or people in conflict with law? | The extra label `CALLS` represents whether a comment is a call to violent action or not. ### Citation Information ```bibtex @article{perez2022contextual, author = {Pérez, Juan Manuel and Luque, Franco M. 
and Zayat, Demian and Kondratzky, Martín and Moro, Agustín and Serrati, Pablo Santiago and Zajac, Joaquín and Miguel, Paula and Debandi, Natalia and Gravano, Agustín and Cotik, Viviana}, journal = {IEEE Access}, title = {Assessing the Impact of Contextual Information in Hate Speech Detection}, year = {2023}, volume = {11}, number = {}, pages = {30575-30590}, doi = {10.1109/ACCESS.2023.3258973} } ``` ### Contributions [More Information Needed]
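A sketch of how the binary labels in the table above might be consumed downstream. The record layout and field values here are assumed for illustration, not the actual dataset schema:

```python
# Illustrative record; field names follow the label table in the card.
comment = {
    "text": "example comment",
    "HATEFUL": 1, "CALLS": 0, "WOMEN": 1, "LGBTI": 0, "RACISM": 0,
    "CLASS": 0, "POLITICS": 0, "DISABLED": 0, "APPEARANCE": 0, "CRIMINAL": 0,
}

CHARACTERISTICS = ["WOMEN", "LGBTI", "RACISM", "CLASS", "POLITICS",
                   "DISABLED", "APPEARANCE", "CRIMINAL"]

def hate_targets(record):
    """Return the characteristics flagged for a hateful comment."""
    if not record["HATEFUL"]:
        return []
    return [c for c in CHARACTERISTICS if record[c]]

print(hate_targets(comment))  # ['WOMEN']
```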
piuba-bigdata/contextualized_hate_speech
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:es", "hate_speech", "arxiv:2210.00465", "region:us" ]
2022-11-28T22:12:44+00:00
{"language": ["es"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "contextualized_hate_speech", "tags": ["hate_speech"]}
2023-04-29T13:19:58+00:00
6628c049aa8a40d342c62a2ae0ba1f58faf5c405
# Dataset Card for "VALUE2_mnli_been_done" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_been_done
[ "region:us" ]
2022-11-28T22:28:06+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11563230, "num_examples": 48515}, {"name": "dev_matched", "num_bytes": 290459, "num_examples": 1226}, {"name": "dev_mismatched", "num_bytes": 377910, "num_examples": 1509}, {"name": "test_matched", "num_bytes": 296760, "num_examples": 1199}, {"name": "test_mismatched", "num_bytes": 380324, "num_examples": 1541}], "download_size": 8136354, "dataset_size": 12908683}}
2022-11-28T22:28:28+00:00
cfdb34e465f9cc889acc3dada41dbadc719733b0
# Dataset Card for "VALUE2_mnli_dey_it" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_dey_it
[ "region:us" ]
2022-11-28T22:28:35+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7643138, "num_examples": 33927}, {"name": "dev_matched", "num_bytes": 189967, "num_examples": 863}, {"name": "dev_mismatched", "num_bytes": 171667, "num_examples": 709}, {"name": "test_matched", "num_bytes": 186114, "num_examples": 849}, {"name": "test_mismatched", "num_bytes": 158987, "num_examples": 717}], "download_size": 5183771, "dataset_size": 8349873}}
2022-11-28T22:28:58+00:00
dd7d5f26f4195e471f00be483db7deb336f9e4b3
# Dataset Card for "VALUE2_mnli_drop_aux" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_drop_aux
[ "region:us" ]
2022-11-28T22:29:12+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16847569, "num_examples": 78157}, {"name": "dev_matched", "num_bytes": 416576, "num_examples": 1924}, {"name": "dev_mismatched", "num_bytes": 415096, "num_examples": 1847}, {"name": "test_matched", "num_bytes": 402499, "num_examples": 1945}, {"name": "test_mismatched", "num_bytes": 417259, "num_examples": 1836}], "download_size": 11952293, "dataset_size": 18498999}}
2022-11-28T22:29:36+00:00
b34ff6bfd4c81026aa48e76648812bba17cc232e
# Dataset Card for "VALUE2_mnli_got" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_got
[ "region:us" ]
2022-11-28T22:29:42+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6007046, "num_examples": 25203}, {"name": "dev_matched", "num_bytes": 136053, "num_examples": 611}, {"name": "dev_mismatched", "num_bytes": 130788, "num_examples": 511}, {"name": "test_matched", "num_bytes": 152545, "num_examples": 644}, {"name": "test_mismatched", "num_bytes": 113320, "num_examples": 482}], "download_size": 4055143, "dataset_size": 6539752}}
2022-11-28T22:30:04+00:00
358b94cbdd4f65b999d32dc3807816afee1e5ee1
# Dataset Card for "VALUE2_mnli_lexical" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_lexical
[ "region:us" ]
2022-11-28T22:30:54+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 69129827, "num_examples": 331784}, {"name": "dev_matched", "num_bytes": 1720780, "num_examples": 8340}, {"name": "dev_mismatched", "num_bytes": 1845954, "num_examples": 8603}, {"name": "test_matched", "num_bytes": 1727232, "num_examples": 8345}, {"name": "test_mismatched", "num_bytes": 1840163, "num_examples": 8585}], "download_size": 51850969, "dataset_size": 76263956}}
2022-11-28T22:31:19+00:00
04e2bb8e10dc939bf2c82d235e0aa0db5bd1fa01
# Dataset Card for "VALUE2_mnli_negative_concord" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_negative_concord
[ "region:us" ]
2022-11-28T22:31:29+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11131248, "num_examples": 49529}, {"name": "dev_matched", "num_bytes": 266084, "num_examples": 1192}, {"name": "dev_mismatched", "num_bytes": 272231, "num_examples": 1203}, {"name": "test_matched", "num_bytes": 255070, "num_examples": 1140}, {"name": "test_mismatched", "num_bytes": 282348, "num_examples": 1214}], "download_size": 7641405, "dataset_size": 12206981}}
2022-11-28T22:31:52+00:00
9bf26eb2f103ce19e86f77adfcefed06cefc35de
# Dataset Card for "VALUE2_mnli_negative_inversion" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_negative_inversion
[ "region:us" ]
2022-11-28T22:31:55+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 140156, "num_examples": 658}, {"name": "dev_matched", "num_bytes": 2553, "num_examples": 14}, {"name": "dev_mismatched", "num_bytes": 3100, "num_examples": 14}, {"name": "test_matched", "num_bytes": 3968, "num_examples": 20}, {"name": "test_mismatched", "num_bytes": 3039, "num_examples": 14}], "download_size": 93760, "dataset_size": 152816}}
2022-11-28T22:32:16+00:00
7747f47c99997ae544f53c5cef570a52c7c91ccb
# Dataset Card for "VALUE2_mnli_null_genetive" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_null_genetive
[ "region:us" ]
2022-11-28T22:32:25+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12248137, "num_examples": 50122}, {"name": "dev_matched", "num_bytes": 283868, "num_examples": 1167}, {"name": "dev_mismatched", "num_bytes": 330715, "num_examples": 1276}, {"name": "test_matched", "num_bytes": 297546, "num_examples": 1245}, {"name": "test_mismatched", "num_bytes": 343629, "num_examples": 1336}], "download_size": 8810876, "dataset_size": 13503895}}
2022-11-28T22:32:48+00:00
903cbb0012e40483e512d0cc09d95daff583c1ea
# Dataset Card for "NDD_NER" ## Dataset Summary This Named Entity Recognition dataset was created for the neurodevelopmental disorders domain to detect domain-specific entities. PubMed abstracts were first annotated with the SciSpaCy UMLS entity linker, and specific semantic types were mapped to the required domain-specific labels, which were then validated during a manual curation process using Label Studio (an open-source data labeling tool). | Label Category | UMLS semantic types | |-----|-----| |CONDITION| Mental or Behavioral Dysfunction, Disease or Syndrome, Neoplastic Process, Congenital Abnormality | |ASSOCIATED_PROBLEM| Sign or Symptom, Mental Process, Injury or Poisoning | |PATIENT_GROUP| Age Group, Population Group, Patient or Disabled Group | |INTERVENTION| Therapeutic or Preventive Procedure, Health Care Activity | |TEST| Diagnostic Procedure, Intellectual Product, Research Activity, Laboratory Procedure | ## Dataset Splits |split name|number of examples|CONDITION|ASSOCIATED_PROBLEM|PATIENT_GROUP|INTERVENTION|TEST| |-----|-----|-----|-----|-----|-----|-----| |train| 341 | 320 | 189 | 240 | 273 | 228 | |test| 160 | 139 | 68 | 87 | 98 | 82 | |validation| 177 | 147 | 82 | 104 | 117 | 98 | ## Source Data PubMed abstracts retrieved for the ("Neurodevelopmental Disorders"[Mesh]) AND "Behavioral Disciplines and Activities"[Mesh] query using the NCBI E-utilities API.
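The `ner_tags` feature stores class indices; a minimal decoder, with the integer-to-BIO mapping taken from the dataset's feature definition (the helper name is illustrative):

```python
# Integer-to-BIO mapping from the dataset's class_label definition.
ID2LABEL = {
    0: "I-CONDITION", 1: "I-TEST", 2: "B-CONDITION", 3: "I-PATIENT_GROUP",
    4: "B-ASSOCIATED_PROBLEM", 5: "O", 6: "I-ASSOCIATED_PROBLEM",
    7: "B-INTERVENTION", 8: "B-PATIENT_GROUP", 9: "I-INTERVENTION",
    10: "B-TEST",
}

def decode_tags(ner_tags):
    """Map one example's integer ner_tags to BIO label strings."""
    return [ID2LABEL[t] for t in ner_tags]

print(decode_tags([2, 0, 5, 8, 3]))
# ['B-CONDITION', 'I-CONDITION', 'O', 'B-PATIENT_GROUP', 'I-PATIENT_GROUP']
```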
ManpreetK/NDD_NER
[ "region:us" ]
2022-11-28T22:32:42+00:00
{"viewer": true, "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "I-CONDITION", "1": "I-TEST", "2": "B-CONDITION", "3": "I-PATIENT_GROUP", "4": "B-ASSOCIATED_PROBLEM", "5": "O", "6": "I-ASSOCIATED_PROBLEM", "7": "B-INTERVENTION", "8": "B-PATIENT_GROUP", "9": "I-INTERVENTION", "10": "B-TEST"}}}}], "splits": [{"name": "train", "num_bytes": 156151, "num_examples": 341}, {"name": "validation", "num_bytes": 68495, "num_examples": 177}, {"name": "test", "num_bytes": 67949, "num_examples": 160}], "download_size": 78315, "dataset_size": 292595}}
2022-12-24T21:58:17+00:00
d25767c8e8c3dc54368933195d1d9689ab0009ba
# Dataset Card for "VALUE2_mnli_null_relcl" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_null_relcl
[ "region:us" ]
2022-11-28T22:32:57+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12182834, "num_examples": 45899}, {"name": "dev_matched", "num_bytes": 297057, "num_examples": 1123}, {"name": "dev_mismatched", "num_bytes": 365012, "num_examples": 1361}, {"name": "test_matched", "num_bytes": 303649, "num_examples": 1153}, {"name": "test_mismatched", "num_bytes": 344268, "num_examples": 1329}], "download_size": 8501673, "dataset_size": 13492820}}
2022-11-28T22:33:19+00:00
a7a9b0641a60169615bf785996249bb50051397d
[Needs More Information] # Dataset Card for Trains ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Each record in the file contains information about one particular shift that an engineer or conductor worked. Clock-in and clock-out information, plus many statistics, are provided. One column, named 'class', is actually the target. This column contains an integer that can have one of three values: 0: No accident occurred during this shift 1: An accident of type '1' occurred during this shift 2: 
An accident of type '2' occurred during this shift ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields FIRM Class Start End Length Night Gap WS idx Base StartAdj LenAdj Comp Trans Press p1s p1l p2s p2l MalAdj NFZ AFZ MFZ ### Data Splits ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
deancgarcia/cs4990_hw3
[ "region:us" ]
2022-11-28T22:33:11+00:00
{}
2022-11-30T03:14:44+00:00
51779a8032e8b08ef7eac4edbe504ae0adf51d49
# Dataset Card for "VALUE_mnli_uninflect" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liuyanchen1015/VALUE_mnli_uninflect
[ "region:us" ]
2022-11-28T22:33:39+00:00
{"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29268351, "num_examples": 124447}, {"name": "dev_matched", "num_bytes": 703766, "num_examples": 3056}, {"name": "dev_mismatched", "num_bytes": 768556, "num_examples": 3170}, {"name": "test_matched", "num_bytes": 714516, "num_examples": 3095}, {"name": "test_mismatched", "num_bytes": 790706, "num_examples": 3309}], "download_size": 20940263, "dataset_size": 32245895}}
2022-11-28T22:34:03+00:00
5c387417ea0a13d09be6a0f111cefdfbf2e03c9d
# Dataset Card for "petitions_29-ds" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
eminecg/petitions_29-ds
[ "region:us" ]
2022-11-29T00:08:53+00:00
{"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30457698.3, "num_examples": 2475}, {"name": "validation", "num_bytes": 3384188.7, "num_examples": 275}], "download_size": 15645193, "dataset_size": 33841887.0}}
2022-11-29T00:08:59+00:00
f328d536425ae8fcac5d098c8408f437bffdd357
# Dataset Card for "articles_and_comments" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
piuba-bigdata/articles_and_comments
[ "region:us" ]
2022-11-29T01:25:15+00:00
{"dataset_info": {"features": [{"name": "tweet_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "user", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "comments", "list": [{"name": "created_at", "dtype": "string"}, {"name": "prediction", "struct": [{"name": "APPEARANCE", "dtype": "int64"}, {"name": "CALLS", "dtype": "int64"}, {"name": "CLASS", "dtype": "int64"}, {"name": "CRIMINAL", "dtype": "int64"}, {"name": "DISABLED", "dtype": "int64"}, {"name": "LGBTI", "dtype": "int64"}, {"name": "POLITICS", "dtype": "int64"}, {"name": "RACISM", "dtype": "int64"}, {"name": "WOMEN", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "tweet_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4141280942, "num_examples": 537201}], "download_size": 1984419392, "dataset_size": 4141280942}}
2023-02-04T00:32:48+00:00
926412aac72105ba0ed868bf4d399cdea3180de3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-1fbe7e90-eada-4d68-89d2-f46803a319c3-101100
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T05:47:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T05:48:28+00:00
d85c0c1169a93dc797b6a5372cf63da40294c82c
airaspberry/hoodie-cad
[ "license:openrail", "region:us" ]
2022-11-29T05:50:40+00:00
{"license": "openrail"}
2022-12-01T20:47:53+00:00
753a3d855d0d2228a1aaa34b2e7a60a8739ae86d
# Dataset Card for "squad_validation_with_JJ_VB_synonyms" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mlxen/squad_validation_with_JJ_VB_synonyms
[ "region:us" ]
2022-11-29T06:06:25+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 10484818, "num_examples": 10570}], "download_size": 1825207, "dataset_size": 10484818}}
2022-11-29T21:29:40+00:00
d326c9b4f447291a11f134cf2e6dbd9f69fa24dc
# Dataset Card for "eclassTrainST" This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard.
JoBeer/eclassTrainST
[ "task_categories:sentence-similarity", "size_categories:100K<n<1M", "language:en", "region:us" ]
2022-11-29T07:05:17+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entailment", "dtype": "string"}, {"name": "contradiction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327174992, "num_examples": 698880}, {"name": "eval", "num_bytes": 219201779, "num_examples": 450912}], "download_size": 46751846, "dataset_size": 546376771}}
2023-01-07T12:10:51+00:00
c81bada54e2cac920652c83a97075164115fb7b8
Klarks/naruto
[ "license:afl-3.0", "region:us" ]
2022-11-29T07:31:04+00:00
{"license": "afl-3.0"}
2022-11-29T07:32:15+00:00
8d5519348babfa75b9222fb37891ff66ee0b0aab
SOAP dataset Initial Version
biomegix/soap_inital
[ "license:apache-2.0", "region:us" ]
2022-11-29T07:33:15+00:00
{"license": "apache-2.0"}
2022-11-29T07:36:46+00:00
0228595df3adbff7a03f84470702dccad7670c4e
This dataset is an unsplit version of the idrak dataset.
m-aliabbas/idrak_unsplitted
[ "region:us" ]
2022-11-29T08:33:37+00:00
{}
2022-11-30T05:17:29+00:00
b26b8416b9e750a4c47becf68ce1003e7b6de805
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-4c75f893-5bbd-4360-a0fd-dfda62c6960c-103102
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T08:41:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T08:42:03+00:00
ad27ccdbc20c6f9ca1ef7f90712d78d4ca5a5b91
# Dataset Card for "cord-10k-processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SuperNova672/cord-10k-processed
[ "region:us" ]
2022-11-29T08:45:59+00:00
{"dataset_info": {"features": [{"name": "data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 524148223, "num_examples": 695729}], "download_size": 275228391, "dataset_size": 524148223}}
2022-11-29T08:46:09+00:00
927b4b3fdd539334f3f15d7a5540f9b5d4a3fc2f
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License. Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies.
DBL/test
[ "region:us" ]
2022-11-29T08:48:08+00:00
{}
2022-11-29T09:50:20+00:00
7059741c560c9fdedb9e0b9f9e85d1ec38205422
nzh324/twinkle
[ "license:mit", "region:us" ]
2022-11-29T08:55:30+00:00
{"license": "mit"}
2022-11-29T08:56:16+00:00
0d9cb051a78ffe1c4c736a51de0f06251d295905
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-1eb4eb5e-abe1-49b9-90a0-e2c93c094b24-104103
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T09:06:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T09:07:05+00:00
ce0108bc56cbff2a37107e11918b20fa2e0fa13f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cd279959-d310-4487-bd83-52389ad5ed20-107105
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T09:32:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T09:32:38+00:00
3ed42ff5f08a711e22399ab4c4ad8dd4e3dbf323
# Dataset Card for "news_as2" Answer Sentence Selection version of the NewsQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection).
lucadiliello/news_as2
[ "region:us" ]
2022-11-29T11:19:38+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 316302353, "num_examples": 1840533}, {"name": "dev", "num_bytes": 8925506, "num_examples": 51844}, {"name": "test", "num_bytes": 8824280, "num_examples": 51472}], "download_size": 35957517, "dataset_size": 334052139}}
2022-11-29T11:26:06+00:00
e5fc7d683817bdf1e44d5c74daf6b110414bb843
# Dataset Card for "trivia_as2" Answer Sentence Selection version of the TriviaQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection).
lucadiliello/trivia_as2
[ "region:us" ]
2022-11-29T11:20:09+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 419044714, "num_examples": 1843349}, {"name": "dev", "num_bytes": 26773779, "num_examples": 117012}, {"name": "test", "num_bytes": 26061784, "num_examples": 114853}], "download_size": 184246492, "dataset_size": 471880277}}
2022-11-29T11:25:26+00:00
2971b1559b67e28d19d9ff76c3ea52b67a6993c5
# Dataset Card for "search_as2" Answer Sentence Selection version of the SearchQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection).
lucadiliello/search_as2
[ "region:us" ]
2022-11-29T11:20:42+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 758023208, "num_examples": 3281909}, {"name": "dev", "num_bytes": 55656603, "num_examples": 236360}, {"name": "test", "num_bytes": 55473661, "num_examples": 236792}], "download_size": 332417156, "dataset_size": 869153472}}
2022-11-29T11:25:45+00:00
151915540eb243def7f837e08d9ea651e5da8cfd
# Dataset Card for "hotpot_as2" Answer Sentence Selection version of the HotpotQA dataset. For more info, check out the original [repository](https://github.com/lucadiliello/answer-selection).
lucadiliello/hotpot_as2
[ "region:us" ]
2022-11-29T11:21:40+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 132583963, "num_examples": 489238}, {"name": "dev", "num_bytes": 6483895, "num_examples": 25295}, {"name": "test", "num_bytes": 6364224, "num_examples": 24846}], "download_size": 55519634, "dataset_size": 145432082}}
2022-11-29T11:24:51+00:00
31f2cdc6bff5ad6def58e07d3c3549dc393bc086
# Dataset Card for CONDA ## Table of Contents - [Dataset Description](#dataset-description) - [Abstract](#dataset-summary) - [Leaderboards](#leaderboards) - [Evaluation Metrics](#evaluation-metrics) - [Languages](#languages) - [Video](#video) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [CONDA](https://github.com/usydnlp/CONDA) - **Paper:** [CONDA: a CONtextual Dual-Annotated dataset for in-game toxicity understanding and detection](https://arxiv.org/abs/2106.06213) - **Point of Contact:** [Caren Han]([email protected]) ## Dataset Summary Traditional toxicity detection models have focused on the single utterance level without deeper understanding of context. We introduce CONDA, a new dataset for in-game toxic language detection enabling joint intent classification and slot filling analysis, which is the core task of Natural Language Understanding (NLU). The dataset consists of 45K utterances from 12K conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a robust dual semantic-level toxicity framework, which handles utterance and token-level patterns, and rich contextual chatting history. Accompanying the dataset is a thorough in-game toxicity analysis, which provides comprehensive understanding of context at utterance, token, and dual levels. Inspired by NLU, we also apply its metrics to the toxicity detection tasks for assessing toxicity and game-specific aspects. We evaluate strong NLU models on CONDA, providing fine-grained results for different intent classes and slot classes. Furthermore, we examine the coverage of toxicity nature in our dataset by comparing it with other toxicity datasets. ## Leaderboards The Codalab leaderboard can be found at: https://codalab.lisn.upsaclay.fr/competitions/7827 ### Evaluation Metrics **JSA**(Joint Semantic Accuracy) is used for ranking. 
An utterance is deemed correctly analysed only if both the utterance-level label and all the token-level labels (including Os) are correctly predicted. In addition, the F1 scores of the **utterance-level** E(xplicit) and I(mplicit) classes and the **token-level** T(oxicity), D(ota-specific), and S(game Slang) classes will be shown on the leaderboard (but not used as the ranking metric). ## Languages English ## Video Please enjoy a video presentation covering the main points from our paper: <p align="center"> [![ACL_video](https://img.youtube.com/vi/qRCPSSUuf18/0.jpg)](https://www.youtube.com/watch?v=qRCPSSUuf18) </p> ## Citation Information ``` @inproceedings{weld-etal-2021-conda, title = "{CONDA}: a {CON}textual Dual-Annotated dataset for in-game toxicity understanding and detection", author = "Weld, Henry and Huang, Guanghao and Lee, Jean and Zhang, Tongshu and Wang, Kunze and Guo, Xinghong and Long, Siqu and Poon, Josiah and Han, Caren", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.213", doi = "10.18653/v1/2021.findings-acl.213", pages = "2406--2416", } ```
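The JSA definition above can be sketched as follows. This is a minimal illustration, not the official Codalab scorer: the function name and the toy intent/slot labels are made up for the example.

```python
def joint_semantic_accuracy(gold, pred):
    """Toy sketch of Joint Semantic Accuracy (JSA).

    `gold` and `pred` are parallel lists of (intent, slot_tags) pairs,
    one per utterance; slot_tags is the full token-level label sequence
    (including the O labels). An utterance counts as correct only when
    the utterance-level intent AND every token-level label match.
    """
    correct = sum(
        1
        for (g_intent, g_slots), (p_intent, p_slots) in zip(gold, pred)
        if g_intent == p_intent and g_slots == p_slots
    )
    return correct / len(gold)


# Two utterances: the first is fully correct, the second has one
# wrong token label, so JSA = 1 / 2 = 0.5.
gold = [("E", ["T", "O", "O"]), ("A", ["O", "O"])]
pred = [("E", ["T", "O", "O"]), ("A", ["O", "S"])]
print(joint_semantic_accuracy(gold, pred))  # 0.5
```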
Matrix430/CONDA
[ "task_categories:text-classification", "task_categories:token-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:afl-3.0", "CONDA", "arxiv:2106.06213", "region:us" ]
2022-11-29T12:16:34+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["intent-classification"], "pretty_name": "CONDA", "tags": ["CONDA"]}
2022-11-30T07:03:52+00:00
07fbd698096ce94df0047ad32da4623ac9bf5a9e
License: MIT. This is a Knollingbox embedding for the SD2 768-v-ema.ckpt checkpoint, and my first public embedding. Included are two versions of the embedding: kbox-500 and kbox-1000. ![small city building inside a glass case](https://cdn.discordapp.com/attachments/1045349359044280360/1047121292752588820/04911-2385753755-a_small_detailed_city_intricate_details_golden_hour_style_kbox-500.png) ![bonsai tree inside a glass box](https://cdn.discordapp.com/attachments/1045349359044280360/1047125408581156874/04946-290283845-a_high_resolution_bonsai_tree_intricate_details_golden_hour_style_kbox-500.png) ![small city building inside a glass case](https://cdn.discordapp.com/attachments/1045349359044280360/1047121980769435759/04912-2385753755-a_small_detailed_city_intricate_details_golden_hour_style_kbox-1000.png)
Rocinante2000/knollingbox
[ "region:us" ]
2022-11-29T12:42:52+00:00
{}
2022-11-29T13:30:15+00:00
7cd8020a4fd1f511fa6e94949800cc6e68410027
This dataset contains images used in the documentation of HuggingFace's Optimum library.
optimum/documentation-images
[ "region:us" ]
2022-11-29T12:47:16+00:00
{}
2023-11-29T14:42:17+00:00
2c359b436ec66c97a2f676fe48c05719c9b26559
AiBototicus/Animals
[ "license:unknown", "region:us" ]
2022-11-29T12:49:04+00:00
{"license": "unknown"}
2022-11-29T12:49:04+00:00
10ecf033e30880bf2bfbf51f7c1b88684ccfbb43
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-native
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-29T13:46:08+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 419902426.3910719, "num_examples": 10000}, {"name": "test", "num_bytes": 41430604.33704293, "num_examples": 994}], "download_size": 440738761, "dataset_size": 461333030.72811484}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-11-30T05:41:32+00:00
b64163f010caf82f1d7393ea0d799459e089f69c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ecaf0dbc-43a3-4513-bbcf-d0f372522232-109106
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T14:07:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T14:08:27+00:00
469f8914c49ddeecd0dc060a315a3d2077dd4209
# Dataset Card for ccGigafida This repository by default loads the publicly available dataset ccGigafida, which contains a small subset of the Gigafida/Gigafida2 corpus. The full datasets are private due to copyright. **If you happen to have access to the full datasets, the script will also work with those.** Instead of ``` datasets.load_dataset("cjvt/cc_gigafida") ``` please use ``` datasets.load_dataset("cjvt/cc_gigafida", "private", data_dir="<directory-containing-gigafida(2)-TEI-files>") ``` **IMPORTANT:** The script will process all `.xml` files in the provided directory and its subdirectories - make sure there are no schema or metadata files in there! ### Dataset Summary ccGigafida is a reference corpus of Slovene texts. It is a publicly available subsample of an even larger reference corpus, Gigafida (and its successor Gigafida 2). The Gigafida corpus is an extensive collection of Slovene text of various genres, from daily newspapers, magazines, all kinds of books (fiction, non-fiction, textbooks), web pages, transcriptions of parliamentary debates and similar. ### Supported Tasks and Leaderboards Language modeling. ### Languages Slovenian. ## Dataset Structure ### Data Instances The data is loaded at document-level, i.e. one instance is one document. 
``` { 'id_doc': 'F0000123', 'doc_title': 'Novi tednik NT&RC', 'authors': ['neznani novinar'], 'publish_date': '1998-03-27', 'publisher': 'Novi tednik', 'genres': ['tisk/periodično/časopis'], 'doc_tokenized': [ [ ['Po', 'nekajletnem', 'počitku', 'pa', 'se', 'vračajo', 'tudi', 'kralji', 'dark', 'rock', 'godbe', 'JESUS', 'AND', 'THE', 'MARY', 'CHAIN', '.'], ['Brata', 'Reid', 'bosta', 'svojo', 'najnovejšo', 'kreacijo', '»', 'Cracking', 'Up', '«', 'objavila', 'v', 'ponedeljek', 'pri', 'trenutno', 'najuspešnejši', 'neodvisni', 'založbi', 'Creation', '(', 'vodi', 'jo', 'njun', 'nekdanji', 'menager', 'Alan', 'McGee', ',', 'zanjo', 'pa', 'poleg', 'Oasis', 'snema', 'še', 'cel', 'kup', 'popularnih', 'brit', '-', 'popovcev', ')', ',', 'tej', 'pa', 'bo', 'kmalu', 'sledil', 'tudi', 'album', '»', 'Munki', '«', '.'] ], [ ['Kultni', 'ameriški', 'tehno', 'freak', 'PLASTIKMAN', 'že', 'vrsto', 'let', 'velja', 'za', 'enega', 'izmed', 'najbolj', 'inovativnih', 'in', 'produktivnih', 'ustvarjalcev', 'sodobne', 'elektronske', 'glasbe', '.'], ['Za', 'založbo', 'Nova', 'Mute', 'je', 'v', 'preteklih', 'nekaj', 'letih', 'posnel', 'cel', 'kup', 'izvrstnih', 'underground', 'dance', 'glasbenih', 'izdelkov', ',', 'pred', 'nedavnim', 'pa', 'je', 'ljubitelje', 'tovrstne', 'godbe', 'presenetil', 'z', 'ambientalnimi', 'odisejadami', ',', 'zbranimi', 'na', 'LP-ju', '»', 'Refused', '«', ',', 'ki', 'ga', 'lahko', 'od', 'prejšnjega', 'ponedeljka', 'kupite', 'tudi', 'v', 'bolje', 'založenih', 'trgovinah', 'z', 'nosilci', 'zvoka', 'na', 'sončni', 'strani', 'Alp', '.'] ], [ ['STANE', 'ŠPEGEL'] ] ], 'doc_lemmas': [...], 'doc_msds': [...], 'doc_string': [ [ 'Po nekajletnem počitku pa se vračajo tudi kralji dark rock godbe JESUS AND THE MARY CHAIN. 
', 'Brata Reid bosta svojo najnovejšo kreacijo »Cracking Up« objavila v ponedeljek pri trenutno najuspešnejši neodvisni založbi Creation (vodi jo njun nekdanji menager Alan McGee, zanjo pa poleg Oasis snema še cel kup popularnih brit-popovcev), tej pa bo kmalu sledil tudi album »Munki«.' ], [ 'Kultni ameriški tehno freak PLASTIKMAN že vrsto let velja za enega izmed najbolj inovativnih in produktivnih ustvarjalcev sodobne elektronske glasbe. ', 'Za založbo Nova Mute je v preteklih nekaj letih posnel cel kup izvrstnih underground dance glasbenih izdelkov, pred nedavnim pa je ljubitelje tovrstne godbe presenetil z ambientalnimi odisejadami, zbranimi na LP-ju »Refused«, ki ga lahko od prejšnjega ponedeljka kupite tudi v bolje založenih trgovinah z nosilci zvoka na sončni strani Alp.' ], [ 'STANE ŠPEGEL' ] ], 'id_sents': [['F0000123.000005.0', 'F0000123.000005.1'], ['F0000123.000013.0', 'F0000123.000013.1'], ['F0000123.000020.0']] } ``` ### Data Fields - `id_doc`: the document ID (string); - `doc_title`: the document title (string); - `authors`: author(s) of the document (list of string): "neznani novinar" (sl) = ("unknown/unspecified journalist"); - `publish_date`: publish date (string); - `publisher`: publisher, e.g., the name of a news agency (string); - `genres`: genre(s) of the document (list of string) - possible genres: `['tisk', 'tisk/knjižno', 'tisk/knjižno/leposlovno', 'tisk/knjižno/strokovno', 'tisk/periodično', 'tisk/periodično/časopis', 'tisk/periodično/revija', 'tisk/drugo', 'internet']`; - `doc_tokenized`: tokenized document - the top level lists represent paragraphs, the lists in the level deeper represent sentences, and each sentence contains tokens; - `doc_lemmas`: lemmatized document - same structure as `doc_tokenized`; - `doc_msds`: MSD tags of the document - same structure as `doc_tokenized` ([tagset](http://nl.ijs.si/ME/V6/msd/html/msd-sl.html)); - `doc_string`: same as `doc_tokenized` but with properly placed spaces in sentences; - `id_sents`: IDs 
of sentences contained inside paragraphs of the document. ## Dataset Creation Gigafida consists of texts which were published between 1990 and 2011. The texts come from printed sources and from the web. Printed part contains fiction, non-fiction and textbooks, and periodicals such as daily newspapers and magazines. Texts originating from the web were published on news portals, pages of big Slovene companies and more important governmental, educational, research, cultural and similar institutions. For more information, please check http://eng.slovenscina.eu/korpusi/gigafida. ## Additional Information ### Dataset Curators Nataša Logar; et al. (please see http://hdl.handle.net/11356/1035 for the full list) ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information ``` @misc{ccGigafida, title = {Written corpus {ccGigafida} 1.0}, author = {Logar, Nata{\v s}a and Erjavec, Toma{\v z} and Krek, Simon and Gr{\v c}ar, Miha and Holozan, Peter}, url = {http://hdl.handle.net/11356/1035}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)}, issn = {2820-4042}, year = {2013} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
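As a usage sketch, the document-level `doc_string` field can be flattened back into readable plain text. The toy document below is shaped like the example instance above (real documents come from `datasets.load_dataset("cjvt/cc_gigafida")`); the helper name is illustrative.

```python
# Toy document shaped like the ccGigafida example instance above;
# real documents come from datasets.load_dataset("cjvt/cc_gigafida").
doc = {
    "id_doc": "F0000123",
    "doc_string": [
        ["Prva poved. ", "Druga poved."],
        ["Nov odstavek."],
    ],
}


def doc_to_text(doc):
    # Join the sentences within each paragraph, then join paragraphs
    # with a blank line to recover one plain-text document.
    return "\n\n".join("".join(sentences) for sentences in doc["doc_string"])


print(doc_to_text(doc))
```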
cjvt/cc_gigafida
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:masked-language-modeling", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:100M<n<1B", "language:sl", "license:cc-by-nc-sa-4.0", "gigafida", "gigafida2", "kres", "cckres", "reference corpus", "region:us" ]
2022-11-29T15:03:45+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100M<n<1B"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Written corpus ccGigafida 1.0", "tags": ["gigafida", "gigafida2", "kres", "cckres", "reference corpus"]}
2023-01-17T13:11:14+00:00
1dd58f346b4f22529f1b9893c5c5cf504fac0a68
# Dataset Card for "multi-label-class-github-issues-text-classification" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rami/multi-label-class-github-issues-text-classification
[ "region:us" ]
2022-11-29T16:32:12+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "bodyText", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2713984, "num_examples": 1556}, {"name": "valid", "num_bytes": 1296582, "num_examples": 778}, {"name": "test", "num_bytes": 1307650, "num_examples": 778}], "download_size": 2328003, "dataset_size": 5318216}}
2022-12-02T01:19:08+00:00
e6b4afb7405a5a89e4a7bf036a615619fba51025
nicoclemens/trainimagenes
[ "license:creativeml-openrail-m", "region:us" ]
2022-11-29T17:44:04+00:00
{"license": "creativeml-openrail-m"}
2022-11-29T17:48:20+00:00
76260d2a26c2848b0f307b99be4f5058e7c5f6e8
Dataset of captioned spectrograms (text describing the sound).
vucinatim/spectrogram-captions
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:afl-3.0", "stable diffusion sound generation text-to-sound text-to-image-to-sound spectrogram", "region:us" ]
2022-11-29T17:44:33+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Captioned generic audio clips with spectrogram images", "tags": ["stable diffusion sound generation text-to-sound text-to-image-to-sound spectrogram"]}
2023-01-03T00:24:32+00:00
0f62f36687cbb1ce33130e30b2347d70a19403a4
KirbyShrine/bagbean2
[ "license:cc-by-nc-nd-4.0", "region:us" ]
2022-11-29T18:21:35+00:00
{"license": "cc-by-nc-nd-4.0"}
2022-11-29T18:22:57+00:00
a769431e550840dd0df50afe70a8b7b7cd78b9a2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-1b1 * Dataset: mathemakitten/winobias_antistereotype_test * Config: mathemakitten--winobias_antistereotype_test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c9cce3-2280272258
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T18:34:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test", "dataset_config": "mathemakitten--winobias_antistereotype_test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-11-29T18:37:53+00:00
8f0961f4e5f025d6a43003f633a76414527cbedc
bidi
QonfiAI/ringeko
[ "region:us" ]
2022-11-29T19:27:51+00:00
{}
2022-11-29T19:32:58+00:00
cf1bea37aacfb2af416f095e18f9e6502183ed5c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-205dcc30-381f-492a-a8e8-fcfbe94b826c-110107
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T19:51:09+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T19:51:54+00:00
48e2443255e9b9e37fda73d8e63d8c30386eb2c5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-7a996eab-fd9f-4453-b298-d76d6134fbe7-111108
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T20:05:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-11-29T20:05:45+00:00
40963fbe2a534dd6f31561bef69f0f23419c6c28
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-318497e7-9d2a-403c-be28-ce4ff065ca1d-112109
[ "autotrain", "evaluation", "region:us" ]
2022-11-29T20:07:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-11-29T20:08:17+00:00
b013fdf8217123ef334fd6961e1b61c95025b28b
# Dataset Card for cSQuAD1 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A contrast set generated from the eval set of SQuAD. Questions and answers were modified to help detect dataset artifacts. This dataset only contains a validation set, which should only be used to evaluate a model. ### Supported Tasks Question Answering (SQuAD). ### Languages English ## Dataset Structure ### Data Instances The dataset contains 100 instances. ### Data Fields | Field | Description | |----------|--------------------------------------------------| | id | Id of the document containing the context | | title | Title of the document | | context | The context of the question | | question | The question to answer | | answers | A list of possible answers from the context | | answer_start | The index in the context where the answer starts | ### Data Splits A single `eval` split is provided. ## Dataset Creation The dataset was created by modifying a sample of 100 examples from the SQuAD validation split. ## Additional Information ### Licensing Information Apache 2.0 license ### Citation Information TODO: add citations
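Since the `eval` split is meant purely for evaluating a model, a minimal SQuAD-style exact-match scorer can be sketched as below. Normalization follows the common SQuAD recipe (lowercasing, stripping punctuation, articles, and extra whitespace); this is illustrative, not the official evaluation script:

```python
# Sketch of SQuAD-style exact-match scoring for this contrast set.
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """1.0 if the normalized prediction matches any gold answer."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

print(exact_match("The Eiffel Tower", ["eiffel tower"]))  # 1.0
```

Averaging `exact_match` over all 100 instances gives the usual EM metric reported for SQuAD-style sets.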
dferndz/cSQuAD1
[ "task_categories:question-answering", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "language:en", "license:apache-2.0", "region:us" ]
2022-11-30T00:03:13+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "cSQuAD1", "tags": []}
2022-12-09T23:17:57+00:00
1ef66c38364a6625b41fd2f45a326e9027fb8de2
KirbyShrine/wally_bagbean
[ "license:cc-by-nc-nd-4.0", "region:us" ]
2022-11-30T00:03:35+00:00
{"license": "cc-by-nc-nd-4.0"}
2022-11-30T00:05:00+00:00
7ca3ac844c7902a4e397f30a25dc9957c0741bd6
# Dataset Card for "olm-test-no-dedup" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/olm-test-no-dedup
[ "region:us" ]
2022-11-30T00:03:44+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 249659214.0, "num_examples": 46032}], "download_size": 149319674, "dataset_size": 249659214.0}}
2022-11-30T00:03:52+00:00
8d888f4ccdc85c0aaa139556fd5ce59ab713a5de
# Dataset Card for "olm-test-normal-dedup" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/olm-test-normal-dedup
[ "region:us" ]
2022-11-30T00:04:23+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 211642596.0, "num_examples": 40900}], "download_size": 128804894, "dataset_size": 211642596.0}}
2022-11-30T00:33:45+00:00
678d3b2173afb3c46c3df869598e41f5f382b199
# Dataset Card for "olm-test-normal-dedup-sorted" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/olm-test-normal-dedup-sorted
[ "region:us" ]
2022-11-30T00:17:14+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 211642596.0, "num_examples": 40900}], "download_size": 128095054, "dataset_size": 211642596.0}}
2022-11-30T00:33:18+00:00
a9a95432d0625e10bef6188c8196c8eaa9e24847
# Dataset Card for "olm-test-no-dedup-sorted" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/olm-test-no-dedup-sorted
[ "region:us" ]
2022-11-30T00:18:30+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 249659214.0, "num_examples": 46032}], "download_size": 143965993, "dataset_size": 249659214.0}}
2022-11-30T00:18:38+00:00
35f8d4a5476b9a971bc5f456a2fc55b266a15a76
# Dataset Card for cSQuAD2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary A contrast set to evaluate models trained on SQuAD on out-of-domain data. ### Supported Tasks Question answering (evaluation only) ### Languages English ## Dataset Structure ### Data Instances The dataset contains 40 instances. ### Data Fields | Field | Description | |----------|--------------------------------------------------| | id | Id of the document containing the context | | title | Title of the document | | context | The context of the question | | question | The question to answer | | answers | A list of possible answers from the context | | answer_start | The index in the context where the answer starts | ### Data Splits A single `test` split is provided. ## Dataset Creation The dataset was created from Wikipedia articles. ## Additional Information ### Licensing Information Apache 2.0 license ### Citation Information TODO: add citations
dferndz/cSQuAD2
[ "task_categories:question-answering", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "language:en", "license:apache-2.0", "region:us" ]
2022-11-30T00:49:11+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "cSQuAD2", "tags": []}
2022-12-09T23:18:39+00:00
e9e803033b0e37f72915b6d812d8a85d7b64dbf9
# Dataset Card for [Stackoverflow Post Questions] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Contributions](#contributions) ## Dataset Description Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process is the prioritization of the question. The classification scale usually consists of 4 values, P0, P1, P2, and P3, whose exact meaning varies across the industry. On the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming questions. ### Dataset Summary The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges. ### Languages English ## Dataset Structure title: string, body: string, label: int ### Data Splits The split is 40/40/20, where classes have been balanced to be around the same size. 
## Dataset Creation The dataset was extracted and labeled with the following BigQuery query: ``` SELECT title, body, CASE WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0 WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1 WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2 ELSE 3 END AS label FROM `bigquery-public-data`.stackoverflow.posts_questions ``` ### Source Data The data was extracted from the BigQuery public dataset: `bigquery-public-data.stackoverflow.posts_questions` #### Initial Data Collection and Normalization The original dataset was highly class-imbalanced: label 0: 977,424; label 1: 2,401,534; label 2: 3,418,179; label 3: 16,222,990; total: 23,020,127. The data was sampled from each class to have around the same number of records in every class. ### Contributions Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
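For readers who want to recompute labels locally, the CASE expression in the query above can be mirrored in plain Python. A sketch; `favorite_count or 0` reproduces SQL's treatment of NULL favorite counts, which fail every threshold comparison:

```python
# Mirror of the BigQuery CASE expression above: labels 0..3 from
# score, favorite_count, and view_count, checked top-down.

def priority_label(score, favorite_count, view_count):
    favorite_count = favorite_count or 0  # NULL favorites fail all thresholds
    if score >= 100 or favorite_count >= 100 or view_count >= 10000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1000:
        return 2
    return 3

print(priority_label(score=120, favorite_count=0, view_count=500))   # 0
print(priority_label(score=3, favorite_count=None, view_count=800))  # 3
```

Note that the conditions are ORed, so a question qualifies for a priority tier by crossing any one of the three thresholds.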
pacovaldez/stackoverflow-questions-2016
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "stackoverflow", "technical questions", "region:us" ]
2022-11-30T01:18:27+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "stackoverflow_post_questions", "tags": ["stackoverflow", "technical questions"]}
2022-11-30T23:16:54+00:00
f1cce90e993c9dc149d89aaef72146f291d04c24
# Dataset Card for "train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
devincapriola/train
[ "region:us" ]
2022-11-30T01:58:19+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 291158.0, "num_examples": 15}], "download_size": 286037, "dataset_size": 291158.0}}
2022-11-30T01:58:23+00:00
4578324578f80d0c1b26940c71881ba7059d923f
# Dataset Card for "train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gavincapriola/train
[ "region:us" ]
2022-11-30T01:58:21+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 291158.0, "num_examples": 15}], "download_size": 286037, "dataset_size": 291158.0}}
2022-11-30T01:58:27+00:00
aaa35d616834f34f8efcc4e050fd477564b2a7c6
# Dataset Card for "cc_olm_no_bigscience_filters" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/cc_olm_no_bigscience_filters
[ "region:us" ]
2022-11-30T02:02:11+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 166774086.86538956, "num_examples": 30389}], "download_size": 53612339, "dataset_size": 166774086.86538956}}
2022-11-30T02:19:18+00:00
78a469f4bdcc13437b291ee6b50dc7c49ab562e8
# Dataset Card for "cc_olm_no_dedup" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/cc_olm_no_dedup
[ "region:us" ]
2022-11-30T02:12:32+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 249659214, "num_examples": 46032}], "download_size": 148670687, "dataset_size": 249659214}}
2022-11-30T02:12:39+00:00
9df6fd230bec11170c0cedab5b502969264cb9d5
# Dataset Card for "cc_olm_standard_suffix_array_dedup" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tristan/cc_olm_standard_suffix_array_dedup
[ "region:us" ]
2022-11-30T02:17:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 194046870.62526068, "num_examples": 41463}], "download_size": 130811345, "dataset_size": 194046870.62526068}}
2022-11-30T02:17:36+00:00
221236d6f972cde27d3ca4ff1e8c4815b0c53dc4
# Dataset Card for "medmcqa_age_gender" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
society-ethics/medmcqa_age_gender
[ "region:us" ]
2022-11-30T02:20:29+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "opa", "dtype": "string"}, {"name": "opb", "dtype": "string"}, {"name": "opc", "dtype": "string"}, {"name": "opd", "dtype": "string"}, {"name": "cop", "dtype": "int64"}, {"name": "choice_type", "dtype": "string"}, {"name": "exp", "dtype": "string"}, {"name": "subject_name", "dtype": "string"}, {"name": "topic_name", "dtype": "string"}, {"name": "age.child", "dtype": "bool"}, {"name": "age.youth", "dtype": "bool"}, {"name": "age.adult", "dtype": "bool"}, {"name": "age.senior", "dtype": "bool"}, {"name": "gender.male", "dtype": "bool"}, {"name": "gender.female", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 132040415, "num_examples": 182822}, {"name": "validation", "num_bytes": 2224566, "num_examples": 4183}], "download_size": 84155335, "dataset_size": 134264981}}
2022-11-30T02:59:21+00:00
1a071ac7df3bd8ed577788253140eb276737134d
# Dataset Card for "amazon-shoe-reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mchen72/amazon-shoe-reviews
[ "region:us" ]
2022-11-30T02:52:52+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}, {"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}], "download_size": 11140374, "dataset_size": 18719628.0}}
2022-11-30T02:53:32+00:00
3c17b8671042b339f203018aef02f7b3088615e4
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-native-proc
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-30T05:44:54+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9605830041, "num_examples": 10000}, {"name": "test", "num_bytes": 954798551, "num_examples": 994}], "download_size": 2010871786, "dataset_size": 10560628592}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-11-30T20:46:05+00:00
e5c3719d0092d25288c49a5d55e10b1f81e56021
# Dataset Card for Nail Biting Classification ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://huggingface.co/datasets/alecsharpie/nailbiting_classification](https://huggingface.co/datasets/alecsharpie/nailbiting_classification) - **Repository:** [https://github.com/alecsharpie/nomo_nailbiting](https://github.com/alecsharpie/nomo_nailbiting) - **Point of Contact:** [[email protected]]([email protected]) ### Dataset Summary A binary image dataset for classifying nail biting. Images are cropped to show only the mouth area. It should contain edge cases such as drinking water, talking on the phone, and scratching the chin, 
all in the "no biting" category. ## Dataset Structure ### Data Instances - 7147 images - 14879790 bytes total - 12332617 bytes download ### Data Fields 128 x 64 (w x h, pixels), black and white. Labels: - '0': biting - '1': no_biting ### Data Splits - train: 6629 (11965737 bytes) - test: 1471 (2914053 bytes) ## Dataset Creation ### Curation Rationale I wanted to create a notification system to help me stop biting my nails. The dataset needed to contain lots of possible no-biting scenarios, e.g. talking on the phone. ### Source Data #### Initial Data Collection and Normalization The data was scraped from stock image sites, and photos of myself were taken with my webcam. MTCNN (https://github.com/ipazc/mtcnn) was then used to crop the images down to show only the mouth area. The images were then converted to a black & white colour scheme. ### Annotations #### Annotation process During the scraping process, images were labelled with a description, which I then manually sanity-checked. I labelled the ones of me manually. #### Who are the annotators? Alec Sharp ## Considerations for Using the Data ### Discussion of Biases & Limitations I tried to make the dataset diverse in terms of age and skin tone. However, this dataset contains a large number of images of one subject (me), so it is biased towards lower-quality webcam pictures of a white male with a short beard. ### Dataset Curators Alec Sharp ### Licensing Information MIT ### Contributions Thanks to [@alecsharpie](https://github.com/alecsharpie) for adding this dataset.
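One way the MTCNN-based mouth crop could be computed is from the detector's `mouth_left`/`mouth_right` keypoints. A hedged sketch: the 2:1 box matches the dataset's 128 x 64 aspect ratio, but the padding factor here is an assumption for illustration, not the value actually used to build the dataset:

```python
# Sketch: compute a 2:1 (width:height) crop box around the mouth from
# two keypoints, matching the 128x64 images in this dataset. The `pad`
# factor is an assumed amount of context around the mouth corners.

def mouth_crop_box(mouth_left, mouth_right, pad=1.8):
    """Return (left, top, right, bottom) around the mouth, 2:1 aspect."""
    cx = (mouth_left[0] + mouth_right[0]) / 2
    cy = (mouth_left[1] + mouth_right[1]) / 2
    half_w = (mouth_right[0] - mouth_left[0]) / 2 * pad
    half_h = half_w / 2  # height is half the width (128 x 64)
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

box = mouth_crop_box((100, 200), (140, 200))
print(box)  # (84.0, 182.0, 156.0, 218.0)
```

The resulting box would then be used to crop the frame, convert it to grayscale, and resize to 128 x 64 before classification.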
alecsharpie/nailbiting_classification
[ "task_categories:image-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "nailbiting", "image", "preprocesses", "region:us" ]
2022-11-30T06:02:22+00:00
{"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": [], "language": ["en"], "license": ["mit"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "acronym-identification", "pretty_name": "Nailbiting Classification", "tags": ["nailbiting", "image", "preprocesses"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "biting", "1": "no_biting"}}}}], "splits": [{"name": "train", "num_bytes": 11965731.715, "num_examples": 6629}, {"name": "test", "num_bytes": 1485426.0, "num_examples": 736}], "download_size": 11546517, "dataset_size": 13451157.715}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-11-30T07:12:04+00:00
1d69814578151ae7d0af524aee44947c15275b20
brandnewx/sd-v1-5
[ "license:creativeml-openrail-m", "region:us" ]
2022-11-30T06:36:26+00:00
{"license": "creativeml-openrail-m"}
2022-11-30T06:36:26+00:00
945a1d2cafa04367fa1c3752a765999e113c4fa6
ghlghl/test
[ "license:openrail", "region:us" ]
2022-11-30T06:47:03+00:00
{"license": "openrail"}
2022-12-05T07:43:34+00:00
9095e8f6dc0f0bb51623ee632058b625271171dd
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-accent
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-30T07:46:58+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 471755846.3910719, "num_examples": 10000}, {"name": "test", "num_bytes": 19497172.25755167, "num_examples": 451}], "download_size": 436911322, "dataset_size": 491253018.6486236}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-11-30T13:25:07+00:00
6f70815ecc5a8a6bef1f20e5c889b8872a047c38
# Dataset version 2 Work in progress...
albertvillanova/dummy-version
[ "source_datasets:extended|go_emotions", "license:openrail", "region:us" ]
2022-11-30T08:53:55+00:00
{"license": "openrail", "source_datasets": ["extended|go_emotions"]}
2022-12-02T10:43:51+00:00
5fb84d922ac6af2428ac1453f0b237fba17b2140
# Dataset Card for "hindawi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gagan3012/hindawi
[ "region:us" ]
2022-11-30T09:16:31+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Noto_Sans_Arabic", "1": "Readex_Pro", "2": "Amiri", "3": "Noto_Kufi_Arabic", "4": "Reem_Kufi_Fun", "5": "Lateef", "6": "Changa", "7": "Kufam", "8": "ElMessiri", "9": "Reem_Kufi", "10": "Noto_Naskh_Arabic", "11": "Reem_Kufi_Ink", "12": "Tajawal", "13": "Aref_Ruqaa_Ink", "14": "Markazi_Text", "15": "IBM_Plex_Sans_Arabic", "16": "Vazirmatn", "17": "Harmattan", "18": "Gulzar", "19": "Scheherazade_New", "20": "Cairo", "21": "Amiri_Quran", "22": "Noto_Nastaliq_Urdu", "23": "Mada", "24": "Aref_Ruqaa", "25": "Almarai", "26": "Alkalami", "27": "Qahiri"}}}}], "splits": [{"name": "train", "num_bytes": 4098675549.992, "num_examples": 64624}, {"name": "validation", "num_bytes": 459422119.624, "num_examples": 7196}], "download_size": 4536653671, "dataset_size": 4558097669.616}}
2022-12-12T00:34:11+00:00
901ab25812e9d8de92aae67ad015699f001a4b58
# ShapeNet SDF Sample Dataset This is a subset of the [ShapeNet SDF Dataset](https://ls7-data.cs.tu-dortmund.de/shape_net/ShapeNet_SDF.tar.gz) provided by the [ShapeGan Project](https://github.com/marian42/shapegan).<br> Only Uniform SDF samples are included. ### Contents The dataset contains 8,320 data samples. Each data sample contains 200,000 uniformly distributed points and their corresponding SDF values.<br> The dataset contains three shape classes: * Airplanes (2156 samples) * Chairs (4189 samples) * Sofas (1975 samples)
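As an illustration of what "uniformly distributed points and their corresponding SDF values" means (this sketch is not part of the dataset tooling; the sphere and sampling box are stand-ins, since real samples come from ShapeNet meshes):

```python
import math
import random

def sphere_sdf(p, radius=0.5):
    # Signed distance from point p to a sphere centered at the origin:
    # negative inside the surface, zero on it, positive outside.
    return math.sqrt(sum(c * c for c in p)) - radius

# Draw uniformly distributed sample points in the cube [-1, 1]^3 and
# evaluate the SDF at each one, mirroring the per-sample layout above
# (200,000 point/value pairs per shape).
random.seed(0)
points = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(200_000)]
sdf_values = [sphere_sdf(p) for p in points]

# Points with a negative SDF lie inside the shape.
inside = sum(1 for v in sdf_values if v < 0)
```

A network trained on such pairs (point in, distance out) can later be queried densely to reconstruct the surface as the zero level set.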
AlexWolski/ShapeNet-SDF-Uniform
[ "annotations_creators:no-annotation", "size_categories:1K<n<10K", "Artificial Intelligence", "Machine Learning", "Computational Geometry", "region:us" ]
2022-11-30T09:17:03+00:00
{"annotations_creators": ["no-annotation"], "size_categories": ["1K<n<10K"], "pretty_name": "ShapeNet SDF Uniform", "tags": ["Artificial Intelligence", "Machine Learning", "Computational Geometry"]}
2022-11-30T12:08:06+00:00
40afa4c0aae68de7f75c8837c303ffcac1630cc4
# Dataset Card for "laion-hd-subset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/laion-hd-subset
[ "region:us" ]
2022-11-30T09:48:05+00:00
{"dataset_info": {"features": [{"name": "similarity", "dtype": "float64"}, {"name": "hash", "dtype": "int64"}, {"name": "punsafe", "dtype": "float64"}, {"name": "pwatermark", "dtype": "float64"}, {"name": "LANGUAGE", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "key", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "error_message", "dtype": "null"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "exif", "dtype": "string"}, {"name": "md5", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4395359106.2963705, "num_examples": 13451}, {"name": "test", "num_bytes": 496904910.53063023, "num_examples": 1495}], "download_size": 4890190248, "dataset_size": 4892264016.827001}}
2022-11-30T11:07:56+00:00
9ea3f547a7df8aa97aa86181d4ea3246b51df720
AiBototicus/animalsV2
[ "license:unknown", "region:us" ]
2022-11-30T10:03:03+00:00
{"license": "unknown"}
2022-11-30T10:03:03+00:00
95a568b549723e0a383b3582e8dd6e8174d15b50
This is a Swedish named-entity (NE) dataset, Swe-NERC v1. Please see https://hdl.handle.net/10794/121 for more information. Included here is the manually tagged part.
vesteinn/swe-nerc
[ "region:us" ]
2022-11-30T10:26:32+00:00
{}
2022-11-30T12:40:35+00:00
9a392a9604a06b4d20929bfb75e8bd220dd10062
## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Citation Information](#citation-information) - [Contributions](#contributions) # Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ## Dataset Summary `tox21_SRp53` is a dataset included in [MoleculeNet](https://moleculenet.org/). The "Toxicology in the 21st Century" (Tox21) initiative created a public database measuring toxicity of compounds, which has been used in the 2014 Tox21 Data Challenge. This dataset contains qualitative toxicity measurements for 8k compounds on 12 different targets, including nuclear receptors and stress response pathways. # Dataset Structure ## Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: Measured results (Active/Inactive) for bioassays ## Data Splits The dataset is split into an 80/10/10 train/valid/test split using random split. # Additional Information ## Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. 
and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ## Contributions Thanks to [@SauravMaheshkar](https://github.com/SauravMaheshkar) and [@zanussbaum](https://github.com/zanussbaum) for adding this dataset
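The 80/10/10 random split described above can be sketched in a few lines (identifiers below are placeholders, and MoleculeNet itself uses DeepChem's splitter utilities; this is only a minimal illustration of the splitting scheme):

```python
import random

def random_split(items, fracs=(0.8, 0.1, 0.1), seed=42):
    # Shuffle once with a fixed seed for reproducibility, then slice
    # into train/valid/test according to the given fractions.
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n = len(items)
    n_train = int(fracs[0] * n)
    n_valid = int(fracs[1] * n)
    return items[:n_train], items[n_train:n_train + n_valid], items[n_train + n_valid:]

# Stand-ins for the ~8k compound SMILES strings in the dataset.
smiles = [f"mol_{i}" for i in range(8000)]
train, valid, test = random_split(smiles)
```

A random split is appropriate here; other MoleculeNet tasks instead use scaffold splits, which group molecules by their core ring structure before splitting.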
SauravMaheshkar/tox21_SRp53
[ "task_categories:other", "task_categories:graph-ml", "annotations_creators:machine-generated", "language_creators:machine-generated", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-30T10:33:29+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "task_categories": ["other", "graph-ml"], "task_ids": [], "pretty_name": "tox21_SRp53", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2023-02-12T14:30:43+00:00
81c1547cf4314a74f60c136171327a164430ac7b
# Germeval Task 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback In the connected, modern world, customer feedback is a valuable source of insights into the quality of products or services. This feedback allows other customers to benefit from the experiences of others and enables businesses to react to requests, complaints or recommendations. However, the more people use a product or service, the more feedback is generated, which results in the major challenge of analyzing huge amounts of feedback in an efficient, but still meaningful way. Thus, we propose a shared task on automatically analyzing customer reviews about “Deutsche Bahn” - the German public train operator with about two billion passengers each year. Example: > “RT @XXX: Da hört jemand in der Bahn so laut ‘700 Main Street’ durch seine Kopfhörer, dass ich mithören kann. :( :( :(“ As shown in the example, insights from reviews can be derived at different granularities. The review contains a general evaluation of the journey (the customer disliked the trip). Furthermore, the review evaluates a dedicated aspect of the train journey (“laut” → the customer did not like the noise level). Consequently, we frame the task as aspect-based sentiment analysis with four subtasks: ## Data format ``` ID <tab> Text <tab> Relevance <tab> Sentiment <tab> Aspect:Polarity (whitespace separated) ``` ## Links - http://ltdata1.informatik.uni-hamburg.de/germeval2017/ - https://sites.google.com/view/germeval2017-absa/ ## How to cite ```bibtex @inproceedings{germevaltask2017, title = {{GermEval 2017: Shared Task on Aspect-based Sentiment in Social Media Customer Feedback}}, author = {Michael Wojatzki and Eugen Ruppert and Sarah Holschneider and Torsten Zesch and Chris Biemann}, year = {2017}, booktitle = {Proceedings of the GermEval 2017 – Shared Task on Aspect-based Sentiment in Social Media Customer Feedback}, address={Berlin, Germany}, pages={1--12} } ```
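A minimal reader for the tab-separated format above might look like the following sketch (the `true`/`false` encoding of the relevance field and the example row are assumptions for illustration, not taken from the task description):

```python
def parse_germeval_line(line):
    # Split one record of the form:
    # ID <tab> Text <tab> Relevance <tab> Sentiment <tab> Aspect:Polarity pairs
    fields = line.rstrip("\n").split("\t")
    doc_id, text, relevance, sentiment = fields[:4]
    aspects = []
    if len(fields) > 4 and fields[4]:
        # Aspect:Polarity pairs are whitespace separated; split on the
        # last ":" so aspect labels may themselves contain colons.
        for pair in fields[4].split(" "):
            aspect, _, polarity = pair.rpartition(":")
            aspects.append((aspect, polarity))
    return {
        "id": doc_id,
        "text": text,
        "relevance": relevance == "true",
        "sentiment": sentiment,
        "aspects": aspects,
    }

# Hypothetical example row in the documented format.
row = parse_germeval_line(
    "42\tDie Bahn war mal wieder zu spät.\ttrue\tnegative\tPünktlichkeit:negative"
)
```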
malteos/germeval2017
[ "language:de", "region:us" ]
2022-11-30T12:53:43+00:00
{"language": ["de"]}
2022-11-30T13:49:08+00:00
07c4ea16f5d86bc8678df33857a64172d6f588f6
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-accent-proc
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-30T13:24:08+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11534718760.0, "num_examples": 10000}, {"name": "test", "num_bytes": 518496848.0, "num_examples": 451}], "download_size": 3935975243, "dataset_size": 12053215608.0}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-11-30T20:41:55+00:00
ac0f2f8264b8dffe51691c6281eeb5144c700980
Harionago/Jason
[ "license:cc-by-4.0", "region:us" ]
2022-11-30T14:25:40+00:00
{"license": "cc-by-4.0"}
2022-11-30T14:26:07+00:00
5f2ed7957159a3386228b4ac8cbc9e72c0ccca18
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
DTU54DL/common-accent-augmented
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-11-30T15:42:10+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 433226048, "num_examples": 451}, {"name": "train", "num_bytes": 9606026408, "num_examples": 10000}], "download_size": 2307300737, "dataset_size": 10039252456}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]}
2022-12-07T14:00:54+00:00
d7fbc3f8de6951582085319cc2292e2bb3f20e95
shpotes/waxal-wolof
[ "license:cc-by-sa-4.0", "region:us" ]
2022-11-30T15:56:01+00:00
{"license": "cc-by-sa-4.0"}
2022-12-07T14:09:19+00:00
520557a22a9299f45211e9bf66ee75bcd4b8c8d7
# Elite Voice Project これはホロライブ所属Vtuberさくらみこ氏の声をデータセット化し音声認識などで活用できるようにする事を目的とした非公式プロジェクトです。 --- # LICENSEについて ## データセット内の音声データ すべてのデータは、[hololive productionの二次創作ガイドライン](https://hololive.hololivepro.com/guidelines/)に準拠する形で利用されています。 これらのデータの著作権はカバー株式会社等が保有しており、リポジトリオーナー、コントリビューターは一切の権利を有しておりません。 --- # 当プロジェクトへのご協力 当プロジェクトは皆様のご協力を心より歓迎いたします。 以下の方法をご一読いただき、そのうえでプルリクエストをお願い致します。 ## 始める前に [hololive productionの二次創作ガイドライン](https://hololive.hololivepro.com/guidelines/)を必ずお読みください。 --- ## 音声データの追加 基本的には、データセットに追加したい音声データを`audio_raw`ディレクトリ内の所定のディレクトリへ追加していただく形になります。 git等を使用して音声データを追加する場合にはgit-lfsが必要になります。事前にgit-lfsのインストールをお願い致します。 `audio_raw`ディレクトリ内の構造は以下の通りです。 ``` audio_raw ├─twitch │ ├─test │ │ └─<ID> │ │ ├─1.mp3 │ │ ├─2.mp3 │ │ ├─3.mp3 │ │ ├─. │ │ └─. │ └─train │ └─<ID> │ ├─1.mp3 │ ├─2.mp3 │ ├─3.mp3 │ ├─. │ └─. ├─twitter │ ├─test │ │ └─<ID> │ │ ├─1.mp3 │ │ ├─2.mp3 │ │ ├─3.mp3 │ │ ├─. │ │ └─. │ └─train │ └─<ID> │ ├─1.mp3 │ ├─2.mp3 │ ├─3.mp3 │ ├─. │ └─. └─youtube ├─test │ └─<ID> │ ├─1.mp3 │ ├─2.mp3 │ ├─3.mp3 │ ├─. │ └─. └─train └─<ID> ├─1.mp3 ├─2.mp3 ├─3.mp3 ├─. └─. 
``` - `youtube`, `twitch`, `twitch`ディレクトリはデータセットに追加するデータの切り出し元のプラットフォーム名です。 - `train`と`test`ディレクトリについてですが、[OpenAI Whisper](https://openai.com/blog/whisper/)等の学習を行う際にtrainとtest、2種類のデータが必要になるために存在しています。 - `train`と`test`には同じ配信から切り出したデータを入れても良いですが全く同じデータを入れることは辞めてください。正確に学習を行うことができなくなります。 - `<ID>`には音声データを切り出す元になった配信等のIDが入ります。 - YouTubeであれば`https://www.youtube.com/watch?v=X9zw0QF12Kc`の`X9zw0QF12Kc`がディレクトリ名となります。 - Twitterであれば`https://twitter.com/i/spaces/1lPKqmyQPOAKb`の`1lPKqmyQPOAKb`がディレクトリ名となります。 - Twitchであれば`https://www.twitch.tv/videos/824387510`の`824387510`がディレクトリ名となります。 - `<ID>`ディレクトリ内には連番でmp3形式の音声ファイルを入れてください。 - 音声データは30秒以内である必要があります。 - BGMやSE、ノイズ等が含まれる音声データは避けてください。 - あまりに短すぎる音声データは避けてください。(既にデータセットにある音声は削除予定です。) - 出来る限り30秒に近い音声データを入れていただけると助かります。 - 文脈のある音声データが望ましいです。 - 英語の音声は避けてください。 --- ## 書き起こしテキストデータの追加 基本的には、データセットに追加したい音声データの書き起こしテキストデータを`transcript_raw`ディレクトリ内の所定のディレクトリへ追加していただく形になります。 `transcript_raw`ディレクトリ内の構造は以下の通りです。 ``` transcript_raw ├─twitch │ ├─test │ │ └─<ID>.csv │ │ │ └─train │ └─<ID>.csv │ ├─twitter │ ├─test │ │ └─<ID>.csv │ │ │ └─train │ └─<ID>.csv │ └─youtube ├─test │ └─<ID>.csv │ └─train └─<ID>.csv ``` - `youtube`, `twitch`, `twitch`ディレクトリはデータセットに追加するデータの切り出し元のプラットフォーム名です。 - `<ID>`には音声データを切り出す元になった配信等のIDが入ります。 - YouTubeであれば`https://www.youtube.com/watch?v=X9zw0QF12Kc`の`X9zw0QF12Kc`がディレクトリ名となります。 - Twitterであれば`https://twitter.com/i/spaces/1lPKqmyQPOAKb`の`1lPKqmyQPOAKb`がディレクトリ名となります。 - Twitchであれば`https://www.twitch.tv/videos/824387510`の`824387510`がディレクトリ名となります。 - `<ID>.csv`について - 必ず`audio_raw`に追加した音声データに対応した書き起こしテキストを追加する必要があります。 - 句読点、!,?等は正確に入れてください。 - 半角英数字記号を使用してください。(!, ?, 1等) - 漢数字は避けてください。 - csvファイルの1行目は必ず`path,sentence`で始めてください。 - 書き起こしテキストはWhisper等で一度書き起こしたものを修正して行く方法を推奨致します。 ### CSVファイルの記述例 ```csv path,sentence 1.mp3,雷が落ちた時のみこ 2.mp3,コメント止まった? 3.mp3,見えてるー?いやコメント止まった。壊れた。 4.mp3,インターネット繋がってない! 5.mp3,雷鳴ったよまた ```
Elite35P-Server/EliteVoiceProject
[ "annotations_creators:crowdsourced", "language_creators:さくらみこ", "language_creators:hololive production", "multilinguality:monolingual", "language:ja", "license:other", "region:us" ]
2022-11-30T16:10:15+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["\u3055\u304f\u3089\u307f\u3053", "hololive production"], "language": ["ja"], "license": "other", "multilinguality": ["monolingual"]}
2023-01-14T19:28:16+00:00
6d84ffafa2b74d9c9b8d567ad338ad2e6c255a6d
# Dataset Card for JSNLI [![CI](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml/badge.svg)](https://github.com/shunk031/huggingface-datasets_jsnli/actions/workflows/ci.yaml) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - Homepage: https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88 - Repository: https://github.com/shunk031/huggingface-datasets_jsnli ### Dataset Summary [日本語 SNLI(JSNLI) データセット - KUROHASHI-CHU-MURAWAKI LAB](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88 ) より: > 本データセットは自然言語推論 (NLI) の標準的ベンチマークである [SNLI](https://nlp.stanford.edu/projects/snli/) を日本語に翻訳したものです。 ### Dataset Preprocessing ### Supported Tasks and Leaderboards ### Languages 注釈はすべて日本語を主要言語としています。 ## Dataset Structure > データセットは TSV 
フォーマットで、各行がラベル、前提、仮説の三つ組を表します。前提、仮説は JUMAN++ によって形態素分割されています。以下に例をあげます。 ``` entailment 自転車 で 2 人 の 男性 が レース で 競い ます 。 人々 は 自転車 に 乗って います 。 ``` ### Data Instances ```python from datasets import load_dataset load_dataset("shunk031/jsnli", "without-filtering") ``` ```json { 'label': 'neutral', 'premise': 'ガレージ で 、 壁 に ナイフ を 投げる 男 。', 'hypothesis': '男 は 魔法 の ショー の ため に ナイフ を 投げる 行為 を 練習 して い ます 。' } ``` ### Data Fields ### Data Splits | name | train | validation | |-------------------|--------:|-----------:| | without-filtering | 548,014 | 3,916 | | with-filtering | 533,005 | 3,916 | ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process > SNLI に機械翻訳を適用した後、評価データにクラウドソーシングによる正確なフィルタリング、学習データに計算機による自動フィルタリングを施すことで構築されています。 > データセットは学習データを全くフィルタリングしていないものと、フィルタリングした中で最も精度が高かったものの 2 種類を公開しています。データサイズは、フィルタリング前の学習データが 548,014 ペア、フィルタリング後の学習データが 533,005 ペア、評価データは 3,916 ペアです。詳細は参考文献を参照してください。 #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information > 本データセットに関するご質問は nl-resource あっと nlp.ist.i.kyoto-u.ac.jp 宛にお願いいたします。 ### Dataset Curators ### Licensing Information > このデータセットのライセンスは、SNLI のライセンスと同じ [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) に従います。SNLI に関しては参考文献を参照してください。 ### Citation Information ```bibtex @article{吉越卓見 2020 機械翻訳を用いた自然言語推論データセットの多言語化, title={機械翻訳を用いた自然言語推論データセットの多言語化}, author={吉越卓見 and 河原大輔 and 黒橋禎夫 and others}, journal={研究報告自然言語処理 (NL)}, volume={2020}, number={6}, pages={1--8}, year={2020} } ``` ```bibtex @inproceedings{bowman2015large, title={A large annotated corpus for learning natural language inference}, author={Bowman, Samuel and Angeli, Gabor and Potts, Christopher and Manning, Christopher D}, booktitle={Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing}, pages={632--642}, year={2015} } ``` ```bibtex @article{young2014image, title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions}, author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia}, journal={Transactions of the Association for Computational Linguistics}, volume={2}, pages={67--78}, year={2014}, publisher={MIT Press} } ``` ### Contributions JSNLI データセットを公開してくださった吉越 卓見さま,河原 大輔さま,黒橋 禎夫さまに心から感謝します。
shunk031/jsnli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "multilinguality:monolingual", "language:ja", "license:cc-by-sa-4.0", "natural-language-inference", "nli", "jsnli", "region:us" ]
2022-11-30T16:34:02+00:00
{"language": ["ja"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "multi-input-text-classification"], "tags": ["natural-language-inference", "nli", "jsnli"], "datasets": ["without-filtering", "with-filtering"], "metrics": ["accuracy"]}
2022-12-12T07:36:58+00:00
19f3c7b2cc41d158ff70a666de27f76098a1b2e6
# Dataset Card for clintox ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `clintox` is a dataset included in [MoleculeNet](https://moleculenet.org/). Qualitative data of drugs approved by the FDA and those that have failed clinical trials for toxicity reasons. This uses the `CT_TOX` task. 
Note: there was one molecule in the training set that could not be converted to SELFIES (`*C(=O)[C@H](CCCCNC(=O)OCCOC)NC(=O)OCCOC`). ## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: clinical trial toxicity (or absence of toxicity) ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. ### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford. ### Licensing Information This dataset was originally released under an MIT license. ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
zpn/clintox
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-30T16:59:11+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "clintox", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-12-09T20:35:15+00:00
f22a1ceffaa5ecbecebef3f5ab15a33c3ee14768
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-large-book-summary-continued-r1 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-dd12a3-2278572227
[ "autotrain", "evaluation", "region:us" ]
2022-11-30T17:03:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary-continued-r1", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-12-01T01:50:12+00:00
0873f3eccba2eb07fc45a3b8faf4186c1345d6f4
# Dataset Card for delaney

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage: https://moleculenet.org/**
- **Repository: https://github.com/deepchem/deepchem/tree/master**
- **Paper: https://arxiv.org/abs/1703.00564**

### Dataset Summary

`delaney` (a.k.a. `ESOL`) is a dataset included in [MoleculeNet](https://moleculenet.org/). Water solubility data (log solubility in mols per litre) for common organic small molecules.

## Dataset Structure

### Data Fields

Each split contains

* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: log solubility in mols per litre

### Data Splits

The dataset is split into an 80/10/10 train/valid/test split using scaffold split.
### Source Data

#### Initial Data Collection and Normalization

Data was originally generated by the Pande Group at Stanford.

### Licensing Information

This dataset was originally released under an MIT license.

### Citation Information

```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
  doi = {10.48550/ARXIV.1703.00564},
  url = {https://arxiv.org/abs/1703.00564},
  author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
  keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Physical sciences},
  title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
  publisher = {arXiv},
  year = {2017},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

### Contributions

Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
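The `target` column is log10 of aqueous solubility expressed in mol/L. As a sketch of what that unit means (not of how the dataset's values were produced — those come from Delaney's curated measurements), a raw measurement in grams per litre would convert as follows, given the compound's molecular weight:

```python
import math

def log_solubility(grams_per_litre, mol_weight_g_per_mol):
    """Convert a measured solubility (g/L) into an ESOL-style target:
    log10 of solubility in mol/L. Illustrative only; the dataset ships
    the targets precomputed."""
    mols_per_litre = grams_per_litre / mol_weight_g_per_mol
    return math.log10(mols_per_litre)

# A compound with MW 46.07 g/mol dissolving at 4.607 g/L is 0.1 mol/L,
# so its target would be -1.0.
val = log_solubility(4.607, 46.07)
```

The 46.07 g/mol figure is just ethanol's molecular weight used as a convenient round number; any MW/concentration pair works the same way.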
zpn/delaney
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:n<1K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-11-30T17:06:42+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "delaney", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"]}
2022-11-30T17:09:36+00:00