| column | type | length (min–max) |
|---|---|---|
| sha | string | 40–40 |
| text | string | 0–13.4M |
| id | string | 2–117 |
| tags | list | – |
| created_at | string | 25–25 |
| metadata | string | 2–31.7M |
| last_modified | string | 25–25 |
4cae50882a24a955155db7d170b571e93ab8102f
# POS Tagging Dataset ## Original Data Source #### CoNLL-2003 E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147. #### The Penn Treebank M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput. Linguist., 1993, 19, 313–330. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
batterydata/pos_tagging
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "region:us" ]
2022-09-05T14:44:21+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Part-of-speech(POS) Tagging Dataset for BatteryDataExtractor"}
2022-09-05T15:05:33+00:00
39190a2140c5fc237fed556ef88449015271850b
# Abbreviation Detection Dataset ## Original Data Source #### PLOS L. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, https://arxiv.org/abs/2204.12061. #### SDU@AAAI-21 A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen, Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3285–3301. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
batterydata/abbreviation_detection
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "arxiv:2204.12061", "region:us" ]
2022-09-05T14:46:13+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Abbreviation Detection Dataset for BatteryDataExtractor"}
2022-09-05T15:02:48+00:00
4976bb5ace12abe22747787d3663a203946c319e
# CNER Dataset ## Original Data Source #### CHEMDNER M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17. #### MatScholar L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702. #### SOFC A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039. #### BioNLP G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
batterydata/cner
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "arxiv:2006.03039", "region:us" ]
2022-09-05T14:49:33+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Chemical Named Entity Recognition (CNER) Dataset for BatteryDataExtractor"}
2022-09-05T15:07:43+00:00
9b0c3068e673d857989dd4d001a118cd945d50e2
daspartho/anime-or-not
[ "license:apache-2.0", "region:us" ]
2022-09-05T16:58:29+00:00
{"license": "apache-2.0"}
2022-09-12T05:52:56+00:00
3d2bbff4d30d5c41d2cbf5b1d55fbc8d10cfdbaa
# Dataset Card for Code Comment Classification ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/poojaruhal/RP-class-comment-classification - **Repository:** https://github.com/poojaruhal/RP-class-comment-classification - **Paper:** https://doi.org/10.1016/j.jss.2021.111047 - **Point of Contact:** https://poojaruhal.github.io ### Dataset Summary The dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python. ### Supported Tasks and Leaderboards Single-label text classification and Multi-label text classification ### Languages Java, Python, Smalltalk ## Dataset Structure ### Data Instances ```json { "class" : "Absy.java", "comment":"* Azure Blob File System implementation of AbstractFileSystem. 
* This impl delegates to the old FileSystem", "summary":"Azure Blob File System implementation of AbstractFileSystem.", "expand":"This impl delegates to the old FileSystem", "rational":"", "deprecation":"", "usage":"", "exception":"", "todo":"", "incomplete":"", "commentedcode":"", "directive":"", "formatter":"", "license":"", "ownership":"", "pointer":"", "autogenerated":"", "noise":"", "warning":"", "recommendation":"", "precondition":"", "codingGuidelines":"", "extension":"", "subclassexplnation":"", "observation":"" } ``` ### Data Fields class: name of the class with the language extension. comment: class comment of the class. categories: the category that the sentence is classified into; it indicates a particular type of information. ### Data Splits 10-fold cross-validation ## Dataset Creation ### Curation Rationale To identify the information embedded in the class comments across various projects and programming languages. ### Source Data #### Initial Data Collection and Normalization It contains the dataset extracted from various open-source projects of three programming languages: Java, Smalltalk, and Python. - #### Java Each file contains all the extracted class comments from one project. We have a total of six Java projects. We chose a sample of 350 comments from all these files for our experiment. - [Eclipse.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/) - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Eclipse](https://github.com/eclipse). - [Guava.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guava.csv) - Extracted class comments from the Guava project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guava](https://github.com/google/guava). - [Guice.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guice.csv) - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guice](https://github.com/google/guice). - [Hadoop.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Hadoop.csv) - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Hadoop](https://github.com/apache/hadoop). - [Spark.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Spark.csv) - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Spark](https://github.com/apache/spark). - [Vaadin.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Vaadin.csv) - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
More detail about the project is available on GitHub [Vaadin](https://github.com/vaadin/framework) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Parser_Details.md) - Details of the parser used to parse class comments of Java [ Projects](https://doi.org/10.5281/zenodo.4311839) - #### Smalltalk/ Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment. - [GToolkit.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/GToolkit.csv) - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Moose.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Moose.csv) - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [PetitParser.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PetitParser.csv) - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Pillar.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Pillar.csv) - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [PolyMath.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PolyMath.csv) - Extracted class comments from the PolyMath project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Roassal2.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Roassal2.csv) -Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Seaside.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Seaside.csv) - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Parser_Details.md) - Details of the parser used to parse class comments of Pharo [ Projects](https://doi.org/10.5281/zenodo.4311839) - #### Python/ Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment. - [Django.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Django.csv) - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Django](https://github.com/django) - [IPython.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/IPython.csv) - Extracted class comments from the Ipython project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
More detail about the project is available on GitHub[IPython](https://github.com/ipython/ipython) - [Mailpile.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Mailpile.csv) - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Mailpile](https://github.com/mailpile/Mailpile) - [Pandas.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pandas.csv) - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [pandas](https://github.com/pandas-dev/pandas) - [Pipenv.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pipenv.csv) - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Pipenv](https://github.com/pypa/pipenv) - [Pytorch.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pytorch.csv) - Extracted class comments from the Pytorch project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [PyTorch](https://github.com/pytorch/pytorch) - [Requests.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Requests.csv) - Extracted class comments from the Requests project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Requests](https://github.com/psf/requests/) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Parser_Details.md) - Details of the parser used to parse class comments of Python [Projects](https://doi.org/10.5281/zenodo.4311839) ### Annotations #### Annotation process Four evaluators (all authors of this paper (https://doi.org/10.1016/j.jss.2021.111047)), each having at least four years of programming experience, participated in the annotation process. We partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification was reviewed by three evaluators. The details are given in the paper [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) #### Who are the annotators? [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) ### Personal and Sensitive Information Author information embedded in the text ## Additional Information ### Dataset Curators [Pooja Rani, Ivan, Manuel] ### Licensing Information [license: cc-by-nc-sa-4.0] ### Citation Information ``` @article{RANI2021111047, title = {How to identify class comment types? 
A multi-language approach for class comment classification}, journal = {Journal of Systems and Software}, volume = {181}, pages = {111047}, year = {2021}, issn = {0164-1212}, doi = {https://doi.org/10.1016/j.jss.2021.111047}, url = {https://www.sciencedirect.com/science/article/pii/S0164121221001448}, author = {Pooja Rani and Sebastiano Panichella and Manuel Leuenberger and Andrea {Di Sorbo} and Oscar Nierstrasz}, keywords = {Natural language processing technique, Code comment analysis, Software documentation} } ```
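The data instance above marks category membership in one column per category, with an empty string meaning "not in this category". As a minimal sketch of consuming that wide layout (the helper function and the field subset below are illustrative, not part of the dataset's tooling):

```python
# Hypothetical subset of the per-category columns shown in the data instance.
CATEGORY_COLUMNS = ["summary", "expand", "rational", "deprecation", "usage", "todo"]

def active_categories(row: dict) -> list:
    """Return the category columns that are non-empty for this row."""
    return [c for c in CATEGORY_COLUMNS if row.get(c, "")]

row = {
    "class": "Absy.java",
    "summary": "Azure Blob File System implementation of AbstractFileSystem.",
    "expand": "This impl delegates to the old FileSystem",
    "rational": "",
    "deprecation": "",
}
print(active_categories(row))  # ['summary', 'expand']
```

This is the usual way to recover a multi-label view from a one-column-per-label table.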
poojaruhal/Code-comment-classification
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "'source code comments'", "'java class comments'", "'python class comments'", "'\nsmalltalk class comments'", "region:us" ]
2022-09-05T20:25:33+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-label-classification"], "pretty_name": "Code-comment-classification\n", "tags": ["'source code comments'", "'java class comments'", "'python class comments'", "'\nsmalltalk class comments'"]}
2022-10-16T10:11:46+00:00
dbfb6932cd47473876f8869f8fae932cc9099edb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806176
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:51:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-07T02:32:35+00:00
214a9794ff850e1c35c9d22c58752e1ee0cd10df
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806177
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:51:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-06T09:16:50+00:00
f4f99ef293bfa13ce34d2cf7ece919d9776ff0ca
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-base-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806178
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:52:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-06T15:50:20+00:00
f1e5518e824f5eaddfe81377a58ea18c329abb55
# Dataset Card for BIOSSES ## Dataset Description - **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html - **Pubmed:** True - **Public:** True - **Tasks:** STS BIOSSES computes the similarity of biomedical sentences by utilizing WordNet as the general-domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using annotator scores as the gold standard. The source view returns each annotator's score individually, whereas the BigBio view returns the mean of the annotator scores. ## Citation Information ``` @article{souganciouglu2017biosses, title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}, author={Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan}, journal={Bioinformatics}, volume={33}, number={14}, pages={i49--i58}, year={2017}, publisher={Oxford University Press} } ```
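The source-view vs. BigBio-view distinction amounts to averaging the per-annotator scores; a minimal sketch with hypothetical values (the five scores and the 0–4 scale below are assumptions for illustration, not real annotations):

```python
from statistics import mean

# Source view: one similarity score per annotator for a sentence pair (hypothetical).
annotator_scores = [4.0, 3.0, 4.0, 3.0, 4.0]

# BigBio view: the consolidated score is the mean of the annotator scores.
bigbio_score = mean(annotator_scores)
print(bigbio_score)  # 3.6
```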
bigbio/biosses
[ "multilinguality:monolingual", "language:en", "license:gpl-3.0", "region:us" ]
2022-09-06T00:12:20+00:00
{"language": ["en"], "license": "gpl-3.0", "multilinguality": "monolingual", "pretty_name": "BIOSSES", "bigbio_language": ["English"], "bigbio_license_shortname": "GPL_3p0", "homepage": "https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]}
2022-12-22T15:32:58+00:00
d1cb85a2f99002f343fad318b7f3d9d1b308921f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-fbc19a-15816179
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T01:39:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-06T01:43:18+00:00
f716bbc8bd71337c4f04d64ba21af0a9043a76e3
# Dataset Card for UKP ASPECT ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998** - **Paper: https://aclanthology.org/P19-1054/** - **Leaderboard: n/a** - **Point of Contact: data\[at\]ukp.informatik.tu-darmstadt.de** - **(http://www.ukp.tu-darmstadt.de/)** ### Dataset Summary The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper). If you are having problems with downloading the dataset from the huggingface hub, please download it from [here](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998). 
### Supported Tasks and Leaderboards This dataset supports the following tasks: * Sentence pair classification * Topic classification ### Languages English ## Dataset Structure ### Data Instances Each instance consists of a topic, a pair of sentences, and an argument similarity label. ``` {"3d printing";"This could greatly increase the quality of life of those currently living in less than ideal conditions.";"The advent and spread of new technologies, like that of 3D printing can transform our lives in many ways.";"DTORCD"} ``` ### Data Fields * topic: the topic keywords used to retrieve the documents * sentence_1: the first sentence of the pair * sentence_2: the second sentence of the pair * label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS) * Different Topic/Can’t decide (DTORCD): Either one or both of the sentences belong to a topic different than the given one, or you can’t understand one or both sentences. If you choose this option, you need to very briefly explain why you chose it (e.g. “The second sentence is not grammatical”, “The first sentence is from a different topic” etc.). * No Similarity (NS): The two arguments belong to the same topic, but they don’t show any similarity, i.e. they speak about completely different aspects of the topic * Some Similarity (SS): The two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is way less specific than the other * High Similarity (HS): The two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words ### Data Splits The dataset currently does not contain standard data splits. ## Dataset Creation ### Curation Rationale This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering. 
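The four label codes can be captured in a small lookup; the mapping below restates the definitions given under Data Fields, and the `is_same_topic` helper is a hypothetical convenience for filtering, not part of the released corpus:

```python
# Gold-standard label codes of the UKP ASPECT corpus (see Data Fields).
LABELS = {
    "DTORCD": "Different Topic / Can't decide",
    "NS": "No Similarity",
    "SS": "Some Similarity",
    "HS": "High Similarity",
}

def is_same_topic(label: str) -> bool:
    """Pairs labeled DTORCD are off-topic or undecidable; the other three share a topic."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label}")
    return label != "DTORCD"

example = {
    "topic": "3d printing",
    "sentence_1": "This could greatly increase the quality of life ...",
    "sentence_2": "The advent and spread of new technologies ...",
    "label": "DTORCD",
}
print(is_same_topic(example["label"]))  # False
```

Filtering out DTORCD pairs like this is a typical first step before evaluating argument clustering on the three similarity grades.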
### Source Data #### Initial Data Collection and Normalization The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018). The ArgumenText system expects as input an arbitrary topic (query) and searches a large web crawl for relevant documents. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic). We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. For each of our 28 topics, we applied a sampling strategy that picks two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 argument pairs, about 130 pairs for each topic. #### Who are the source language producers? Unidentified contributors to the World Wide Web. ### Annotations #### Annotation process The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option. We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard. #### Who are the annotators? Crowd workers on Amazon Mechanical Turk ### Personal and Sensitive Information This dataset is fully anonymized. 
## Additional Information You can download the data via: ``` from datasets import load_dataset dataset = load_dataset("UKPLab/UKP_ASPECT") ``` Please find more information about the code and how the data was collected in the [paper](https://aclanthology.org/P19-1054/). ### Dataset Curators Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP. ### Licensing Information [CC-by-NC 3.0](https://creativecommons.org/licenses/by-nc/3.0/) ### Citation Information Please cite this data using: ``` @inproceedings{reimers2019classification, title={Classification and Clustering of Arguments with Contextualized Word Embeddings}, author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna}, booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, pages={567--578}, year={2019} } ``` ### Contributions Thanks to [@buenalaune](https://github.com/buenalaune) for adding this dataset. ## Tags annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc-by-nc-3.0 multilinguality: - monolingual pretty_name: UKP ASPECT Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - argument pair - argument similarity task_categories: - text-classification task_ids: - topic-classification - multi-input-text-classification - semantic-similarity-classification
UKPLab/UKP_ASPECT
[ "license:cc-by-nc-3.0", "region:us" ]
2022-09-06T07:30:15+00:00
{"license": "cc-by-nc-3.0"}
2023-06-19T07:18:13+00:00
c2bb89e72da89cf38680d5bb47fe689b0716bfc5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-small * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Carmen](https://huggingface.co/Carmen) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-0b05dc-15886185
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T09:39:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "t5-small", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T09:42:21+00:00
e0ec01c52f1ebc2be766493eca5f571b4e20474b
# Dataset Card for FaQuAD ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/liafacom/faquad - **Repository:** https://github.com/liafacom/faquad - **Paper:** https://ieeexplore.ieee.org/document/8923668/ <!-- - **Leaderboard:** --> - **Point of Contact:** Eraldo R. Fernandes <[email protected]> ### Dataset Summary Academic secretaries and faculty members of higher education institutions face a common problem: the abundance of questions sent by academics whose answers are found in available institutional documents. The official documents produced by Brazilian public universities are vast and dispersed, which discourages students from searching further for answers in such sources. In order to lessen this problem, we present FaQuAD: a novel machine reading comprehension dataset in the domain of Brazilian higher education institutions. FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 
2016]. It comprises 900 questions about 249 reading passages (paragraphs), which were taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to Brazilian higher education system. As far as we know, this is the first Portuguese reading comprehension dataset in this format. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields | name |train|validation| |---------|----:|----:| |faquad|837|63| ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
eraldoluis/faquad
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|wikipedia", "language:pt", "license:cc-by-4.0", "region:us" ]
2022-09-06T10:05:01+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "FaQuAD", "train-eval-index": [{"config": "plain_text", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "squad", "name": "SQuAD"}]}]}
2023-01-23T08:45:41+00:00
28d972c94caec3a6308383a261e6c84733baaa80
schibsted/recsys-slates-dataset
[ "license:apache-2.0", "region:us" ]
2022-09-06T10:27:53+00:00
{"license": "apache-2.0"}
2022-09-06T10:27:53+00:00
1e0c6e8c8ff4fe9d22b72ba8abbc408df84eb265
# AutoTrain Dataset for project: emotion-detection ## Dataset Description This dataset has been automatically processed by AutoTrain for project emotion-detection. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_tweet_id": 1694457763, "target": 8, "text": "I am going to see how long I can do this for." }, { "feat_tweet_id": 1694627613, "target": 8, "text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside stories will be out" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_tweet_id": "Value(dtype='int64', id=None)", "target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)", "text": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 31995 | | valid | 8005 |
rahulmallah/autotrain-data-emotion-detection
[ "task_categories:text-classification", "language:en", "region:us" ]
2022-09-06T12:04:07+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2022-09-06T12:13:37+00:00
c9bc2dc442b053e2f70f11cbcf6aa3ee01b54286
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title_and_Headline
[ "region:us" ]
2022-09-06T13:50:12+00:00
{}
2022-09-06T13:52:46+00:00
b314649ae9af4fd4e235b506acea00bb09ebe923
tartuNLP/finno-ugric-train
[ "license:cc-by-4.0", "region:us" ]
2022-09-06T14:27:21+00:00
{"license": "cc-by-4.0"}
2022-09-08T13:27:45+00:00
c0afa552316676917fa38974717285d6cb5f133d
git config --global credential.helper store
Riilax/Dali-2
[ "region:us" ]
2022-09-06T16:41:04+00:00
{}
2022-09-06T16:43:51+00:00
f1fed66dfcbbc155f73431e9f2c9362fe2ace7d4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: kamalkraj/bert-base-cased-ner-conll2003 * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@akdeniz27](https://huggingface.co/akdeniz27) for evaluating this model.
autoevaluate/autoeval-staging-eval-conll2003-conll2003-0054c2-15936187
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T16:51:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "kamalkraj/bert-base-cased-ner-conll2003", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-09-06T16:53:00+00:00
209d2db4b4a2ac4b477a184c8d5231fd5d4c81fb
# Dataset Card for "jigsaw-toxic-comment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
affahrizain/jigsaw-toxic-comment
[ "region:us" ]
2022-09-06T18:36:24+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "comment_clean", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57080609, "num_examples": 159100}, {"name": "dev", "num_bytes": 7809213, "num_examples": 22393}, {"name": "test", "num_bytes": 22245686, "num_examples": 63978}], "download_size": 13050863, "dataset_size": 87135508}}
2023-02-19T11:51:27+00:00
45970ba9a0fc0f0e7971757228ea1b17d9dd3dfb
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition
priyank-m/chinese_text_recognition
[ "task_categories:image-to-text", "task_ids:image-captioning", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:zh", "ocr", "text-recognition", "chinese", "region:us" ]
2022-09-06T20:18:47+00:00
{"annotations_creators": [], "language_creators": [], "language": ["zh"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "chinese_text_recognition", "tags": ["ocr", "text-recognition", "chinese"]}
2022-09-21T08:08:19+00:00
19654330f83566c724afc264534fa726aa834bb9
CShorten/1000-CORD19-Papers-Text
[ "license:afl-3.0", "region:us" ]
2022-09-06T21:04:48+00:00
{"license": "afl-3.0"}
2022-09-06T21:05:10+00:00
499e407cf6a86f408818969400d1de63163e65a1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fcbcd1-15976191
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T21:24:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["rouge", "accuracy", "exact_match"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T22:16:06+00:00
5909507bf7ac0113a0a906b0a5583c8b8e0d4085
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-5863f2-15966190
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T21:24:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["rouge", "accuracy"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T22:14:30+00:00
1139ac8154d30113fab374b3961faec562b0dd8f
# Dataset Card for citizen_nlu ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ### Dataset Description - **Homepage**: [NeuralSpace Homepage](https://huggingface.co/neuralspace) - **Repository:** [citizen_nlu Dataset](https://huggingface.co/datasets/neuralspace/citizen_nlu) - **Point of Contact:** [Juhi Jain](mailto:[email protected]) - **Point of Contact:** [Ayushman Dash](mailto:[email protected]) - **Size of downloaded dataset files:** 67.6 MB ### Dataset Summary NeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks. This challenge was created with the purpose of sparking AI applications to address some of the pressing problems in India and finding unique ways to address them. 
Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web. NeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen. Citizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries. Such services may not be needed regularly by any particular citizen, but when needed they are of utmost importance, and in general, the needs for such services are prevalent every day. Despite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential needs to the citizens with absolute ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly. As our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a citizen services bot with the ability to converse in vernacular languages would make it accessible to a vast group of people for whom English is not a language of choice, but who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants. ### Supported Tasks A key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition’. This primarily enables any chatbot to perform various tasks with ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants. 
#### citizen_nlu A manually-curated multilingual dataset by Data Engineers at [NeuralSpace](https://www.neuralspace.ai/) for citizen services in 9 Indian languages for a realistic information-seeking task with data samples written by native-speaking expert data annotators [here](https://www.neuralspace.ai/). The dataset files are available in CSV format. ### Languages The citizen_nlu data is available in nine Indian languages i.e, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 67.6 MB An example of 'test' looks as follows. ``` text,intents मेरे पिता की कार उनके कार्यालय की पार्किंग से कल से गायब है। वाहन संख्या केए-03-एचए-1985 । मैं एफआईआर कराना चाहता हूं।,ReportingMissingVehicle ``` An example of 'train' looks as follows. ```text,intents என் தாத்தா எனக்கு பிறந்தநாள் பரிசு கொடுத்தார் மஞ்சள் நான் டாடனானோவை இழந்தேன். காணவில்லை என புகார் தெரிவிக்க விரும்புகிறேன்,ReportingMissingVehicle ``` ### Data Fields The data fields are the same among all splits. #### citizen_nlu - `text`: a `string` feature. - `intent`: a `string` feature. - `type`: a classification label, with possible values including `train` or `test`. ### Data Splits #### citizen_nlu | |train|test| |----|----:|---:| |citizen_nlu| 287832| 4752| ### Contributions Mehar Bhatia ([email protected])
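Since the splits are distributed as CSV files with `text` and `intents` columns, a row can be parsed with the standard library — a minimal sketch using an abbreviated version of the Hindi test example above (the text is shortened here for illustration):

```python
import csv
import io

# Abbreviated CSV sample in the (text, intents) layout described above.
raw = ('text,intents\n'
       '"मेरे पिता की कार कल से गायब है। मैं एफआईआर कराना चाहता हूं।",'
       'ReportingMissingVehicle\n')

reader = csv.DictReader(io.StringIO(raw))
rows = list(reader)
print(rows[0]["intents"])  # -> ReportingMissingVehicle
```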
neuralspace/citizen_nlu
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_categories:text2text-generation", "task_categories:other", "task_categories:translation", "task_categories:conversational", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "task_ids:utterance-retrieval", "task_ids:document-retrieval", "task_ids:open-book-qa", "task_ids:closed-book-qa", "annotations_creators:other", "language_creators:other", "multilinguality:multilingual", "size_categories:n>1K", "source_datasets:original", "language:as", "language:bn", "language:gu", "language:hi", "language:kn", "language:mr", "language:pa", "language:ta", "language:te", "chatbots", "citizen services", "help", "emergency services", "health", "reporting crime", "region:us" ]
2022-09-07T03:43:33+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["as", "bn", "gu", "hi", "kn", "mr", "pa", "ta", "te"], "multilinguality": ["multilingual"], "size_categories": ["n>1K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval", "text2text-generation", "other", "translation", "conversational"], "task_ids": ["extractive-qa", "closed-domain-qa", "utterance-retrieval", "document-retrieval", "closed-domain-qa", "open-book-qa", "closed-book-qa"], "paperswithcode_id": "acronym-identification", "pretty_name": "Citizen Services NLU Multilingual Dataset.", "expert-generated license": ["cc-by-nc-sa-4.0"], "tags": ["chatbots", "citizen services", "help", "emergency services", "health", "reporting crime"], "configs": ["citizen_nlu"], "train-eval-index": [{"config": "citizen_nlu", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"sentence": "text", "label": "target"}, "metrics": [{"type": "citizen_nlu", "name": "citizen_nlu", "config": "citizen_nlu"}]}]}
2022-09-09T04:53:16+00:00
542460b9f8fefcc6544fdd06991e3a3d9be2eef3
# AutoTrain Dataset for project: citizen_nlu_bn ## Dataset Descritpion This dataset has been automatically processed by AutoTrain for project citizen_nlu_bn. ### Languages The BCP-47 code for the dataset's language is bn. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\u0997\u09a4 \u09e8 \u09ae\u09be\u09b8 \u0986\u09ae\u09be\u09b0 \u0986\u0997\u09c7 \u0995\u09b0\u09cb \u09a8\u09be \u0986\u09ae\u09bf \u0995\u09a4 \u09a6\u09bf\u09a8 \u09aa\u09b0\u09c7 \u09b0\u0995\u09cd\u09a4 \u09a6\u09bf\u09a4\u09c7 \u09aa\u09be\u09b0\u09bf?", "target": 3 }, { "text": "\u09b9\u09a0\u09be\u09ce \u0986\u09ae\u09bf \u09a6\u09cb\u0995\u09be\u09a8\u09c7 \u09af\u09be\u0993\u09af\u09bc\u09be\u09b0 \u099c\u09a8\u09cd\u09af \u098f\u0995\u099f\u09bf \u0996\u09be\u09b2\u09bf \u09b0\u09be\u09b8\u09cd\u09a4\u09be\u09af\u09bc \u09b9\u09be\u0981\u099f\u099b\u09bf\u09b2\u09be\u09ae \u09b8\u09be\u09a6\u09be \u09b0\u0999\u09c7\u09b0 \u0993\u09ac\u09bf 005639 \u0986\u09ae\u09bf \u09b0\u09bf\u09aa\u09cb\u09b0\u09cd\u099f \u0995\u09b0\u09ac \u09af\u0996\u09a8 \u0986\u09ae\u09bf \u09a4\u09be\u09b0 \u0995\u09be\u099b\u09c7 \u0986\u09b8\u09ac \u098f\u09ac\u0982 \u09a7\u09be\u0995\u09cd\u0995\u09be \u09a6\u09bf\u09af\u09bc\u09c7 \u099a\u09b2\u09c7 \u09af\u09be\u09ac", "target": 44 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=55, names=['ContactRealPerson', 'Eligibility For BloodDonationWithComorbidities', 'EligibilityForBloodDonationAgeLimit', 'EligibilityForBloodDonationCovidGap', 'EligibilityForBloodDonationForPregnantWomen', 'EligibilityForBloodDonationGap', 'EligibilityForBloodDonationSTD', 'EligibilityForBloodReceiversBloodGroup', 'EligitbilityForVaccine', 'InquiryForCovidActiveCasesCount', 'InquiryForCovidDeathCount', 'InquiryForCovidPrevention', 'InquiryForCovidRecentCasesCount', 'InquiryForCovidTotalCasesCount', 
'InquiryForDoctorConsultation', 'InquiryForQuarantinePeriod', 'InquiryForTravelRestrictions', 'InquiryForVaccinationRequirements', 'InquiryForVaccineCost', 'InquiryForVaccineCount', 'InquiryOfContact', 'InquiryOfCovidSymptoms', 'InquiryOfEmergencyContact', 'InquiryOfLocation', 'InquiryOfLockdownDetails', 'InquiryOfTiming', 'InquiryofBloodDonationRequirements', 'InquiryofBloodReceivalRequirements', 'InquiryofPostBloodDonationCareSchemes', 'InquiryofPostBloodDonationCertificate', 'InquiryofPostBloodDonationEffects', 'InquiryofPostBloodReceivalCareSchemes', 'InquiryofPostBloodReceivalEffects', 'InquiryofVaccinationAgeLimit', 'IntentForBloodDonationAppointment', 'IntentForBloodReceivalAppointment', 'ReportingAnimalAbuse', 'ReportingAnimalPoaching', 'ReportingChildAbuse', 'ReportingCyberCrime', 'ReportingDomesticViolence', 'ReportingDowry', 'ReportingDrugConsumption', 'ReportingDrugTrafficing', 'ReportingHitAndRun', 'ReportingMissingPerson', 'ReportingMissingPets', 'ReportingMissingVehicle', 'ReportingMurder', 'ReportingPropertyTakeOver', 'ReportingSexualAssault', 'ReportingTheft', 'ReportingTresspassing', 'ReportingVehicleAccident', 'StatusOfFIR'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 27146 | | valid | 6800 |
neuralspace/autotrain-data-citizen_nlu_bn
[ "task_categories:text-classification", "language:bn", "region:us" ]
2022-09-07T04:31:08+00:00
{"language": ["bn"], "task_categories": ["text-classification"]}
2022-09-07T04:32:14+00:00
90d581bb08843607d7d75eabeba4047109f4f434
asaxena1990/citizen_nlu
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-09-07T04:42:33+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-09-07T04:45:47+00:00
a77ffb4773b694d03c805d80ea128b44e5c709f3
# Dataset Card for solar3 ### Dataset Summary Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian). \(*) pronounce "š" as "sh" in "shoe". By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences. There is also an option to aggregate the instances at the **document-level** or **paragraph-level** by explicitly providing the correct config: ``` datasets.load_dataset("cjvt/solar3", "paragraph_level") datasets.load_dataset("cjvt/solar3", "document_level") ``` ### Supported Tasks and Leaderboards Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation. ### Languages Slovenian. 
## Dataset Structure ### Data Instances A sample instance from the dataset: ```json { 'id_doc': 'solar1', 'doc_title': 'KUS-G-slo-1-GO-E-2009-10001', 'is_manually_validated': True, 'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'], 'src_ling_annotations': { # truncated for conciseness 'lemma': ['”', 'ne', 'da', 'sovražiti', ...], 'ana': ['mte:U', 'mte:L', 'mte:Vd', ...], 'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...], 'ne_tag': [..., 'O', 'B-PER', 'O', ...], 'space_after': [False, True, True, False, ...] }, 'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'], # omitted for conciseness, the format is the same as in 'src_ling_annotations' 'tgt_ling_annotations': {...}, 'corrections': [ {'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']}, {'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']}, {'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']} ] } ``` The instance represents a correction in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by three elements in `corrections`): - a punctuation change: '”' -> '„'; - a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside); - addition of a new word: 'sebi'. 
### Data Fields - `id_doc`: a string containing the identifying name of the document in which the sentence appears; - `doc_title`: a string containing the assigned document title; - `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher; - `src_tokens`: words in the source sentence (`[]` if there is no source sentence); - `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token; - `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence); - `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token; - `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`. ## Dataset Creation The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. 
The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text. ## Additional Information ### Dataset Curators Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list) ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information ``` @misc{solar3, title = {Developmental corpus {\v S}olar 3.0}, author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok}, url = {http://hdl.handle.net/11356/1589}, note = {Slovenian language resource repository {CLARIN}.{SI}}, year = {2022} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
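Because each instance stores tokens and `space_after` flags separately, the original surface string can be rebuilt by joining tokens and inserting a space wherever the flag is true. A minimal sketch over the first six tokens of the sample instance above:

```python
# Rebuild a surface string from tokens plus the per-token
# `space_after` flags (first six tokens of the sample instance).
tokens = ['”', 'Ne', 'da', 'sovražim', ',', 'da']
space_after = [False, True, True, False, True, True]

parts = []
for tok, space in zip(tokens, space_after):
    parts.append(tok)
    if space:
        parts.append(' ')
detok = ''.join(parts).rstrip()
print(detok)  # -> ”Ne da sovražim, da
```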
cjvt/solar3
[ "task_categories:text2text-generation", "task_categories:other", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original", "language:sl", "license:cc-by-nc-sa-4.0", "grammatical-error-correction", "other-token-classification-of-text-errors", "region:us" ]
2022-09-07T08:16:23+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "other"], "task_ids": [], "pretty_name": "solar3", "tags": ["grammatical-error-correction", "other-token-classification-of-text-errors"]}
2022-10-21T06:35:45+00:00
aa18c10ce999c806bf6f30a050b0d9a720ccd0c3
**Published**: September 21st, 2022 <br> **Author**: Julius Breiholz # GARFAB-Dataset The (G)erman corpus of annotated (A)pp (R)eviews to detect (F)eature requests (A)nd (B)ug reports (GARFAB) is a dataset to fine-tune models regarding classification of app reviews (ASRs) into "Feature Requests", "Bug Reports" and "Irrelevants" for the German language. All ASRs were collected from the Google Play Store and were classified manually by two independent annotators. A weighted and a full version are published with the following distributions of ASRs: | | Feature Request | Bug Reports | Irrelevant | Total | | --- | --- | --- | --- | --- | full | 345 | 387 | 2212 | 2944 | weighted | 345 | 345 | 345 | 1035 |
julius-br/GARFAB
[ "license:mit", "region:us" ]
2022-09-07T10:33:31+00:00
{"license": "mit"}
2022-09-21T14:54:55+00:00
85f90b5212cc669b29aac223f6e7a97e82da95c9
# Reddit Demo dataset
jamescalam/reddit-demo
[ "region:us" ]
2022-09-07T10:57:04+00:00
{}
2022-09-07T11:12:43+00:00
b514058e84ca638776d8b92786dc41a343aafdbf
;oertjh
helliun/mePics
[ "region:us" ]
2022-09-07T12:36:53+00:00
{}
2022-09-07T13:33:55+00:00
bf8ef036aa26d956ce5adf2e4e614f2fa714d595
Outside/prova
[ "license:other", "region:us" ]
2022-09-07T12:38:43+00:00
{"license": "other"}
2022-09-07T12:38:43+00:00
c59a9221b13784714d149bd63d66e7c7df90ce3a
abcefgdfdsf/stablediff
[ "license:apache-2.0", "region:us" ]
2022-09-07T14:14:13+00:00
{"license": "apache-2.0"}
2022-09-07T14:14:14+00:00
37ff92ce72b49a5e1bfb603b158475a6506db739
nagyigergo/gyurcsany
[ "license:unknown", "region:us" ]
2022-09-07T15:53:18+00:00
{"license": "unknown"}
2022-09-07T15:56:02+00:00
11d59a59eeee7591bd6e8fe2611be016e9f15f22
Read this [BLOG](https://neuralmagic.com/blog/classifying-finance-tweets-in-real-time-with-sparse-transformers/) to see how I fine-tuned a sparse transformer on this dataset. ### Dataset Description The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic. 1. The dataset holds 21,107 documents annotated with 20 labels: ```python topics = { "LABEL_0": "Analyst Update", "LABEL_1": "Fed | Central Banks", "LABEL_2": "Company | Product News", "LABEL_3": "Treasuries | Corporate Debt", "LABEL_4": "Dividend", "LABEL_5": "Earnings", "LABEL_6": "Energy | Oil", "LABEL_7": "Financials", "LABEL_8": "Currencies", "LABEL_9": "General News | Opinion", "LABEL_10": "Gold | Metals | Materials", "LABEL_11": "IPO", "LABEL_12": "Legal | Regulation", "LABEL_13": "M&A | Investments", "LABEL_14": "Macro", "LABEL_15": "Markets", "LABEL_16": "Politics", "LABEL_17": "Personnel Change", "LABEL_18": "Stock Commentary", "LABEL_19": "Stock Movement", } ``` The data was collected using the Twitter API. The current dataset supports the multi-class classification task. ### Task: Topic Classification # Data Splits There are 2 splits: train and validation. Below are the statistics: | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 16,990 | | Validation | 4,118 | # Licensing Information The Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License.
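Given the label mapping above, a classifier's integer output can be turned into a human-readable topic name — a minimal sketch with a truncated copy of the mapping and a hypothetical model prediction:

```python
# Truncated copy of the label mapping above; `pred_id` stands in
# for a hypothetical model output.
topics = {
    "LABEL_0": "Analyst Update",
    "LABEL_5": "Earnings",
    "LABEL_19": "Stock Movement",
}

pred_id = 5
print(topics[f"LABEL_{pred_id}"])  # -> Earnings
```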
zeroshot/twitter-financial-news-topic
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "region:us" ]
2022-09-07T17:43:21+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "twitter financial news", "tags": ["twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "markets"]}
2022-12-04T16:50:10+00:00
baa2e9a0a5d19ff2838e9cfbceb85b81d7a06f8e
# Dataset Card for Law Stack Exchange Dataset ## Dataset Description - **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)** - **Point of Contact: [email protected]** ### Dataset Summary Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation". ### Citation Information ``` @inproceedings{li-etal-2022-parameter, title = "Parameter-Efficient Legal Domain Adaptation", author = "Li, Jonathan and Bhambhoria, Rohan and Zhu, Xiaodan", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.nllp-1.10", pages = "119--129", } ```
jonathanli/law-stack-exchange
[ "task_categories:text-classification", "language:en", "stackexchange", "law", "region:us" ]
2022-09-07T18:49:21+00:00
{"language": ["en"], "task_categories": ["text-classification"], "pretty_name": "Law Stack Exchange", "tags": ["stackexchange", "law"]}
2023-02-23T16:37:19+00:00
9d48f81e8065d6e3eaec1ad961067941818ed327
Blueo/images
[ "region:us" ]
2022-09-07T21:10:57+00:00
{}
2022-09-07T21:14:38+00:00
5873a8aa4a5b3b4010501de70241f853acbbadc0
# Dataset Card for US Accidents (2016 - 2021) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/sobhanmoosavi/us-accidents - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Description This is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. 
### Acknowledgements Please cite the following papers if you use this dataset: - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “[A Countrywide Traffic Accident Dataset](https://arxiv.org/abs/1906.05409).”, 2019. - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. ["Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights."](https://arxiv.org/abs/1909.09638) In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019. ### Content This dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. ### Inspiration US-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents. ### Usage Policy and Legal Disclaimer This dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. You may need to cite the above papers if you use this dataset. ### Inquiries or need help? 
For any inquiries, contact me at [email protected] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@sobhanmoosavi](https://kaggle.com/sobhanmoosavi) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/us-accidents
[ "license:cc-by-nc-sa-4.0", "arxiv:1906.05409", "arxiv:1909.09638", "region:us" ]
2022-09-07T21:24:31+00:00
{"license": ["cc-by-nc-sa-4.0"], "kaggle_id": "sobhanmoosavi/us-accidents"}
2022-09-07T21:24:52+00:00
651baf9f1fbef3d6fb3de9b01651f3a5454f8c09
nupurkmr9/tortoise
[ "license:mit", "region:us" ]
2022-09-08T01:55:31+00:00
{"license": "mit"}
2022-09-08T01:57:37+00:00
95ec1d31cef548b24b6071771ed2a2d317fd7717
# OneStopEnglish OneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification. This dataset is a version of [onestop_english](https://huggingface.co/datasets/onestop_english), randomly split into (64*3=) 192 train examples and 375 test examples (stratified).
SetFit/onestop_english
[ "license:cc-by-sa-4.0", "region:us" ]
2022-09-08T05:12:18+00:00
{"license": "cc-by-sa-4.0"}
2022-09-08T05:16:39+00:00
8f4edc041879a2e0162401ee1754a7555b660c6a
# School Notebooks Dataset Images of school notebooks with handwritten notes in English. The dataset annotations contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages. ## Annotation format The annotation is in COCO format. The `annotation.json` should have the following dictionaries: - `annotation["categories"]` - a list of dicts with category info (category names and indexes). - `annotation["images"]` - a list of dictionaries describing the images; each dictionary must contain the fields: - `file_name` - the name of the image file. - `id` - the image id. - `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary describes one polygon from the dataset and must contain the following fields: - `image_id` - the index of the image on which the polygon is located. - `category_id` - the polygon’s category index. - `attributes` - a dict with additional annotation information; the `translation` subdict holds the text translation for the line. - `segmentation` - the coordinates of the polygon: a list of numbers that are x and y coordinate pairs.
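As a sketch of consuming this layout, the snippet below groups polygons by image file name; the sample `annotation` dict is invented to match the field description above (real data would come from `json.load` on `annotation.json`):

```python
from collections import defaultdict

# Invented sample following the documented layout.
# In practice: import json; annotation = json.load(open("annotation.json"))
annotation = {
    "categories": [{"id": 0, "name": "handwritten_line"}],
    "images": [{"id": 1, "file_name": "page_001.jpg"}],
    "annotations": [
        {
            "image_id": 1,
            "category_id": 0,
            "attributes": {"translation": "hello world"},
            "segmentation": [[10, 10, 120, 10, 120, 40, 10, 40]],
        }
    ],
}

# Resolve image ids to file names, then collect each image's polygons.
file_names = {img["id"]: img["file_name"] for img in annotation["images"]}
polygons_per_image = defaultdict(list)
for ann in annotation["annotations"]:
    polygons_per_image[file_names[ann["image_id"]]].append(ann["segmentation"])

print(sorted(polygons_per_image))  # ['page_001.jpg']
```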
ai-forever/school_notebooks_EN
[ "task_categories:image-segmentation", "task_categories:object-detection", "source_datasets:original", "language:en", "license:mit", "optical-character-recognition", "text-detection", "ocr", "region:us" ]
2022-09-08T08:31:05+00:00
{"language": ["en"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
2023-02-09T18:26:07+00:00
360875ac83db1a044fa95d969013eda19d8c2667
Bunny dataset
Anastasia1812/bunny
[ "region:us" ]
2022-09-08T08:41:27+00:00
{}
2022-09-08T08:56:50+00:00
a10cd26104f054dc116a9dbc4a29c34b494eb9ae
# School Notebooks Dataset Images of school notebooks with handwritten notes in Russian. The dataset annotations contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages. ## Annotation format The annotation is in COCO format. The `annotation.json` should have the following dictionaries: - `annotation["categories"]` - a list of dicts with category info (category names and indexes). - `annotation["images"]` - a list of dictionaries describing the images; each dictionary must contain the fields: - `file_name` - the name of the image file. - `id` - the image id. - `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary describes one polygon from the dataset and must contain the following fields: - `image_id` - the index of the image on which the polygon is located. - `category_id` - the polygon’s category index. - `attributes` - a dict with additional annotation information; the `translation` subdict holds the text translation for the line. - `segmentation` - the coordinates of the polygon: a list of numbers that are x and y coordinate pairs.
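The `attributes["translation"]` field carries the ground-truth text for each line polygon; a small sketch of pulling those strings out per image (the sample data is invented to match the field description above):

```python
# Invented sample annotation following the documented layout; real data
# would be loaded from annotation.json.
annotation = {
    "images": [{"id": 7, "file_name": "notebook_page.jpg"}],
    "annotations": [
        {"image_id": 7, "category_id": 0,
         "attributes": {"translation": "first line"},
         "segmentation": [[0, 0, 10, 0, 10, 5, 0, 5]]},
        {"image_id": 7, "category_id": 0,
         "attributes": {"translation": "second line"},
         "segmentation": [[0, 6, 10, 6, 10, 11, 0, 11]]},
    ],
}

def lines_for_image(annotation, image_id):
    """Return the translated text of every polygon on one image."""
    return [a["attributes"]["translation"]
            for a in annotation["annotations"] if a["image_id"] == image_id]

print(lines_for_image(annotation, 7))  # ['first line', 'second line']
```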
ai-forever/school_notebooks_RU
[ "task_categories:image-segmentation", "task_categories:object-detection", "source_datasets:original", "language:ru", "license:mit", "optical-character-recognition", "text-detection", "ocr", "region:us" ]
2022-09-08T09:06:32+00:00
{"language": ["ru"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
2023-02-09T18:27:24+00:00
c7186656e42f3b8660bf4a0e7768d54bb8d9429d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: lewtun/sagemaker-distilbert-emotion-1 * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-emotion-default-39ecfd-16096203
[ "autotrain", "evaluation", "region:us" ]
2022-09-08T09:09:45+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion-1", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-08T09:10:12+00:00
b168b613f0d023619bf0d00d9b7b34e9bc407afe
jmacs/jmacsface
[ "license:cc", "region:us" ]
2022-09-08T10:30:18+00:00
{"license": "cc"}
2022-09-08T10:43:37+00:00
2482635b77c1cbd351e72955dca35bed0c135a41
merve/supersoaker-failures
[ "license:apache-2.0", "region:us" ]
2022-09-08T15:05:41+00:00
{"license": "apache-2.0"}
2022-09-08T15:06:06+00:00
c446a2bc325ba054ed9adb05a6113e5f41e04d68
Aitrepreneur/testing
[ "license:afl-3.0", "region:us" ]
2022-09-08T15:51:04+00:00
{"license": "afl-3.0"}
2022-09-08T15:52:29+00:00
6d5678654a99a8fd5150bf7523ced793e92a0be6
# Dataset Card for the-reddit-climate-change-dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-climate-change-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) - **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) ### Dataset Summary All the mentions of climate change on Reddit before Sep 1 2022. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. 
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
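Since posts and comments share most fields but differ in a few, a typical first step is splitting rows by `type`; a minimal sketch on invented sample rows (the field names follow the list above):

```python
# Invented rows carrying a subset of the documented fields.
rows = [
    {"type": "post", "id": "abc123", "score": 42, "title": "New climate report"},
    {"type": "comment", "id": "def456", "score": 7, "sentiment": -0.4},
    {"type": "comment", "id": "ghi789", "score": 3, "sentiment": 0.8},
]

posts = [r for r in rows if r["type"] == "post"]
comments = [r for r in rows if r["type"] == "comment"]

# The sentiment field only exists on comments, so aggregate there.
mean_sentiment = sum(c["sentiment"] for c in comments) / len(comments)
print(len(posts), len(comments), round(mean_sentiment, 2))  # 1 2 0.2
```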
SocialGrep/the-reddit-climate-change-dataset
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-09-08T17:24:14+00:00
{"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]}
2022-09-08T17:24:20+00:00
17f24d0e1728d03561905934d6ba0368431d4e42
# Dataset Card for Airbnb Stock Price ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@evangower](https://kaggle.com/evangower) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/airbnb-stock-price-new-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T17:48:04+00:00
{"license": ["cc0-1.0"], "kaggle_id": "evangower/airbnb-stock-price"}
2022-09-08T17:48:08+00:00
2e1bafd99ce03bfe95c2473ecc422bde8dd74ef2
# Dataset Card for Airbnb Stock Price ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. 
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@evangower](https://kaggle.com/evangower) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/airbnb-stock-price-new-new-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T17:52:57+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "evangower/airbnb-stock-price"}
2022-09-08T17:53:00+00:00
c5c7d736a46f8e0b84448d4a4d7b722f257eaea9
# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/6606485 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary <p><strong>A journal paper published in Energy Strategy Reviews details the method to create the data.</strong></p> <p><strong>https://www.sciencedirect.com/science/article/pii/S2211467X21001280</strong></p> <p>&nbsp;</p> <p>2021-09-09: Version 6.0.0 was created. Now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (https://www.northsealink.com). The previous version (5.0.4) should not be used - as there was an error with interconnector data having a static value over the summer 2021.</p> <p>&nbsp;</p> <p>2021-05-05: Version 5.0.0 was created. 
Datetimes now in ISO 8601 format (with capital letter 'T' between the date and time) rather than previously with a space (to RFC 3339 format) and with an offset to identify both UTC and localtime. MW values now all saved as integers rather than floats. Elexon data as always from www.elexonportal.co.uk/fuelhh, National Grid data from https://data.nationalgrideso.com/demand/historic-demand-data Raw data now added again for comparison of pre- and post-cleaning - to allow for training of additional cleaning methods. If using Microsoft Excel, the T between the date and time can be removed using the =SUBSTITUTE() command - substituting "T" with a space " "</p> <p>_____________________________________________________________________________________________________</p> <p>2021-03-02: Version 4.0.0 was created. Due to a new interconnector (IFA2 - https://en.wikipedia.org/wiki/IFA-2) being commissioned in Q1 2021, there is an additional column with data from National Grid - this is called 'POWER_NGEM_IFA2_FLOW_MW' in the espeni dataset. In addition, National Grid has dropped the column name 'FRENCH_FLOW' that used to provide the value for the column 'POWER_NGEM_FRENCH_FLOW_MW' in previous espeni versions. However, this has been changed to 'IFA_FLOW' in National Grid's original data, which is now called 'POWER_NGEM_IFA_FLOW_MW' in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid - and are unlikely to appear again in future.</p> <p>2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions' local time format - where the +01:00 offset was not carried through into the data properly. 
Now addressed - therefore - local time now has the format e.g. 2020-03-31 20:00:00+01:00 when in British Summer Time.</p> <p>2020-10-03: Version 2.0.0 was created as it looks like National Grid has had a significant change to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous values, but the difference from the previously published value grows the greater the embedded value is. The 'new' values are from https://data.nationalgrideso.com/demand/daily-demand-update from 2013.</p> <p>Previously: raw and cleaned datasets for Great Britain's publicly available electrical data from Elexon (www.elexonportal.co.uk) and National Grid (https://demandforecast.nationalgrid.com/efs_demand_forecast/faces/DataExplorer). Updated versions with more recent data will be uploaded with a differing version number and DOI.</p> <p>All data is released in accordance with Elexon's disclaimer and reservation of rights.</p> <p>https://www.elexon.co.uk/using-this-website/disclaimer-and-reservation-of-rights/</p> <p>This disclaimer is also felt to cover the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham.</p> ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The class labels in the dataset are in English. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by Grant Wilson, Noah Godfrey ### Licensing Information The license for this dataset is https://creativecommons.org/licenses/by-nc/4.0/legalcode ### Citation Information ```bibtex @dataset{grant_wilson_2022_6606485, author = {Grant Wilson and Noah Godfrey}, title = {{Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05}}, month = jun, year = 2022, note = {{Grant funding as part of Research Councils (UK) EP/L024756/1 - UK Energy Research Centre research programme Phase 3 Grant funding as part of Research Councils (UK) EP/V012053/1 - The Active Building Centre Research Programme (ABC RP)}}, publisher = {Zenodo}, version = {6.0.9}, doi = {10.5281/zenodo.6606485}, url = {https://doi.org/10.5281/zenodo.6606485} } ``` ### Contributions [More Information Needed]
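Given the ISO 8601 timestamps described above (capital 'T', explicit offset), Python's standard library can parse both the UTC and local-time columns; a small sketch (the example timestamp value follows the format quoted in the card):

```python
from datetime import datetime, timezone

# Parse an offset-aware espeni-style timestamp and normalise it to UTC.
local = datetime.fromisoformat("2020-03-31T20:00:00+01:00")
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2020-03-31T19:00:00+00:00
```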
nateraw/espeni-3
[ "license:unknown", "region:us" ]
2022-09-08T17:58:36+00:00
{"license": ["unknown"], "zenodo_id": "6606485", "converted_from": "zenodo"}
2022-09-08T17:58:52+00:00
f9846ec84537f7986056d138e0219648639dcdb8
annotations_creators: []
language: []
language_creators:
- other
license:
- afl-3.0
multilinguality: []
pretty_name: bunny images
size_categories:
- unknown
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
Anastasia1812/bunnies
[ "region:us" ]
2022-09-08T18:23:33+00:00
{}
2022-09-08T18:31:08+00:00
d0955128fa4c42ef9dd97fd022294a4474cf290e
# Dataset Card for Avocado Prices ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/neuromusic/avocado-prices - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Context It is a well-known fact that Millennials LOVE Avocado Toast. It's also a well-known fact that all Millennials live in their parents' basements. Clearly, they aren't buying homes because they are buying too much Avocado Toast! But maybe there's hope... if a Millennial could find a city with cheap avocados, they could live out the Millennial American Dream. ### Content This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]: > The table below represents weekly 2018 retail scan data for National retail volume (units) and price. 
Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table. Some relevant columns in the dataset: - `Date` - The date of the observation - `AveragePrice` - the average price of a single avocado - `type` - conventional or organic - `year` - the year - `Region` - the city or region of the observation - `Total Volume` - Total number of avocados sold - `4046` - Total number of avocados with PLU 4046 sold - `4225` - Total number of avocados with PLU 4225 sold - `4770` - Total number of avocados with PLU 4770 sold ### Acknowledgements Many thanks to the Hass Avocado Board for sharing this data!! http://www.hassavocadoboard.com/retail/volume-and-price-data ### Inspiration In which cities can millenials have their avocado toast AND buy a home? Was the Avocadopocalypse of 2017 real? [1]: http://www.hassavocadoboard.com/retail/volume-and-price-data ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@neuromusic](https://kaggle.com/neuromusic) ### Licensing Information The license for this dataset is odbl ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
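As a quick sanity check of the column layout described above, here is a minimal sketch of computing the cheapest region by mean `AveragePrice`. The sample rows are invented for illustration (they are not taken from the actual file), and the header below follows the card's column descriptions; the real file's header capitalization may differ slightly.

```python
import csv
import io
from collections import defaultdict

# Tiny inline sample mimicking the column layout described in the card;
# the values below are invented, not real rows from the CSV.
SAMPLE = """Date,AveragePrice,Total Volume,4046,4225,4770,type,year,region
2018-01-07,1.33,64236.62,1036.74,54454.85,48.16,conventional,2018,Albany
2018-01-07,1.93,1132.11,103.20,800.50,12.40,organic,2018,Albany
2018-01-07,0.98,120000.00,40000.00,60000.00,500.00,conventional,2018,Houston
"""

def cheapest_region(csv_text: str) -> str:
    """Return the region with the lowest mean AveragePrice."""
    totals = defaultdict(lambda: [0.0, 0])  # region -> [price sum, row count]
    for row in csv.DictReader(io.StringIO(csv_text)):
        acc = totals[row["region"]]
        acc[0] += float(row["AveragePrice"])
        acc[1] += 1
    return min(totals, key=lambda region: totals[region][0] / totals[region][1])

print(cheapest_region(SAMPLE))  # -> Houston
```

The same function applies unchanged to the full CSV once downloaded, since it only relies on the `AveragePrice` and `region` columns.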
nateraw/avocado-prices
[ "license:odbl", "region:us" ]
2022-09-08T19:35:54+00:00
{"license": ["odbl"], "converted_from": "kaggle", "kaggle_id": "neuromusic/avocado-prices"}
2022-09-08T19:43:27+00:00
9ee569ca22bab4e5b7addf77abb150463c4030c1
# Dataset Card for Midjourney User Prompts & Generated Images (250k) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary General Context === [Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). 
When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images. This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below). Midjourney's Discord Server --- Here is what the interaction with the Midjourney bot looks like on Discord: 1. Issuing an initial prompt: ![Screenshot showing how to issue an initial prompt](https://drive.google.com/uc?export=view&id=1k6BuaJNWThCr1x2Ezojx3fAmDIyeZhbp "Result of issuing an initial prompt") 2. Upscaling the bottom-left image: ![Screenshot showing how to request upscaling an image](https://drive.google.com/uc?export=view&id=15Y65Fe0eVKVPK5YOul0ZndLuqo4Lg4xk "Result of upscaling an image") 3. Requesting variations of the bottom-left image: ![Screenshot showing how to request a variation of a generated image](https://drive.google.com/uc?export=view&id=1-9kw69PgM5eIM5n1dir4lQqGCn_hJfOA "Result of requesting a variation of an image") Dataset Format === The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information. 
| User Prompt | Generated Image URL | | --- | --- | | anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png | | anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png | | https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png | Dataset Stats === The dataset contains: - **268k** messages from 10 public Discord channels collected over 28 days. - **248k** user-generated prompts and their associated generated images, out of which: + 60% are requests for new images (initial or variation requests for a previously-generated image), and + 40% are requests for upscaling previously-generated images. Prompt Analysis === Here are the most prominent phrases among the user-generated text prompts: ![word cloud](https://drive.google.com/uc?export=view&id=1J432wrecf2zibDFU5sT3BXFxqmt3PJ-P) Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens: ![prompt lengths](https://drive.google.com/uc?export=view&id=1fFObFvcWwOEGJ3k47G4fzIHZXmxS3RiW) See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.). 
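Given the file format above, extracting (prompt, image URL) pairs is a short exercise. The snippet below is only a sketch: the top-level `"messages"` key is the documented part, while the per-message fields used here (`content`, `attachments`) follow Discord's generic Message object and should be treated as assumptions; see the companion notebook for the exact extraction logic.

```python
import json

# Inline stand-in for one scraped file such as general-1_2022_06_20.json.
# The "content"/"attachments" fields are assumptions based on Discord's
# Message object, not guarantees about this particular dump.
RAW = json.dumps({
    "messages": [
        {"content": "anatomical heart fill with deers, neon, pastel, artstation",
         "attachments": [{"url": "https://cdn.discordapp.com/attachments/a/b/heart.png"}]},
        {"content": "a dark night with fog in a metropolis --ar 3:1",
         "attachments": []},  # e.g. a message without a generated image
    ]
})

def prompt_image_pairs(raw_json: str):
    """Yield (prompt, image_url) pairs, skipping messages without attachments."""
    for msg in json.loads(raw_json)["messages"]:
        if msg.get("attachments"):
            yield msg["content"], msg["attachments"][0]["url"]

pairs = list(prompt_image_pairs(RAW))
print(len(pairs))  # -> 1
```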
Sample Use Case === One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator). Here is how our model can help brainstorm creative prompts and speed up prompt engineering: ![prompt autocomplete model](https://drive.google.com/uc?export=view&id=1JqZ-CaWNpQ4iO0Qcd3b8u_QnBp-Q0PKu) Authors === This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai). 
Interesting Finds === Here are some of the generated images that drew our attention: | User Prompt | Generated Image | | --- | --- | | https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 | ![palace](https://drive.google.com/uc?export=view&id=1xl2Gr1TSWCh0p_8o_wJnQIsO1qxW02Z_) | | a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 | ![metropolis](https://drive.google.com/uc?export=view&id=16A-VtlbSZCaUFiA6CZQzevPgBGyBiXWI) | | https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat 
fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh | ![ancient](https://drive.google.com/uc?export=view&id=1a3jI3eiQwLbulaSS2-l1iGJ6-kokMMvc) | | https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 | ![cave](https://drive.google.com/uc?export=view&id=1gUx-3drfCBBFha8Hoal4Ly4efDXSrxlB) | | https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 | ![gothic](https://drive.google.com/uc?export=view&id=1nmsTEdPEbvDq9SLnyjjw3Pb8Eb-C1WaP) | ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/midjourney-texttoimage
[ "license:cc0-1.0", "region:us" ]
2022-09-08T19:49:52+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "succinctlyai/midjourney-texttoimage"}
2022-09-08T20:14:37+00:00
1cea5d99551c5817ca98c404c39b8846f04a3a12
# spanish-tweets ## A big corpus of tweets for pretraining embeddings and language models ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage**: https://github.com/pysentimiento/robertuito - **Paper**: [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/) - **Point of Contact:** jmperez (at) dc.uba.ar ### Dataset Summary A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations). ### Supported Tasks and Leaderboards Language Modeling ### Languages Mostly Spanish, but some Portuguese, English, and other languages. ## Dataset Structure ### Data Fields - *tweet_id*: id of the tweet - *user_id*: id of the user - *text*: text from the tweet ## Dataset Creation The full process of data collection is described in the paper. Here we roughly outline the main points: - A Spritzer collection uploaded to Archive.org dating from May 2019 was downloaded - From this, we only kept tweets with language metadata equal to Spanish, and marked the users who posted these messages. - Then, the timeline of each of these marked users was downloaded. This corpus consists of 622M tweets from around 432K users. 
Please note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7–8% of the tweets are not in Spanish). ### Citation Information ``` @inproceedings{perez-etal-2022-robertuito, title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish", author = "P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alonso Alemany, Laura and Luque, Franco M.", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.785", pages = "7235--7243", abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.", } ```
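The language-filter step outlined in the collection process can be sketched in a few lines. The records below are invented, and the `lang` field stands in for Twitter's language metadata; it is not a field of the released dataset itself, which only ships `tweet_id`, `user_id`, and `text`.

```python
# Toy re-creation of the collection pipeline described above: keep tweets
# whose language metadata is "es" and mark their authors for the later
# timeline download. All records here are invented examples.
sample = [
    {"tweet_id": "1", "user_id": "u1", "text": "hola mundo", "lang": "es"},
    {"tweet_id": "2", "user_id": "u2", "text": "hello world", "lang": "en"},
    {"tweet_id": "3", "user_id": "u1", "text": "qué tal", "lang": "es"},
]

spanish = [t for t in sample if t["lang"] == "es"]
marked_users = {t["user_id"] for t in spanish}
print(len(spanish), sorted(marked_users))  # -> 2 ['u1']
```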
pysentimiento/spanish-tweets
[ "language:es", "region:us" ]
2022-09-08T20:02:38+00:00
{"language": "es", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tweet_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82649695458, "num_examples": 597433111}, {"name": "test", "num_bytes": 892219251, "num_examples": 6224733}], "download_size": 51737237106, "dataset_size": 83541914709}}
2023-07-13T14:44:41+00:00
e0c29cfa541e8a082ce6ee1c9bec75d37333a98d
# Dataset Card for Midjourney User Prompts & Generated Images (250k) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary General Context === [Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). 
When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images. This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below). Midjourney's Discord Server --- Here is what the interaction with the Midjourney bot looks like on Discord: 1. Issuing an initial prompt: ![Screenshot showing how to issue an initial prompt](https://drive.google.com/uc?export=view&id=1k6BuaJNWThCr1x2Ezojx3fAmDIyeZhbp "Result of issuing an initial prompt") 2. Upscaling the bottom-left image: ![Screenshot showing how to request upscaling an image](https://drive.google.com/uc?export=view&id=15Y65Fe0eVKVPK5YOul0ZndLuqo4Lg4xk "Result of upscaling an image") 3. Requesting variations of the bottom-left image: ![Screenshot showing how to request a variation of a generated image](https://drive.google.com/uc?export=view&id=1-9kw69PgM5eIM5n1dir4lQqGCn_hJfOA "Result of requesting a variation of an image") Dataset Format === The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information. 
| User Prompt | Generated Image URL | | --- | --- | | anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png | | anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png | | https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png | Dataset Stats === The dataset contains: - **268k** messages from 10 public Discord channels collected over 28 days. - **248k** user-generated prompts and their associated generated images, out of which: + 60% are requests for new images (initial or variation requests for a previously-generated image), and + 40% are requests for upscaling previously-generated images. Prompt Analysis === Here are the most prominent phrases among the user-generated text prompts: ![word cloud](https://drive.google.com/uc?export=view&id=1J432wrecf2zibDFU5sT3BXFxqmt3PJ-P) Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens: ![prompt lengths](https://drive.google.com/uc?export=view&id=1fFObFvcWwOEGJ3k47G4fzIHZXmxS3RiW) See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.). 
Sample Use Case === One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator). Here is how our model can help brainstorm creative prompts and speed up prompt engineering: ![prompt autocomplete model](https://drive.google.com/uc?export=view&id=1JqZ-CaWNpQ4iO0Qcd3b8u_QnBp-Q0PKu) Authors === This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai). 
Interesting Finds === Here are some of the generated images that drew our attention: | User Prompt | Generated Image | | --- | --- | | https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 | ![palace](https://drive.google.com/uc?export=view&id=1xl2Gr1TSWCh0p_8o_wJnQIsO1qxW02Z_) | | a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 | ![metropolis](https://drive.google.com/uc?export=view&id=16A-VtlbSZCaUFiA6CZQzevPgBGyBiXWI) | | https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat 
fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh | ![ancient](https://drive.google.com/uc?export=view&id=1a3jI3eiQwLbulaSS2-l1iGJ6-kokMMvc) | | https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 | ![cave](https://drive.google.com/uc?export=view&id=1gUx-3drfCBBFha8Hoal4Ly4efDXSrxlB) | | https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 | ![gothic](https://drive.google.com/uc?export=view&id=1nmsTEdPEbvDq9SLnyjjw3Pb8Eb-C1WaP) | ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/midjourney-texttoimage-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T20:17:45+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "succinctlyai/midjourney-texttoimage"}
2022-09-08T20:22:05+00:00
6d108e64c8f43f95c0893b67ca7a5bb2bb9904b3
# Dataset Card for Prescription-based prediction ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/roamresearch/prescriptionbasedprediction - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is the dataset used in the Roam blog post [Prescription-based prediction](http://roamanalytics.com/2016/09/13/prescription-based-prediction/). It is derived from a variety of US open health datasets, but the bulk of the data points come from the [Medicare Part D](https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Part-D-Prescriber.html) dataset and the [National Provider Identifier](https://npiregistry.cms.hhs.gov) dataset. The prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset. 
The file is in JSONL format (one JSON record per line): <pre> { 'provider_variables': { 'brand_name_rx_count': int, 'gender': 'M' or 'F', 'generic_rx_count': int, 'region': 'South' or 'MidWest' or 'Northeast' or 'West', 'settlement_type': 'non-urban' or 'urban' 'specialty': str 'years_practicing': int }, 'npi': str, 'cms_prescription_counts': { `drug_name`: int, `drug_name`: int, ... } } </pre> The brand/generic classifications behind `brand_name_rx_count` and `generic_rx_count` are defined heuristically. For more details, see [the blog post](http://roamanalytics.com/2016/09/13/prescription-based-prediction/) or go directly to [the associated code](https://github.com/roaminsight/roamresearch/tree/master/BlogPosts/Prescription_based_prediction). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@roamresearch](https://kaggle.com/roamresearch) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
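Since each line of the file is one JSON record in the schema above, a single doctor can be processed with nothing but the standard library. The sketch below builds one such record with made-up illustrative values (the NPI, drug names, and counts are invented, not taken from the real dataset), parses it as a line of JSONL, and derives two simple quantities from the prescription vector:

```python
import json

# One record in the schema described above. All values are invented
# for illustration; only the field names come from the card.
line = json.dumps({
    "provider_variables": {
        "brand_name_rx_count": 384,
        "gender": "M",
        "generic_rx_count": 2287,
        "region": "South",
        "settlement_type": "urban",
        "specialty": "Nephrology",
        "years_practicing": 7,
    },
    "npi": "1679650949",
    "cms_prescription_counts": {"LISINOPRIL": 112, "FUROSEMIDE": 87},
})

record = json.loads(line)          # in practice: one line of the JSONL file
provider = record["provider_variables"]

# The "prescription vector" is the drug-name -> count mapping.
total_rx = sum(record["cms_prescription_counts"].values())
brand_share = provider["brand_name_rx_count"] / (
    provider["brand_name_rx_count"] + provider["generic_rx_count"]
)

print(provider["specialty"], total_rx, round(brand_share, 3))
```

For the full file, the same logic runs inside `for line in open(path)`; features like `brand_share` are the kind of per-doctor signal the blog post's prediction task builds on.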
nateraw/prescriptionbasedprediction
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-09-08T20:40:40+00:00
{"license": ["cc-by-nc-sa-4.0"], "converted_from": "kaggle", "kaggle_id": "roamresearch/prescriptionbasedprediction"}
2022-09-08T20:40:53+00:00
6bba8e2773773739878a9e5ab1d8e10b8733260f
# Dataset Card for World Happiness Report ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/unsdsn/world-happiness - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Context The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. 
Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness. ### Content The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others. ### Inspiration What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness? **What is Dystopia?** Dystopia is an imaginary country that has the world’s least-happy people. 
The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia. **What are the residuals?** The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale. **What do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?** The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. The Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer. 
If you add all these factors up, you get the happiness score, so it may be unreliable to use them as features to predict Happiness Scores. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@unsdsn](https://kaggle.com/unsdsn) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
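The card's claim that the six factor columns plus the Dystopia Residual add up to the Happiness Score can be checked row by row. The sketch below uses a made-up row; the column names follow the card's description, but the exact CSV headers may differ between report years, so treat them as assumptions:

```python
# Illustrative row with invented values; column names approximate the
# card's description and may not match the real CSV headers exactly.
row = {
    "Happiness Score": 7.469,
    "Economy (GDP per Capita)": 1.616,
    "Family": 1.482,
    "Health (Life Expectancy)": 0.796,
    "Freedom": 0.636,
    "Generosity": 0.355,
    "Trust (Government Corruption)": 0.401,
    "Dystopia Residual": 2.183,
}

factor_cols = [c for c in row if c != "Happiness Score"]
reconstructed = sum(row[c] for c in factor_cols)

# The factor columns (including the Dystopia Residual) should add up
# to the reported score, up to rounding in the published tables.
assert abs(reconstructed - row["Happiness Score"]) < 0.01
```

This is exactly why the card warns against using the factor columns as predictors: the target is (approximately) their sum by construction.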
nateraw/world-happiness
[ "license:cc0-1.0", "region:us" ]
2022-09-08T20:51:07+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "unsdsn/world-happiness"}
2022-09-08T20:51:15+00:00
3394542328c7c3ed2ee0fb6d902fd73ada2492c0
Chuckbets47/CarmE
[ "license:afl-3.0", "region:us" ]
2022-09-09T02:08:16+00:00
{"license": "afl-3.0"}
2022-09-09T02:08:16+00:00
c614e40ca0c9a5b6ba8553754158652a1156f694
stable-diffusion-discord-prompts All messages from dreambot in all dream-[1-50] channels of the stable-diffusion Discord. Source: https://github.com/bartman081523/stable-diffusion-discord-prompts
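Discord exports of this kind are typically one message per line; a minimal sketch of pulling raw prompt strings out of such a dump (the file name and the assumption that bot/command lines start with a marker are both hypothetical, not documented by this dataset):

```python
# Hypothetical cleanup pass over a one-message-per-line prompt dump.
messages = [
    "!dream a castle on a hill, matte painting",   # invented examples
    "a red fox in the snow, 35mm photo",
    "!dream cyberpunk street at night",
]

# Strip an assumed bot-command prefix and drop empty lines.
prompts = [m.removeprefix("!dream").strip() for m in messages if m.strip()]

print(len(prompts), "prompts")
```

The real files may use a different command prefix or include metadata per line; inspect a few lines before applying any such rule.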
neuralworm/stable-diffusion-discord-prompts
[ "region:us" ]
2022-09-09T02:32:22+00:00
{}
2022-09-15T02:52:04+00:00
00d53922bad2faab09916b1b83c6be5bf6bd9e96
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
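Each of these evaluator records carries a `col_mapping` in its metadata (here `{"text": "document", "target": "summary"}`) telling the evaluator which dataset columns hold the input text and the reference summary. A minimal sketch of what applying such a mapping to one record means, with invented field contents:

```python
# Mapping from the task's canonical column names to the dataset's columns,
# as it appears in this record's eval_info metadata.
col_mapping = {"text": "document", "target": "summary"}

# A gov_report-style record (contents invented for illustration).
raw = {
    "document": "Full report text ...",
    "summary": "Short summary ...",
    "id": "GAO-01",
}

# Remap to the column names the summarization task expects;
# columns not named in the mapping are simply ignored.
example = {task_col: raw[ds_col] for task_col, ds_col in col_mapping.items()}

print(example)
```

The same mapping appears verbatim in every `launch/gov_report` evaluation record below; only the model under evaluation changes.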
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116209
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T08:47:59+00:00
0d656ce2d05249f8bc06a3048a577ce1cb9eb4b7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_sumpubmed * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116210
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_sumpubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:44:55+00:00
2554e99bf5d02a551aebe4b0d2fb9276e7ebc8c5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116211
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:07:42+00:00
02b9c6352eba657cc3bade52d89764a539b711f9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116212
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:54:17+00:00
0e900883ed246d6237128ebd68ff98e0e1caf78f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116213
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T17:13:00+00:00
2bf032fc8926b7e424852caef15844242b4888fc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116214
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T18:49:27+00:00
a757cd2381b43a4b03146acdfe34722d8968ba78
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116215
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T04:26:50+00:00
3c6e630b83d5ad560f90b0cee9027ec8f754a59e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_scitldr * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116216
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T04:44:46+00:00
72bd968d199b079c4a66863ab4844def3e05042c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116217
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:13:14+00:00
3ba67d037d51a119698f136ecf0592d88a5ac6e8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126218
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:23:02+00:00
079bc3a029f12d1565725a76f2d83fd93be783a4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126219
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:51:42+00:00
19a6f6c5483163f19b0ddc4e922da5abc3b52e14
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-bigpatent * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126220
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:52+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:50:59+00:00
d74ce7aa783f47c3bb17f0259d7fee1f6a89d0e9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126221
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:51:31+00:00
ab7cb615535d508799f224e10906d556ab4cfcb0
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/bigbird-pegasus-large-K-booksum * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126222
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:18:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T16:51:54+00:00
3e25f0d8068ff5f9a904d9afce7c4a6e9744fe10
# Dataset Card for 100 Richest People In World ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/tarundalal/100-richest-people-in-world - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains the list of Top 100 Richest People in the World Column Information:- - Name - Person Name - NetWorth - His/Her Networth - Age - Person Age - Country - The country person belongs to - Source - Information Source - Industry - Expertise Domain ### Join our Community <a href="https://discord.com/invite/kxZYxdTKp6"> <img src="https://discord.com/api/guilds/939520548726272010/widget.png?style=banner1"></a> ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset 
Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@tarundalal](https://kaggle.com/tarundalal) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/100-richest-people-in-world
[ "license:cc0-1.0", "region:us" ]
2022-09-09T04:10:55+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "tarundalal/100-richest-people-in-world"}
2022-09-09T04:10:59+00:00
168fbd6f0754738d7166d357c6b02790752fc251
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136223
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T11:39:24+00:00
eab963274de7e0edf0109b653d681cd6c6c7008a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_sumpubmed * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136224
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_sumpubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:42:30+00:00
2382ecc2f9a282294489185d349b258db8d0d58c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_scitldr * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136225
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T07:34:26+00:00
eb6c99edc51cb573d18449706847d102403dc990
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136226
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T22:50:32+00:00
5195b0a556f0b34cb4d57881fdb7f75d8d717119
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136227
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:42:27+00:00
7dcd04b3c24b999f3cdfe7648b37253672e9ce85
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136228
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:52+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:20:32+00:00
f8602deb08d2439c83c66316ab0653bb427f758d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11 * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136229
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T21:59:16+00:00
f166e325789a8af88e96df52ae986e9b1b001ef8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136230
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:28:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:03:19+00:00
320a0e9a51c3bbfd7241c69021671c6bce556011
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146231
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:28:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:01:53+00:00
cf37b02033fd20c1aef9f0f23f747dde24ef2064
darnels30/skeld
[ "license:afl-3.0", "region:us" ]
2022-09-09T05:35:24+00:00
{"license": "afl-3.0"}
2022-09-09T05:53:29+00:00
11c3beb3ad0180fe5e34012b25a913f2bea08d6a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-bigpatent * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146232
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:02:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:35:19+00:00
e1be3a1fe4bac74e9cfc091131e267f71c9d3e8c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146233
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:04:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:37:56+00:00
53f9acad369028aa1cc20fd839f32076f85287c4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/bigbird-pegasus-large-K-booksum * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146234
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:35:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:23:07+00:00
0c6e30c26ef7cda27ea3e5100abc8d6c3c71b9ab
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146235
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:38:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:44:04+00:00
21fd72693c7a977f5a13203816c20c528e39b5ac
# Dataset Card for xP3 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) ### Dataset Summary > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3)) - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t> <td>Mixture of 17 tasks in 277 languages with English prompts</td> <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t> <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t> <td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset 
Structure ### Data Instances An example of "train" looks as follows: ```json { "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "targets": "Yes" } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. |Language|Kilobytes|%|Samples|%| |--------|------:|-:|---:|-:| |tw|106288|0.11|265071|0.34| |bm|107056|0.11|265180|0.34| |ak|108096|0.11|265071|0.34| |eu|108112|0.11|269973|0.34| |ca|110608|0.12|271191|0.34| |fon|113072|0.12|265063|0.34| |st|114080|0.12|265063|0.34| |ki|115040|0.12|265180|0.34| |tum|116032|0.12|265063|0.34| |wo|122560|0.13|365063|0.46| |ln|126304|0.13|365060|0.46| |as|156256|0.16|265063|0.34| |or|161472|0.17|265063|0.34| |kn|165456|0.17|265063|0.34| |ml|175040|0.18|265864|0.34| |rn|192992|0.2|318189|0.4| |nso|229712|0.24|915051|1.16| |tn|235536|0.25|915054|1.16| |lg|235936|0.25|915021|1.16| |rw|249360|0.26|915043|1.16| |ts|250256|0.26|915044|1.16| |sn|252496|0.27|865056|1.1| |xh|254672|0.27|915058|1.16| |zu|263712|0.28|915061|1.16| |ny|272128|0.29|915063|1.16| |ig|325232|0.34|950097|1.2| |yo|352784|0.37|918416|1.16| |ne|393680|0.41|315754|0.4| |pa|523248|0.55|339210|0.43| |gu|560688|0.59|347499|0.44| |sw|560896|0.59|1114455|1.41| |mr|666240|0.7|417269|0.53| |bn|832720|0.88|428843|0.54| |ta|924496|0.97|410633|0.52| |te|1332912|1.4|573364|0.73| |ur|1918272|2.02|855756|1.08| |vi|3101408|3.27|1667306|2.11| |code|4330752|4.56|2707724|3.43| 
|hi|4393696|4.63|1543441|1.96| |zh|4589904|4.83|3560556|4.51| |id|4606288|4.85|2627392|3.33| |ar|4677264|4.93|2148955|2.72| |fr|5546688|5.84|5055942|6.41| |pt|6129584|6.46|3562772|4.52| |es|7571808|7.98|5151349|6.53| |en|37261104|39.25|31495184|39.93| |total|94941936|100.0|78883588|100.0| ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - [ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - [MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - 
[PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - [SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) 
- [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - [CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - Word Sense disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) #### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval) - Natural Language Inference (NLI) - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Program Synthesis - [HumanEval](https://huggingface.co/datasets/openai_humaneval) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) ## Additional Information ### Licensing Information The dataset is released under Apache 2.0. 
### Citation Information ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
bigscience/xP3megds
[ "task_categories:other", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100M<n<1B", "language:ak", "language:ar", "language:as", "language:bm", "language:bn", "language:ca", "language:code", "language:en", "language:es", "language:eu", "language:fon", "language:fr", "language:gu", "language:hi", "language:id", "language:ig", "language:ki", "language:kn", "language:lg", "language:ln", "language:ml", "language:mr", "language:ne", "language:nso", "language:ny", "language:or", "language:pa", "language:pt", "language:rn", "language:rw", "language:sn", "language:st", "language:sw", "language:ta", "language:te", "language:tn", "language:ts", "language:tum", "language:tw", "language:ur", "language:vi", "language:wo", "language:xh", "language:yo", "language:zh", "language:zu", "license:apache-2.0", "arxiv:2211.01786", "region:us" ]
2022-09-09T07:15:42+00:00
{"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3", "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"]}
2023-05-30T14:52:11+00:00
1fa69dcaa86f33080b902982c85c381b908c2d64
McClain/Cnn-Article-QA
[ "license:mit", "region:us" ]
2022-09-09T11:05:00+00:00
{"license": "mit"}
2022-09-09T11:05:00+00:00
aa2054ba0acbbb5af2900409225c51ecfc86e440
000hen/captchaCode
[ "license:apache-2.0", "region:us" ]
2022-09-09T11:57:09+00:00
{"license": "apache-2.0"}
2022-09-09T11:57:09+00:00
ecfdb05411aae3326a3949f41b060246facbd12b
CShorten/CORD19-Chunk-1
[ "license:afl-3.0", "region:us" ]
2022-09-09T12:43:56+00:00
{"license": "afl-3.0"}
2022-09-09T14:12:40+00:00
55cecad455f7df12b6c7c1c8c206aacc9f764e3e
# Dataset Card for COVID News Articles (2020 - 2022) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/timmayer/covid-news-articles-2020-2022 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - **title**, **content** and **category**. **title** refers to the headline of the news article. 
**content** refers to the article in itself and **category** denotes the overall context of the news article at a high level. This dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. The dataset can be used to study the difference in behaviors of language models when there is a shift in data. For example, the classic transformer-based BERT model was trained before the COVID era. By training a masked language model (MLM) using this dataset, we can try to differentiate the behaviors of the original BERT model vs. the newly trained models. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@timmayer](https://kaggle.com/timmayer) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
osanseviero/covid_news
[ "license:cc0-1.0", "region:us" ]
2022-09-09T13:52:52+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "timmayer/covid-news-articles-2020-2022"}
2022-09-09T13:53:32+00:00
0d8eeed7e5073b74bcf7e29f6fcb505ba658108f
CShorten/CORD19-Chunk-2
[ "license:afl-3.0", "region:us" ]
2022-09-09T13:55:28+00:00
{"license": "afl-3.0"}
2022-09-09T13:58:11+00:00
96a67cfd72472bfa0d2585cd19f008b01b1fdd30
moonlit78/MoebStyle
[ "license:afl-3.0", "region:us" ]
2022-09-09T14:19:50+00:00
{"license": "afl-3.0"}
2022-09-09T14:19:50+00:00
233147fe574a16a3ef05d3a71163f0b18080f438
# Dataset Card for LibriVox Indonesia 1.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Point of Contact:** [Cahya Wirawan](mailto:[email protected]) ### Dataset Summary The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from public-domain audiobooks on [LibriVox](https://librivox.org/). We collected only languages in Indonesia for this dataset. The duration of the original LibriVox audiobooks or sound files varies from a few minutes to a few hours, while each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. We converted the audiobooks to speech datasets using the forced-alignment software we developed. It supports multiple languages, including low-resource ones such as Acehnese, Balinese, or Minangkabau. 
We can also use it for other languages without additional work to train the model. The dataset currently consists of 8 hours of audio in 7 languages of Indonesia. We will add more languages or audio files as we collect them.

### Languages

```
Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese
```

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `reader` and `language`.

```python
{
  'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
  'language': 'sun',
  'reader': '3174',
  'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
  'audio': {
    'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 44100
  },
}
```

### Data Fields

- `path` (`string`): The path to the audio file.
- `language` (`string`): The language of the audio file.
- `reader` (`string`): The reader ID in LibriVox.
- `sentence` (`string`): The sentence the reader read from the book.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index first, before the `"audio"` column; *i.e.*, `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

### Data Splits

The speech material has only a train split.
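Why the access order matters can be sketched with a small stand-in object that simulates lazy audio decoding; the class below is purely illustrative and is not part of the `datasets` library, but it mirrors the cost difference between row-first and column-first access:

```python
class LazyAudioDataset:
    """Illustrative stand-in: each simulated decode bumps a counter."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_calls = 0

    def _decode(self, path):
        # Pretend to load and resample one MP3 file.
        self.decode_calls += 1
        return {"path": path, "array": [0.0], "sampling_rate": 44100}

    def __getitem__(self, key):
        if isinstance(key, int):  # row access: decode a single file
            return {"path": self.paths[key], "audio": self._decode(self.paths[key])}
        if key == "audio":        # column access: decode every file
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)


ds = LazyAudioDataset([f"clip_{i}.mp3" for i in range(1000)])
_ = ds[0]["audio"]                    # decodes 1 file
calls_after_row_access = ds.decode_calls
_ = ds["audio"][0]                    # decodes all 1000 files first
calls_after_column_access = ds.decode_calls
print(calls_after_row_access, calls_after_column_access)  # 1 1001
```

Row-first access touches one file; column-first access pays for the whole column before the index is applied.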
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
```
cahya/librivox-indonesia
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:librivox", "language:ace", "language:ban", "language:bug", "language:id", "language:min", "language:jav", "language:sun", "license:cc", "region:us" ]
2022-09-09T14:21:18+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ace", "ban", "bug", "id", "min", "jav", "sun"], "license": "cc", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["librivox"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "LibriVox Indonesia 1.0"}
2024-02-01T21:01:52+00:00
1d92f618902feec6176b2642058a633bee91e28b
CShorten/CORD19-init-160k
[ "license:afl-3.0", "region:us" ]
2022-09-09T15:34:16+00:00
{"license": "afl-3.0"}
2022-09-14T13:25:04+00:00
c7a7286370bdbedb08962e147b3b4c0752c8d2c8
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-a8cade-61
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T15:34:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-09T15:35:54+00:00
2e68efee3e15ad4aee700a9b569fc5c2e3b05a45
# Dataset Card for REBEL-Portuguese

## Table of Contents
- [Dataset Card for REBEL-Portuguese](#dataset-card-for-rebel)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel)
- **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)
- **Point of Contact:** [[email protected]]([email protected])

### Dataset Summary

Dataset adapted to Portuguese from the [REBEL-dataset](https://huggingface.co/datasets/Babelscape/rebel-dataset).
### Supported Tasks and Leaderboards

- `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of extracting triplets from raw text, each made up of a subject, an object and a relation type.

### Languages

The dataset is in Portuguese, from the Portuguese Wikipedia.

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

Data comes from Wikipedia text before the table of contents, as well as from Wikidata for the triplet annotations.

#### Initial Data Collection and Normalization

For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile) was used, inspired by the [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset); more details can be found at the [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one. After the triplets are extracted, an NLI system was used to filter out those not entailed by the text.

#### Who are the source language producers?

Any Wikipedia and Wikidata contributor.

### Annotations

#### Annotation process

The dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/ju-resplande/crocodile).

#### Who are the annotators?

Automatic annotations.

### Personal and Sensitive Information

All text is from Wikipedia, so any personal or sensitive information present there may also be present in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Not for now

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information

### Contributions

Thanks to [@ju-resplande](https://github.com/ju-resplade) for adding this dataset.
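For readers unfamiliar with how triplet extraction is typically framed as text generation, a minimal decoder for a linearized-triplet string can be sketched as follows. The special tokens `<triplet>`, `<subj>`, `<obj>`, the helper name, and the sample string are assumptions for illustration only; see the REBEL repository linked above for the reference implementation:

```python
def decode_triplets(text):
    """Parse '<triplet> head <subj> tail <obj> relation' chunks into
    (subject, relation, object) tuples. Malformed chunks are skipped."""
    triplets = []
    for chunk in text.split("<triplet>")[1:]:
        try:
            head, rest = chunk.split("<subj>", 1)
            tail, relation = rest.split("<obj>", 1)
        except ValueError:
            continue  # chunk is missing a separator token
        triplets.append((head.strip(), relation.strip(), tail.strip()))
    return triplets


out = "<triplet> Lisboa <subj> Portugal <obj> país"
print(decode_triplets(out))  # [('Lisboa', 'país', 'Portugal')]
```

A sequence-to-sequence model trained on this dataset would emit strings in some such linearized form, and a decoder like this turns them back into structured triplets.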
ju-resplande/rebel-pt
[ "task_categories:text-retrieval", "task_categories:text2text-generation", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|rebel-dataset", "language:pt", "license:cc-by-nc-sa-4.0", "relation-extraction", "conditional-text-generation", "region:us" ]
2022-09-09T16:09:13+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["pt"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|rebel-dataset"], "task_categories": ["text-retrieval", "text2text-generation"], "task_ids": [], "pretty_name": "rebel-portuguese", "tags": ["relation-extraction", "conditional-text-generation"]}
2022-10-29T11:19:46+00:00