sha            stringlengths  40 to 40
text           stringlengths  0 to 13.4M
id             stringlengths  2 to 117
tags           list
created_at     stringlengths  25 to 25
metadata       stringlengths  2 to 31.7M
last_modified  stringlengths  25 to 25
7b732531620accba4bbedd431b7f8a6100be6d41
Julie1901/pictures
[ "region:us" ]
2022-11-02T11:02:00+00:00
{}
2022-11-02T11:10:16+00:00
9361d38c024c137755d8cefe9be826dc16be4885
# Dataset Card for "audio-test-push" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lewtun/audio-test-push
[ "region:us" ]
2022-11-02T11:36:14+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3994705.0, "num_examples": 10}, {"name": "train", "num_bytes": 3738678.0, "num_examples": 10}], "download_size": 7730848, "dataset_size": 7733383.0}}
2022-11-02T11:36:48+00:00
a5e76a325594cc02dfb1cba47f07c497ab01bf60
# Dataset Card for "muld_OpenSubtitles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ghomasHudson/muld_OpenSubtitles
[ "region:us" ]
2022-11-02T11:55:18+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 176793874, "num_examples": 1385}, {"name": "train", "num_bytes": 1389584660, "num_examples": 27749}], "download_size": 967763941, "dataset_size": 1566378534}}
2022-11-02T11:56:13+00:00
282a412b73478e5e843367c5ece3d3f8660f05b0
# Dataset Card for "muld_AO3_Style_Change_Detection" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ghomasHudson/muld_AO3_Style_Change_Detection
[ "region:us" ]
2022-11-02T12:06:13+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "metadata", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 282915635, "num_examples": 2352}, {"name": "train", "num_bytes": 762370660, "num_examples": 6354}, {"name": "validation", "num_bytes": 83699681, "num_examples": 705}], "download_size": 677671983, "dataset_size": 1128985976}}
2022-11-02T12:06:59+00:00
c50eef2470554f8a1271a921d55aa7dc34420738
# Dataset Card for "processed_multiscale_rt_critics" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
frankier/processed_multiscale_rt_critics
[ "region:us" ]
2022-11-02T12:15:25+00:00
{"dataset_info": {"features": [{"name": "movie_title", "dtype": "string"}, {"name": "publisher_name", "dtype": "string"}, {"name": "critic_name", "dtype": "string"}, {"name": "review_content", "dtype": "string"}, {"name": "review_score", "dtype": "string"}, {"name": "grade_type", "dtype": "string"}, {"name": "orig_num", "dtype": "float32"}, {"name": "orig_denom", "dtype": "float32"}, {"name": "includes_zero", "dtype": "bool"}, {"name": "label", "dtype": "uint8"}, {"name": "scale_points", "dtype": "uint8"}, {"name": "multiplier", "dtype": "uint8"}, {"name": "group_id", "dtype": "uint32"}], "splits": [{"name": "train", "num_bytes": 117244343, "num_examples": 540256}, {"name": "test", "num_bytes": 28517095, "num_examples": 131563}], "download_size": 0, "dataset_size": 145761438}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-10-03T16:16:04+00:00
63b6d26bb53a87c2b8ea9c9428bee6ab7a7532ef
# Dataset Card for "muld_NarrativeQA" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ghomasHudson/muld_NarrativeQA
[ "region:us" ]
2022-11-02T12:17:00+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3435452065, "num_examples": 10143}, {"name": "train", "num_bytes": 11253796383, "num_examples": 32747}, {"name": "validation", "num_bytes": 1176625993, "num_examples": 3373}], "download_size": 8819172017, "dataset_size": 15865874441}}
2022-11-02T12:24:41+00:00
8df0b33afd830cd72656e23c6b1cedec2b285b37
# Dataset Card for GEM/TaTA ## Dataset Description - **Homepage:** https://github.com/google-research/url-nlp - **Repository:** https://github.com/google-research/url-nlp - **Paper:** https://arxiv.org/abs/2211.00142 - **Leaderboard:** https://github.com/google-research/url-nlp - **Point of Contact:** Sebastian Ruder ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/TaTA). ### Dataset Summary Existing data-to-text generation datasets are mostly limited to English. Table-to-Text in African languages (TaTA) addresses this lack of data as the first large multilingual table-to-text dataset with a focus on African languages. TaTA was created by transcribing figures and accompanying text in bilingual reports by the Demographic and Health Surveys Program, followed by professional translation to make the dataset fully parallel. TaTA includes 8,700 examples in nine languages including four African languages (Hausa, Igbo, Swahili, and Yorùbá) and a zero-shot test language (Russian). You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/TaTA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/TaTA). #### website [Github](https://github.com/google-research/url-nlp) #### paper [ArXiv](https://arxiv.org/abs/2211.00142) #### authors Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Github](https://github.com/google-research/url-nlp) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/google-research/url-nlp) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? 
--> <!-- scope: telescope --> [ArXiv](https://arxiv.org/abs/2211.00142) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @misc{gehrmann2022TaTA, Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera}, Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages}, Year = {2022}, Eprint = {arXiv:2211.00142}, } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Sebastian Ruder #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> [email protected] #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> yes #### Leaderboard Link <!-- info: Provide a link to the leaderboard. --> <!-- scope: periscope --> [Github](https://github.com/google-research/url-nlp) #### Leaderboard Details <!-- info: Briefly describe how the leaderboard evaluates models. --> <!-- scope: microscope --> The paper introduces a metric StATA which is trained on human ratings and which is used to rank approaches submitted to the leaderboard. ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> yes #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English`, `Portuguese`, `Arabic`, `French`, `Hausa`, `Swahili (macrolanguage)`, `Igbo`, `Yoruba`, `Russian` #### Whose Language? <!-- info: Whose language is in the dataset? 
--> <!-- scope: periscope --> The language is taken from reports by the Demographic and Health Surveys Program. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The dataset poses significant reasoning challenges and is thus meant as a way to assess the verbalization and reasoning capabilities of structure-to-text models. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Data-to-Text #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Summarize key information from a table in a single sentence. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `industry` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Google Research #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> Google Research #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Sebastian Gehrmann (Google Research) ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. 
--> <!-- scope: telescope --> - `example_id`: The ID of the example. Each ID (e.g., `AB20-ar-1`) consists of three parts: the document ID, the language ISO 639-1 code, and the index of the table within the document. - `title`: The title of the table. - `unit_of_measure`: A description of the numerical value of the data. E.g., percentage of households with clean water. - `chart_type`: The kind of chart associated with the data. We consider the following (normalized) types: horizontal bar chart, map chart, pie graph, tables, line chart, pie chart, vertical chart type, line graph, vertical bar chart, and other. - `was_translated`: Whether the table was transcribed in the original language of the report or translated. - `table_data`: The table content is a JSON-encoded string of a two-dimensional list, organized by row, from left to right, starting from the top of the table. Number of items varies per table. Empty cells are given as empty string values in the corresponding table cell. - `table_text`: The sentences forming the description of each table are encoded as a JSON object. In the case of more than one sentence, these are separated by commas. Number of items varies per table. - `linearized_input`: A single string that contains the table content separated by vertical bars, i.e., |. Including title, unit of measurement, and the content of each cell including row and column headers in between brackets, i.e., (Medium Empowerment, Mali, 17.9). #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The structure includes all available information for the infographics on which the dataset is based. #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> Annotators looked through English text to identify sentences that describe an infographic. They then identified the corresponding location of the parallel non-English document. All sentences were extracted. 
#### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "example_id": "FR346-en-39", "title": "Trends in early childhood mortality rates", "unit_of_measure": "Deaths per 1,000 live births for the 5-year period before the survey", "chart_type": "Line chart", "was_translated": "False", "table_data": "[[\"\", \"Child mortality\", \"Neonatal mortality\", \"Infant mortality\", \"Under-5 mortality\"], [\"1990 JPFHS\", 5, 21, 34, 39], [\"1997 JPFHS\", 6, 19, 29, 34], [\"2002 JPFHS\", 5, 16, 22, 27], [\"2007 JPFHS\", 2, 14, 19, 21], [\"2009 JPFHS\", 5, 15, 23, 28], [\"2012 JPFHS\", 4, 14, 17, 21], [\"2017-18 JPFHS\", 3, 11, 17, 19]]", "table_text": [ "neonatal, infant, child, and under-5 mortality rates for the 5 years preceding each of seven JPFHS surveys (1990 to 2017-18).", "Under-5 mortality declined by half over the period, from 39 to 19 deaths per 1,000 live births.", "The decline in mortality was much greater between the 1990 and 2007 surveys than in the most recent period.", "Between 2012 and 2017-18, under-5 mortality decreased only modestly, from 21 to 19 deaths per 1,000 live births, and infant mortality remained stable at 17 deaths per 1,000 births." 
], "linearized_input": "Trends in early childhood mortality rates | Deaths per 1,000 live births for the 5-year period before the survey | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21) (Infant mortality, 1990 JPFHS, 34) (Under-5 mortality, 1990 JPFHS, 39) (Child mortality, 1997 JPFHS, 6) (Neonatal mortality, 1997 JPFHS, 19) (Infant mortality, 1997 JPFHS, 29) (Under-5 mortality, 1997 JPFHS, 34) (Child mortality, 2002 JPFHS, 5) (Neonatal mortality, 2002 JPFHS, 16) (Infant mortality, 2002 JPFHS, 22) (Under-5 mortality, 2002 JPFHS, 27) (Child mortality, 2007 JPFHS, 2) (Neonatal mortality, 2007 JPFHS, 14) (Infant mortality, 2007 JPFHS, 19) (Under-5 mortality, 2007 JPFHS, 21) (Child mortality, 2009 JPFHS, 5) (Neonatal mortality, 2009 JPFHS, 15) (Infant mortality, 2009 JPFHS, 23) (Under-5 mortality, 2009 JPFHS, 28) (Child mortality, 2012 JPFHS, 4) (Neonatal mortality, 2012 JPFHS, 14) (Infant mortality, 2012 JPFHS, 17) (Under-5 mortality, 2012 JPFHS, 21) (Child mortality, 2017-18 JPFHS, 3) (Neonatal mortality, 2017-18 JPFHS, 11) (Infant mortality, 2017-18 JPFHS, 17) (Under-5 mortality, 2017-18 JPFHS, 19)" } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> - `Train`: Training set, includes examples with 0 or more references. - `Validation`: Validation set, includes examples with 3 or more references. - `Test`: Test set, includes examples with 3 or more references. - `Ru`: Russian zero-shot set. Includes English and Russian examples (Russian is not included in any of the other splits). #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. 
--> <!-- scope: microscope --> The same table across languages is always in the same split, i.e., if table X is in the test split in language A, it will also be in the test split in language B. In addition to filtering examples without transcribed table values, every example of the development and test splits has at least 3 references. From the examples that fulfilled these criteria, 100 tables were sampled for both development and test for a total of 800 examples each. A manual review process excluded a few tables in each set, resulting in a training set of 6,962 tables, a development set of 752 tables, and a test set of 763 tables. #### <!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? --> <!-- scope: microscope --> There are tables without references, without values, and others that are very large. The dataset is distributed as-is, but the paper describes multiple strategies to deal with data issues. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> There is no other multilingual data-to-text dataset that is parallel over languages. Moreover, over 70% of references in the dataset require reasoning and it is thus of very high quality and challenging for models. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> yes #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? 
--> <!-- scope: microscope --> More languages, parallel across languages, grounded in infographics, not centered on Western entities or source documents #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> reasoning, verbalization, content planning ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> The background section of the [paper](https://arxiv.org/abs/2211.00142) provides a list of related datasets. #### Technical Terms <!-- info: Technical terms used in this card and the dataset and their definitions --> <!-- scope: microscope --> - `data-to-text`: Term that refers to NLP tasks in which the input is structured information and the output is natural language. ## Previous Results ### Previous Results #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> `StATA`: A new metric associated with TaTA that is trained on human judgments and which has a much higher correlation with them. #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. 
--> <!-- scope: microscope --> The creators used a human evaluation that measured [attribution](https://arxiv.org/abs/2112.12870) and reasoning capabilities of various models. Based on these ratings, they trained a new metric and showed that existing metrics fail to measure attribution. #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> no ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The curation rationale is to create a multilingual data-to-text dataset that is high-quality and challenging. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The communicative goal is to describe a table in a single sentence. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Single website` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> The language was produced by USAID as part of the Demographic and Health Surveys program (https://dhsprogram.com/). #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> The topics are related to fertility, family planning, maternal and child health, gender, and nutrition. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by crowdworker #### Was Data Filtered? <!-- info: Were text instances selected or filtered? 
--> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> expert created #### Number of Raters <!-- info: What is the number of raters --> <!-- scope: telescope --> 11<n<50 #### Rater Qualifications <!-- info: Describe the qualifications required of an annotator. --> <!-- scope: periscope --> Professional annotator who is a fluent speaker of the respective language #### Raters per Training Example <!-- info: How many annotators saw each training example? --> <!-- scope: periscope --> 0 #### Raters per Test Example <!-- info: How many annotators saw each test example? --> <!-- scope: periscope --> 1 #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> yes #### Which Annotation Service <!-- info: Which annotation services were used? --> <!-- scope: periscope --> `other` #### Annotation Values <!-- info: Purpose and values for each annotation --> <!-- scope: microscope --> The additional annotations are for system outputs and references and serve to develop metrics for this task. #### Any Quality Control? <!-- info: Quality control measures? --> <!-- scope: telescope --> validated by data curators #### Quality Control Details <!-- info: Describe the quality control measures that were taken. --> <!-- scope: microscope --> Ratings were compared to a small (English) expert-curated set of ratings to ensure high agreement. There were additional rounds of training and feedback to annotators to ensure high quality judgments. ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> yes #### Other Consented Downstream Use <!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? 
--> <!-- scope: microscope --> In addition to data-to-text generation, the dataset can be used for translation or multimodal research. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> The DHS program only publishes aggregate survey information and thus no personal information is included. ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> yes #### Details on how Dataset Addresses the Needs <!-- info: Describe how this dataset addresses the needs of underserved communities. --> <!-- scope: microscope --> The dataset focuses on data about African countries, and the languages included in the dataset are spoken in Africa. It aims to improve the representation of African languages in the NLP and NLG communities. ### Discussion of Biases #### Any Documented Social Biases? 
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> no #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> The language producers for this dataset are those employed by the DHS program, which is US-funded. While the data is focused on African countries, there may be implicit Western biases in how the data is presented. ## Considerations for Using the Data ### PII Risks and Liability ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> While tables were transcribed in the available languages, the majority of the tables were published in English as the first language. 
Professional translators were used to translate the data, which makes it plausible that some translationese exists in the data. Moreover, it was unavoidable to collect reference sentences that are only partially entailed by the source tables. #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> The domain of health reports includes potentially sensitive topics relating to reproduction, violence, sickness, and death. Perceived negative values could be used to amplify stereotypes about people from the respective regions or countries. The intended academic use of this dataset is to develop and evaluate models that neutrally report the content of these tables but not use the outputs to make value judgments, and these applications are thus discouraged.
GEM/TaTA
[ "task_categories:table-to-text", "annotations_creators:none", "language_creators:unknown", "multilinguality:yes", "size_categories:unknown", "source_datasets:original", "language:ar", "language:en", "language:fr", "language:ha", "language:ig", "language:pt", "language:ru", "language:sw", "language:yo", "license:cc-by-sa-4.0", "data-to-text", "arxiv:2211.00142", "arxiv:2112.12870", "region:us" ]
2022-11-02T13:21:53+00:00
{"annotations_creators": ["none"], "language_creators": ["unknown"], "language": ["ar", "en", "fr", "ha", "ig", "pt", "ru", "sw", "yo"], "license": "cc-by-sa-4.0", "multilinguality": [true], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["table-to-text"], "task_ids": [], "pretty_name": "TaTA", "tags": ["data-to-text"], "dataset_info": {"features": [{"name": "gem_id", "dtype": "string"}, {"name": "example_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "unit_of_measure", "dtype": "string"}, {"name": "chart_type", "dtype": "string"}, {"name": "was_translated", "dtype": "string"}, {"name": "table_data", "dtype": "string"}, {"name": "linearized_input", "dtype": "string"}, {"name": "table_text", "sequence": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "ru", "num_bytes": 308435, "num_examples": 210}, {"name": "test", "num_bytes": 1691383, "num_examples": 763}, {"name": "train", "num_bytes": 10019272, "num_examples": 6962}, {"name": "validation", "num_bytes": 1598442, "num_examples": 754}], "download_size": 18543506, "dataset_size": 13617532}}
2022-11-03T14:23:59+00:00
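The TaTA card above describes `table_data` as a JSON-encoded two-dimensional list (row by row, empty cells as empty strings) and `linearized_input` as a sequence of (column header, row header, value) triples. A minimal, standard-library-only sketch of decoding a table and rebuilding such triples; the sample string is a shortened version of the card's own example instance, and the triple-building helper is an illustration, not part of the dataset tooling:

```python
import json

# `table_data` is a JSON-encoded 2-D list, organized row by row;
# empty cells appear as empty strings (shortened from the card's example).
table_data = ('[["", "Child mortality", "Neonatal mortality", '
              '"Infant mortality", "Under-5 mortality"], '
              '["1990 JPFHS", 5, 21, 34, 39], '
              '["2017-18 JPFHS", 3, 11, 17, 19]]')

rows = json.loads(table_data)
header, body = rows[0], rows[1:]

# Rebuild (column header, row header, value) triples in the style
# of the `linearized_input` field.
triples = [(header[j], row[0], row[j])
           for row in body
           for j in range(1, len(row))]

print(triples[0])  # ('Child mortality', '1990 JPFHS', 5)
```

The same decoding works on the full splits once the dataset is loaded, since every `table_data` value follows this encoding.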
dfbc45e3c26ef1a03ef6e9e8c5e3d3da3ffc50f9
alfredodeza/world-junior-championships-results
[ "license:mit", "region:us" ]
2022-11-02T15:39:16+00:00
{"license": "mit"}
2022-11-02T15:41:33+00:00
048280e285175987c092a96b6149c032fcecc0c7
# Introduction The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc. ## Corpus of Business Newswire Texts (business) The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used in the CoNLL-2003 shared task. Statistical data on Named Entities occurring in the corpus:
```
       | tokens | phrases
------ | ------ | -------
non NE | 200067 |
PER    |   1921 |     982
ORG    |  20433 |   10533
LOC    |   1501 |    1294
MISC   |   2041 |    1662
```
### Reference > György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy) ## Criminal NE corpus (criminal) The Hungarian National Corpus and its Heti Világgazdaság (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous. There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred; thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense. 
Statistical data on Named Entities occurring in the corpus:
```
       | tag-for-meaning | tag-for-tag
------ | --------------- | -----------
non NE | 200067          |
PER    | 8101            | 8121
ORG    | 8782            | 9480
LOC    | 5049            | 5391
MISC   | 1917            | 854
```
## Metadata
```
dataset_info:
- config_name: business
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 4452207
    num_examples: 9573
  - name: test
    num_bytes: 856798
    num_examples: 1915
  - name: train
    num_bytes: 3171931
    num_examples: 6701
  - name: validation
    num_bytes: 423478
    num_examples: 957
  download_size: 0
  dataset_size: 8904414
- config_name: criminal
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 2807970
    num_examples: 5375
  - name: test
    num_bytes: 520959
    num_examples: 1089
  - name: train
    num_bytes: 1989662
    num_examples: 3760
  - name: validation
    num_bytes: 297349
    num_examples: 526
  download_size: 0
  dataset_size: 5615940
```
ficsort/SzegedNER
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:hu", "hungarian", "szeged", "ner", "region:us" ]
2022-11-02T15:46:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["hu"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "SzegedNER", "tags": ["hungarian", "szeged", "ner"]}
2022-11-02T15:56:22+00:00
64335ac3f9bfae6f6e2b467c6c904820ede01999
# AutoTrain Dataset for project: testtextexists ## Dataset Description This dataset has been automatically processed by AutoTrain for project testtextexists. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "According to the National Soft Drink Association, the annual consumption of soda by the U.S. citizens is 600 cans", "target": 66.0 }, { "text": "Experts say new vaccines are fake!", "target": 50.0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='float32', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 19 | | valid | 18 |
LiveEvil/autotrain-data-testtextexists
[ "language:en", "region:us" ]
2022-11-02T15:54:22+00:00
{"language": ["en"], "task_categories": ["text-scoring"]}
2022-11-03T15:55:01+00:00
c466f287741cdebbe8a01c14f11b0b3a10ba3b36
Meiruofeng/test
[ "region:us" ]
2022-11-02T15:55:20+00:00
{}
2022-11-05T03:28:10+00:00
82e266d8effde67520d50532587b5f000237b50a
# CSAbstruct CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]). It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories. ## Dataset Construction Details CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles. The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form. Therefore, there is more variety in writing styles in CSAbstruct. CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4]. Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`. We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers. Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job. The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions. A confidence score is associated with each instance based on the annotator's initial accuracy and the agreement of all annotators on that instance. We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores. Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task. Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
## Dataset Statistics | Statistic | Avg ± std | |--------------------------|-------------| | Doc length in sentences | 6.7 ± 1.99 | | Sentence length in words | 21.8 ± 10.0 | | Label | % in Dataset | |---------------|--------------| | `BACKGROUND` | 33% | | `METHOD` | 32% | | `RESULT` | 21% | | `OBJECTIVE` | 12% | | `OTHER` | 3% | ## Citation If you use this dataset, please cite the following paper: ``` @inproceedings{Cohan2019EMNLP, title={Pretrained Language Models for Sequential Sentence Classification}, author={Arman Cohan and Iz Beltagy and Daniel King and Bhavana Dalvi and Dan Weld}, year={2019}, booktitle={EMNLP}, } ``` [1]: https://arxiv.org/abs/1909.04054 [2]: https://aclanthology.org/D19-1383 [3]: https://github.com/Franck-Dernoncourt/pubmed-rct [4]: https://aclanthology.org/N18-3011/ [5]: https://www.figure-eight.com/ [6]: https://github.com/allenai/sequential_sentence_classification
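The accuracy-weighted vote aggregation described above can be sketched as follows (a hedged illustration; the vote and accuracy values are invented, and the paper's exact scoring may differ):

```python
from collections import defaultdict

LABELS = {"BACKGROUND", "OBJECTIVE", "METHOD", "RESULT", "OTHER"}

def aggregate(votes):
    """Aggregate one sentence's labels from (label, annotator_accuracy) pairs,
    weighting each vote by that annotator's accuracy on the test questions.
    Returns the winning label and a confidence score (its share of the weight)."""
    weights = defaultdict(float)
    for label, accuracy in votes:
        assert label in LABELS, f"unknown label: {label}"
        weights[label] += accuracy
    winner = max(weights, key=weights.get)
    return winner, weights[winner] / sum(weights.values())

# Two METHOD votes from accurate annotators outweigh one RESULT vote.
label, confidence = aggregate([("METHOD", 0.9), ("METHOD", 0.8), ("RESULT", 0.75)])
print(label, round(confidence, 2))
```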
allenai/csabstruct
[ "license:apache-2.0", "arxiv:1909.04054", "region:us" ]
2022-11-02T17:15:53+00:00
{"license": "apache-2.0"}
2022-11-02T17:54:38+00:00
e3dc6d24c7d76a0c9d1c20b6c838abbc918a36b0
# Dataset Card for COCO-Stuff [![CI](https://github.com/shunk031/huggingface-datasets_cocostuff/actions/workflows/ci.yaml/badge.svg)](https://github.com/shunk031/huggingface-datasets_cocostuff/actions/workflows/ci.yaml) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - Homepage: https://github.com/nightrome/cocostuff - Repository: https://github.com/nightrome/cocostuff - Paper (preprint): https://arxiv.org/abs/1612.03716 - Paper (CVPR2018): https://openaccess.thecvf.com/content_cvpr_2018/html/Caesar_COCO-Stuff_Thing_and_CVPR_2018_paper.html ### Dataset Summary COCO-Stuff is the largest existing dataset with dense stuff and thing annotations. From the paper: > Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). 
While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things. ### Dataset Preprocessing ### Supported Tasks and Leaderboards ### Languages All annotations use English as the primary language. ## Dataset Structure ### Data Instances When loading a specific configuration, users have to specify its name: ```python from datasets import load_dataset load_dataset("shunk031/cocostuff", "stuff-thing") ``` #### stuff-thing An example looks as follows. 
```python { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>, 'image_filename': '000000000009.jpg', 'image_id': '9', 'width': 640, 'height': 480, 'objects': [ { 'object_id': '121', 'x': 0, 'y': 11, 'w': 640, 'h': 469, 'name': 'food-other' }, { 'object_id': '143', 'x': 0, 'y': 0, 'w': 640, 'h': 480, 'name': 'plastic' }, { 'object_id': '165', 'x': 0, 'y': 0, 'w': 319, 'h': 118, 'name': 'table' }, { 'object_id': '183', 'x': 0, 'y': 2, 'w': 631, 'h': 472, 'name': 'unknown-183' } ], 'stuff_map': <PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FCA0222D880>, } ``` #### stuff-only An example looks as follows. ```python { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>, 'image_filename': '000000000009.jpg', 'image_id': '9', 'width': 640, 'height': 480, 'objects': [ { 'object_id': '121', 'x': 0, 'y': 11, 'w': 640, 'h': 469, 'name': 'food-other' }, { 'object_id': '143', 'x': 0, 'y': 0, 'w': 640, 'h': 480, 'name': 'plastic' }, { 'object_id': '165', 'x': 0, 'y': 0, 'w': 319, 'h': 118, 'name': 'table' }, { 'object_id': '183', 'x': 0, 'y': 2, 'w': 631, 'h': 472, 'name': 'unknown-183' } ] } ``` ### Data Fields #### stuff-thing - `image`: A `PIL.Image.Image` object containing the image. - `image_id`: Unique numeric ID of the image. - `image_filename`: File name of the image. - `width`: Image width. - `height`: Image height. - `stuff_map`: A `PIL.Image.Image` object containing the stuff + thing PNG-style annotations. - `objects`: Holds a list of `Object` data classes: - `object_id`: Unique numeric ID of the object. - `x`: x coordinate of bounding box's top left corner. - `y`: y coordinate of bounding box's top left corner. - `w`: Bounding box width. - `h`: Bounding box height. - `name`: object name #### stuff-only - `image`: A `PIL.Image.Image` object containing the image. - `image_id`: Unique numeric ID of the image. - `image_filename`: File name of the image. - `width`: Image width. 
- `height`: Image height. - `objects`: Holds a list of `Object` data classes: - `object_id`: Unique numeric ID of the object. - `x`: x coordinate of bounding box's top left corner. - `y`: y coordinate of bounding box's top left corner. - `w`: Bounding box width. - `h`: Bounding box height. - `name`: object name ### Data Splits | name | train | validation | |-------------|--------:|-----------:| | stuff-thing | 118,280 | 5,000 | | stuff-only | 118,280 | 5,000 | ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? From the paper: > COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. 
Different licenses apply: - COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse) - COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse) - COCO-Stuff annotations & code: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse) ### Citation Information ```bibtex @INPROCEEDINGS{caesar2018cvpr, title={COCO-Stuff: Thing and stuff classes in context}, author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio}, booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on}, organization={IEEE}, year={2018} } ``` ### Contributions Thanks to [@nightrome](https://github.com/nightrome) for publishing the COCO-Stuff dataset.
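As a quick illustration of the bounding-box fields documented above, the (x, y, w, h) boxes can be converted into corner coordinates (a minimal sketch; the sample object is taken from the stuff-thing instance shown earlier):

```python
def bbox_to_corners(obj):
    """Convert the (x, y, w, h) box of an `objects` entry into
    (x_min, y_min, x_max, y_max) corner coordinates."""
    return obj["x"], obj["y"], obj["x"] + obj["w"], obj["y"] + obj["h"]

# Object copied from the stuff-thing sample instance above.
obj = {"object_id": "121", "x": 0, "y": 11, "w": 640, "h": 469, "name": "food-other"}
print(bbox_to_corners(obj))  # (0, 11, 640, 480)
```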
shunk031/cocostuff
[ "language:en", "license:cc-by-4.0", "computer-vision", "object-detection", "ms-coco", "arxiv:1612.03716", "region:us" ]
2022-11-02T17:47:27+00:00
{"language": ["en"], "license": "cc-by-4.0", "tags": ["computer-vision", "object-detection", "ms-coco"], "datasets": ["stuff-thing", "stuff-only"], "metrics": ["accuracy", "iou"]}
2022-12-09T04:29:27+00:00
6b103e4b7fd9abf2d1aa6af0a2aa5ce8536af705
Vanimal0221/VaanceFace
[ "license:artistic-2.0", "region:us" ]
2022-11-02T19:10:02+00:00
{"license": "artistic-2.0"}
2022-11-02T19:17:09+00:00
88835bf225b88600767b73618ad4f6aa7ea4d77d
# Sciamano Artist Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"drawn by sciamano"``` If it is too strong, just add [] around it. Trained until 14000 steps Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/xlHVUJ4.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/Nsqdc5Q.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/Av4NTd8.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/ctVCTiY.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/kO6IE4S.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/sciamano
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-02T21:06:12+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-02T21:15:27+00:00
768e7ebca5725cd852f4579d170a8726b061619d
# John Kafka Artist Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"drawn by john_kafka"``` If it is too strong, just add [] around it. Trained until 6000 steps Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/aCnC1zv.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/FdBuWbG.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/1rkuXkZ.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/5N9Wp7q.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/v2AkXjU.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/john_kafka
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-02T21:23:38+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-02T21:25:38+00:00
f480d9dfb53d9f3a663001496e929c9184cbeeea
# Shatter Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"drawn by shatter_style"``` If it is too strong, just add [] around it. Trained until 6000 steps Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/ebXN3C2.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/7zUtEDQ.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/uEuKyBP.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/qRJ5o3E.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/FybZxbO.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/shatter_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-02T21:26:24+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-02T21:30:48+00:00
b62839591f22b070148a84e852aea9183a01778c
connorhoehn/card_display_v1
[ "language:en", "region:us" ]
2022-11-03T01:32:12+00:00
{"language": ["en"], "dataset_info": [{"config_name": "card-detection", "features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "list": [{"name": "category_id", "dtype": {"class_label": {"names": {"0": "boxed", "1": "grid", "2": "spread", "3": "stack"}}}}, {"name": "image_id", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "iscrowd", "dtype": "bool"}]}], "splits": [{"name": "train"}], "download_size": 96890427, "dataset_size": 0}, {"config_name": "display-detection", "features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "list": [{"name": "category_id", "dtype": {"class_label": {"names": {"0": "boxed", "1": "grid", "2": "spread", "3": "stack"}}}}, {"name": "image_id", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "iscrowd", "dtype": "bool"}]}], "splits": [{"name": "train", "num_bytes": 42942, "num_examples": 154}], "download_size": 96967919, "dataset_size": 42942}]}
2022-11-03T02:21:11+00:00
a4d0d1862c7cb8176bcdf098ee2b11705dcb6800
liyongsea/PTB-XL
[ "license:other", "region:us" ]
2022-11-03T02:56:01+00:00
{"license": "other"}
2022-11-03T15:57:19+00:00
48df4de700a2757b6122b4b3633aeb5c36120473
sabita9/mauricio-macri-2
[ "license:mit", "region:us" ]
2022-11-03T03:29:55+00:00
{"license": "mit"}
2022-11-03T03:33:23+00:00
85a486545ea37fc9f2326e171ca42d32fcccf89a
This is the dataset! Not the .ckpt trained model - the model is located here: https://huggingface.co/0xJustin/Dungeons-and-Diffusion/tree/main The newest version has manually captioned races and classes, and the model is trained with EveryDream. 30 images each of: aarakocra, aasimar, air_genasi, centaur, dragonborn, drow, dwarf, earth_genasi, elf, firbolg, fire_genasi, gith, gnome, goblin, goliath, halfling, human, illithid, kenku, kobold, lizardfolk, minotaur, orc, tabaxi, thrikreen, tiefling, tortle, warforged, water_genasi The original dataset includes ~2500 images of fantasy RPG character art. This dataset has a distribution of races and classes, though only races are annotated right now. Additionally, BLIP captions were generated for all examples. Thus, there are two datasets: one with the human-generated race annotation formatted as 'D&D Character, {race}', and one with BLIP captions formatted as 'D&D Character, {race} {caption}' - for example: 'D&D Character, drow a woman with horns and horns' Distribution of races: ({'kenku': 31, 'drow': 162, 'tiefling': 285, 'dwarf': 116, 'dragonborn': 110, 'gnome': 72, 'orc': 184, 'aasimar': 74, 'kobold': 61, 'aarakocra': 24, 'tabaxi': 123, 'genasi': 126, 'human': 652, 'elf': 190, 'goblin': 80, 'halfling': 52, 'centaur': 22, 'firbolg': 76, 'goliath': 35}) There is a high chance some images are mislabelled! Please feel free to enrich this dataset with whatever attributes you think might be useful!
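The two caption formats described above can be reproduced with a couple of small helpers (a minimal sketch; the sample caption is the one quoted in this card):

```python
def race_caption(race):
    """Human-annotated variant: 'D&D Character, {race}'."""
    return f"D&D Character, {race}"

def blip_caption(race, caption):
    """BLIP-captioned variant: 'D&D Character, {race} {caption}'."""
    return f"D&D Character, {race} {caption}"

print(race_caption("drow"))
print(blip_caption("drow", "a woman with horns and horns"))
# D&D Character, drow a woman with horns and horns
```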
0xJustin/Dungeons-and-Diffusion
[ "region:us" ]
2022-11-03T06:04:27+00:00
{}
2023-05-19T17:26:58+00:00
b66f8130f392e1d994cd96d646ac3a27ae93bdec
J3H0X77K/CHAMOX
[ "license:afl-3.0", "region:us" ]
2022-11-03T06:50:00+00:00
{"license": "afl-3.0"}
2022-11-03T06:50:53+00:00
065fc8ac2f9921f39cd03a5003377589a48293ee
nev/worm-activity-data
[ "license:cc-by-4.0", "region:us" ]
2022-11-03T09:00:56+00:00
{"license": "cc-by-4.0"}
2022-11-03T09:02:28+00:00
ab4b90142da320df49a31aaa9fa8df1df67d123f
# Dataset Card for "music_genres_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lewtun/music_genres_small
[ "region:us" ]
2022-11-03T13:36:11+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "song_id", "dtype": "int64"}, {"name": "genre_id", "dtype": "int64"}, {"name": "genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 392427659.9527852, "num_examples": 1000}], "download_size": 390675126, "dataset_size": 392427659.9527852}}
2022-11-03T13:36:49+00:00
17e87976452beb6cd28dd83ee3b98604fca98632
# Dataset Card for "amazon-shoe-reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Markmus/amazon-shoe-reviews
[ "region:us" ]
2022-11-03T13:41:22+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}, {"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}], "download_size": 10939033, "dataset_size": 18719628.0}}
2022-11-03T13:41:50+00:00
c0e1f6c4ab0b7ec8268e9eed39185c002df10344
# Dataset Card for "amazon-shoe-reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Matthaios/amazon-shoe-reviews
[ "region:us" ]
2022-11-03T13:43:26+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}, {"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}], "download_size": 10939031, "dataset_size": 18719628.0}}
2022-11-03T13:43:56+00:00
490bc8a946289d68fe7c628afa5c36b52ca8f9e3
# PyCoder This repository contains the dataset for the paper [Syntax-Aware On-the-Fly Code Completion](https://arxiv.org/abs/2211.04673). The sample code to run the model can be found in the directory "`assets/notebooks/inference.ipynb`" in our GitHub: https://github.com/awsm-research/pycoder. PyCoder is an auto code completion model which leverages a Multi-Task Training technique (MTT) to cooperatively learn the code prediction task and the type prediction task. For the type prediction task, we propose to leverage the standard Python token type information (e.g., String, Number, Name, Keyword), which is readily available and lightweight, instead of using the AST information, which requires source code to be parsable for extraction, limiting its ability to perform on-the-fly code completion (see Section 2.3 in our paper). More information can be found in our paper. If you use our code or PyCoder, please cite our paper. <pre><code>@article{takerngsaksiri2022syntax, title={Syntax-Aware On-the-Fly Code Completion}, author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang}, journal={arXiv preprint arXiv:2211.04673}, year={2022} }</code></pre>
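The token-type signal mentioned above is cheap to obtain from Python's standard tokenizer. A hedged sketch (this is not the paper's implementation; note the stdlib reports keywords as NAME tokens, so a faithful reproduction might additionally check `keyword.iskeyword`):

```python
import io
import tokenize

def token_types(code):
    """Return (token_string, token_type_name) pairs for a code snippet using
    only the standard library -- the kind of lightweight type information
    (Name, Number, String, operators) PyCoder's auxiliary task relies on."""
    tokens = tokenize.generate_tokens(io.StringIO(code).readline)
    keep = (tokenize.NAME, tokenize.NUMBER, tokenize.STRING, tokenize.OP)
    return [(t.string, tokenize.tok_name[t.type]) for t in tokens if t.type in keep]

print(token_types("x = 42"))  # [('x', 'NAME'), ('=', 'OP'), ('42', 'NUMBER')]
```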
Wannita/PyCoder
[ "task_categories:text-generation", "license:mit", "code", "arxiv:2211.04673", "region:us" ]
2022-11-03T13:45:53+00:00
{"license": "mit", "task_categories": ["text-generation"], "datasets": ["Wannita/PyCoder"], "metrics": ["accuracy", "bleu", "meteor", "exact_match", "rouge"], "library_name": "transformers", "pipeline_tag": "text-generation", "tags": ["code"]}
2023-03-29T14:52:53+00:00
7af12b091affeb6e55d0f4871dc98af83fabe28b
--- # Dataset Card for KAMEL: Knowledge Analysis with Multitoken Entities in Language Models ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/JanKalo/KAMEL - **Repository:** https://github.com/JanKalo/KAMEL - **Paper:** @inproceedings{kalo2022kamel, title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models}, author={Kalo, Jan-Christoph and Fichtel, Leandra}, booktitle={Automated Knowledge Base Construction}, year={2022} } ### Dataset Summary This dataset provides the data for KAMEL, a probing dataset for language models that contains factual knowledge from Wikidata and Wikipedia. See the paper for more details. 
For more information, also see: https://github.com/JanKalo/KAMEL ### Languages en ## Dataset Structure ### Data Instances ### Data Fields KAMEL has the following fields: * index: the id * sub_label: a label for the subject * obj_uri: Wikidata uri for the object * obj_labels: multiple labels for the object * chosen_label: the preferred label * rel_uri: Wikidata uri for the relation * rel_label: a label for the relation ### Data Splits The dataset is split into a training, validation, and test dataset. It contains 234 Wikidata relations. For each relation there exist 200 training, 100 validation, and 100 test instances. ## Dataset Creation ### Curation Rationale This dataset was gathered and created to explore what knowledge graph facts are memorized by large language models. ### Source Data #### Initial Data Collection and Normalization See the research paper and website for more detail. The dataset was created from Wikidata and Wikipedia. ### Annotations #### Annotation process There is no human annotation, but only automatic linking from Wikidata facts to Wikipedia articles. The details about the process can be found in the paper. #### Who are the annotators? Machine Annotations ### Personal and Sensitive Information Unknown, but likely information about famous people mentioned in the English Wikipedia. ## Considerations for Using the Data ### Social Impact of Dataset The goal of the work is to probe the understanding of language models. ### Discussion of Biases Since the data is created from Wikipedia and Wikidata, the existing biases from these two data sources may also be reflected in KAMEL. ## Additional Information ### Dataset Curators The authors of KAMEL at Vrije Universiteit Amsterdam and Technische Universität Braunschweig. ### Licensing Information The Creative Commons Attribution-Noncommercial 4.0 International License. 
see https://github.com/facebookresearch/LAMA/blob/master/LICENSE ### Citation Information @inproceedings{kalo2022kamel, title={KAMEL: Knowledge Analysis with Multitoken Entities in Language Models}, author={Kalo, Jan-Christoph and Fichtel, Leandra}, booktitle={Automated Knowledge Base Construction}, year={2022} }
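For illustration, the fields listed above can be turned into a simple probing prompt (a sketch with invented sample values; KAMEL's actual prompt templates may differ):

```python
def build_probe(example):
    """Build a cloze-style probe string and its gold answers from a KAMEL
    instance. Field names follow the 'Data Fields' section; the values in
    the sample below are invented for illustration."""
    prompt = f"{example['sub_label']} {example['rel_label']}:"
    return prompt, example["obj_labels"]

example = {
    "sub_label": "Douglas Adams",
    "rel_label": "educated at",  # hypothetical relation label
    "obj_labels": ["St John's College", "St John's College, Cambridge"],
    "chosen_label": "St John's College",
}
prompt, answers = build_probe(example)
print(prompt)  # Douglas Adams educated at:
```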
LeandraFichtel/KAMEL
[ "region:us" ]
2022-11-03T14:00:02+00:00
{}
2022-11-03T16:39:49+00:00
dfa2ec4ee00fcd57232b5edaa3e37a5ab1c0985e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ce107](https://huggingface.co/ce107) for evaluating this model.
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-fc121d-1975865996
[ "autotrain", "evaluation", "region:us" ]
2022-11-03T14:10:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-03T14:11:13+00:00
4162853a87a970f96bdb689dcdc35732d8aaa854
# Dataset accompanying the "Probing neural language models for understanding of words of estimative probability" article This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP, also called verbal probabilities), e.g. words like "probably", "maybe", "surely", "impossible". We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning) and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can detect the WEP matching human-annotated probabilities according to [Fagen-Ulmschneider, 2018](https://github.com/wadefagen/datasets/tree/master/Perception-of-Probability-Words). The dataset can be used as natural language inference data (context, premise, label) or multiple choice question answering (context, valid_hypothesis, invalid_hypothesis). Code : [colab](https://colab.research.google.com/drive/10ILEWY2-J6Q1hT97cCB3eoHJwGSflKHp?usp=sharing) # Citation https://arxiv.org/abs/2211.03358 ```bib @inproceedings{sileo-moens-2023-probing, title = "Probing neural language models for understanding of words of estimative probability", author = "Sileo, Damien and Moens, Marie-francine", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.41", doi = "10.18653/v1/2023.starsem-1.41", pages = "469--476", } ```
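The two views of the data mentioned above (NLI triples vs. multiple choice) can be converted into one another; a minimal sketch with invented toy sentences:

```python
def to_multiple_choice(context, hypotheses_with_labels):
    """Pair a valid (label 1) and an invalid (label 0) hypothesis for the same
    context into one multiple-choice item, matching the card's second format."""
    valid = [h for h, y in hypotheses_with_labels if y == 1]
    invalid = [h for h, y in hypotheses_with_labels if y == 0]
    return {"context": context,
            "valid_hypothesis": valid[0],
            "invalid_hypothesis": invalid[0]}

# Toy WEP example (invented, not from the dataset).
item = to_multiple_choice(
    "There is a 90% chance the train is late.",
    [("The train is probably late.", 1),
     ("The train is certainly on time.", 0)])
print(item["valid_hypothesis"])
```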
sileod/probability_words_nli
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:multiple-choice-qa", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "wep", "words of estimative probability", "probability", "logical reasoning", "soft logic", "nli", "verbal probabilities", "natural-language-inference", "reasoning", "logic", "arxiv:2211.03358", "region:us" ]
2022-11-03T14:21:14+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "multiple-choice", "question-answering"], "task_ids": ["open-domain-qa", "multiple-choice-qa", "natural-language-inference", "multi-input-text-classification"], "pretty_name": "probability_words_nli", "paperwithcoode_id": "probability-words-nli", "tags": ["wep", "words of estimative probability", "probability", "logical reasoning", "soft logic", "nli", "verbal probabilities", "natural-language-inference", "reasoning", "logic"], "train-eval-index": [{"config": "usnli", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}, {"config": "reasoning-1hop", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}, {"config": "reasoning-2hop", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "context", "sentence2": "hypothesis", "label": "label"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 binary"}]}]}
2023-09-06T13:56:43+00:00
772d7f4015382026d97b6c8a2e477a8a3f1fbbc6
# Dataset Card for "my_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
popaqy/my_dataset
[ "region:us" ]
2022-11-03T14:27:51+00:00
{"dataset_info": {"features": [{"name": "bg", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "bg_wrong", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1792707, "num_examples": 3442}], "download_size": 908032, "dataset_size": 1792707}}
2022-11-03T14:27:55+00:00
166086fcbdca991f9f39b9b20bb2157c0d29304e
cannlytics/cannabis_strains
[ "license:cc-by-4.0", "region:us" ]
2022-11-03T15:03:08+00:00
{"license": "cc-by-4.0"}
2022-11-03T15:03:08+00:00
a340e6425ffe90c222de7847a260d140bdb42fde
LiveEvil/Teshjsdf
[ "license:mit", "region:us" ]
2022-11-03T16:16:47+00:00
{"license": "mit"}
2022-11-03T16:16:47+00:00
2ac5bf4dc855aacdfc4ec1bdf9691d721207c3a6
# Dataset Card for Polish ASR BIGOS corpora ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/datasets/michaljunczyk/pl-asr-bigos - **Repository:** https://github.com/goodmike31/pl-asr-bigos-tools - **Paper:** https://annals-csis.org/proceedings/2023/drp/1609.html - **Leaderboard:** https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark - **Point of Contact:** [email protected] ### Dataset Summary The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aim to simplify access to and use of publicly available ASR speech datasets for Polish.<br> The initial release consists of a test split with 1900 recordings and original transcriptions extracted from 10 publicly available datasets. 
### Supported Tasks and Leaderboards The leaderboard with a benchmark of publicly available ASR systems supporting Polish is [under construction](https://huggingface.co/spaces/michaljunczyk/pl-asr-bigos-benchmark/).<br> Evaluation results for 3 commercial and 5 freely available systems can be found in the [paper](https://annals-csis.org/proceedings/2023/drp/1609.html). ### Languages Polish ## Dataset Structure The dataset consists of audio recordings in WAV format and corresponding metadata.<br> Audio and metadata can be used in raw format (TSV) or via the Hugging Face `datasets` library. ### Data Instances 1900 audio files with original transcriptions are available in the "test" split.<br> This constitutes 1.6% of the total available transcribed speech in the 10 source datasets considered in the initial release. ### Data Fields Available fields: * file_id - file identifier * dataset_id - source dataset identifier * audio - binary representation of audio file * ref_original - original transcription of audio file * hyp_whisper_cloud - ASR hypothesis (output) from Whisper Cloud system * hyp_google_default - ASR hypothesis (output) from Google ASR system, default model * hyp_azure_default - ASR hypothesis (output) from Azure ASR system, default model * hyp_whisper_tiny - ASR hypothesis (output) from Whisper tiny model * hyp_whisper_base - ASR hypothesis (output) from Whisper base model * hyp_whisper_small - ASR hypothesis (output) from Whisper small model * hyp_whisper_medium - ASR hypothesis (output) from Whisper medium model * hyp_whisper_large - ASR hypothesis (output) from Whisper large (V2) model <br><br> Fields to be added in the next release: * ref_spoken - manual transcription in a spoken format (without normalization) * ref_written - manual transcription in a written format (with normalization) ### Data Splits The initial release contains only the "test" split.<br> "Dev" and "train" splits will be added in the next release. 
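Since each record pairs the original transcription (`ref_original`) with several ASR hypotheses (`hyp_*` fields), a word error rate can be computed per system; a minimal pure-Python sketch of WER via word-level Levenshtein distance (not the benchmark's official scoring code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over whitespace tokens / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution out of three reference words -> WER of 1/3
score = wer("ala ma kota", "ala ma psa")
```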
## Dataset Creation ### Curation Rationale The [Polish ASR Speech Data Catalog](https://github.com/goodmike31/pl-asr-speech-data-survey) was used to identify suitable datasets which could be repurposed and included in the BIGOS corpora.<br> The following mandatory criteria were considered: * Dataset must be downloadable. * The license must allow for free, noncommercial use. * Transcriptions must be available and align with the recordings. * The sampling rate of audio recordings must be at least 8 kHz. * Audio encoding using a minimum of 16 bits per sample. ### Source Data 10 datasets that meet the criteria were chosen as sources for the BIGOS dataset. * The Common Voice dataset (mozilla-common-voice-19) * The Multilingual LibriSpeech (MLS) dataset (fair-mls-20) * The Clarin Studio Corpus (clarin-pjatk-studio-15) * The Clarin Mobile Corpus (clarin-pjatk-mobile-15) * The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info [here](https://www.ii.pwr.edu.pl/) * The Munich-AI Labs Speech corpus (mailabs-19) * The AZON Read and Spontaneous Speech Corpora (pwr-azon-spont-20, pwr-azon-read-20) More info [here](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy) #### Initial Data Collection and Normalization Source text and audio files were extracted and encoded in a unified format.<br> Dataset-specific transcription norms are preserved, including punctuation and casing. <br> To strike a balance in the evaluation dataset and to facilitate the comparison of Word Error Rate (WER) scores across multiple datasets, 200 samples are randomly selected from each corpus. <br> The only exception is 'pwr-azon-spont-20', which contains significantly longer recordings and utterances; therefore only 100 samples are selected. <br> #### Who are the source language producers? 1. Clarin corpora - Polish-Japanese Academy of Information Technology 2. 
Common Voice - Mozilla Foundation 3. Multilingual LibriSpeech - Facebook AI Research 4. Jerzy Sas and AZON datasets - Politechnika Wrocławska Please refer to the [paper](https://www.researchgate.net/publication/374713542_BIGOS_-_Benchmark_Intended_Grouping_of_Open_Speech_Corpora_for_Polish_Automatic_Speech_Recognition) for more details. ### Annotations #### Annotation process The current release contains original transcriptions. Manual transcriptions are planned for subsequent releases. #### Who are the annotators? Depends on the source dataset. ### Personal and Sensitive Information This corpus does not contain PII or sensitive information. All speaker IDs are anonymized. ## Considerations for Using the Data ### Social Impact of Dataset To be updated. ### Discussion of Biases To be updated. ### Other Known Limitations The dataset in the initial release contains only a subset of recordings from the original datasets. ## Additional Information ### Dataset Curators Original authors of the source datasets - please refer to [source-data](#source-data) for details. Michał Junczyk ([email protected]) - curator of BIGOS corpora. ### Licensing Information The BIGOS corpora are available under the [Creative Commons Attribution-ShareAlike 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/). Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. 
Below are links to the license terms and the datasets each license applies to: * [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0), which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) * [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/). * [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs) * [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14) * [Proprietary License of the Munich AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset) * Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/) ### Citation Information Please cite the [BIGOS V1 paper](https://annals-csis.org/proceedings/2023/drp/1609.html). ### Contributions Thanks to [@goodmike31](https://github.com/goodmike31) for adding this dataset.
michaljunczyk/pl-asr-bigos
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:other", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:extended|librispeech_asr", "source_datasets:extended|common_voice", "language:pl", "license:cc-by-sa-4.0", "benchmark", "polish", "asr", "speech", "doi:10.57967/hf/1068", "region:us" ]
2022-11-03T16:38:50+00:00
{"annotations_creators": ["crowdsourced", "expert-generated", "other", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated", "other"], "language": ["pl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "pl-asr-bigos", "tags": ["benchmark", "polish", "asr", "speech"], "extra_gated_prompt": "Original datasets used for curation of BIGOS have specific terms of usage that must be understood and agreed to before use. Below are the links to the license terms and datasets the specific license type applies to:\n* [Creative Commons 0](https://creativecommons.org/share-your-work/public-domain/cc0) which applies to [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0)\n* [Creative Commons By Attribution Share Alike 4.0](https://creativecommons.org/licenses/by-sa/4.0/), which applies to [Clarin Cyfry](https://clarin-pl.eu/dspace/handle/11321/317), [Azon acoustic speech resources corpus](https://zasobynauki.pl/zasoby/korpus-nagran-probek-mowy-do-celow-budowy-modeli-akustycznych-dla-automatycznego-rozpoznawania-mowy,53293/).\n* [Creative Commons By Attribution 3.0](https://creativecommons.org/licenses/by/3.0/), which applies to [CLARIN Mobile database](https://clarin-pl.eu/dspace/handle/11321/237), [CLARIN Studio database](https://clarin-pl.eu/dspace/handle/11321/236), [PELCRA Spelling and Numbers Voice Database](http://pelcra.pl/new/snuv) and [FLEURS dataset](https://huggingface.co/datasets/google/fleurs)\n* [Creative Commons By Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), which applies to [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) and [Poly AI Minds 14](https://huggingface.co/datasets/PolyAI/minds14)\n* [Proprietiary License of Munich 
AI Labs dataset](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset)\n* Public domain mark, which applies to [PWR datasets](https://www.ii.pwr.edu.pl/~sas/ASR/)\nTo use selected dataset, you also need to fill in the access forms on the specific datasets pages:\n* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0", "extra_gated_fields": {"I hereby confirm that I have read and accepted the license terms of datasets comprising BIGOS corpora": "checkbox", "I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox"}}
2024-01-08T17:14:38+00:00
b4363edee8385f3f55e970105aed33797f6babeb
ccao/monkey
[ "license:bsd", "doi:10.57967/hf/0319", "region:us" ]
2022-11-03T18:59:39+00:00
{"license": "bsd"}
2023-02-02T03:13:20+00:00
548191053344a231c016a74927e87fae9fef786d
# Dataset Card for DocEE Dataset ## Dataset Description - **Homepage:** - **Repository:** [DocEE Dataset repository](https://github.com/tongmeihan1995/docee) - **Paper:** [DocEE: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction](https://aclanthology.org/2022.naacl-main.291/) ### Dataset Summary DocEE is an English-language dataset containing more than 27k news and Wikipedia articles. The dataset was annotated and collected primarily for large-scale document-level event extraction. ### Data Fields - `title`: TODO - `text`: TODO - `event_type`: TODO - `date`: TODO - `metadata`: TODO **Note: this repo contains only the event detection portion of the dataset.** ### Data Splits The dataset has 2 splits: _train_ and _test_. The train split contains 21,949 documents, while the test split contains 5,536 documents. In total, the dataset contains 27,485 documents classified into 59 event types. #### Differences from the original split(s) Originally, the dataset was split into three splits: train, validation and test. For the purposes of this repository, the original splits were joined back together and divided into train and test splits while making sure that the splits were stratified across document sources (news and wiki) and event types. Originally, the `title` column additionally contained information from the `date` and `metadata` columns. They are now separated into three columns: `date`, `metadata` and `title`.
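A stratified re-split like the one described above can be approximated with a few lines of standard-library Python; this is a hedged sketch (grouping keys and fractions assumed), not the exact procedure used for this repository:

```python
import random
from collections import defaultdict

def stratified_split(examples, key=lambda ex: (ex["source"], ex["event_type"]),
                     test_fraction=0.2, seed=42):
    """Split examples into train/test while keeping each (source, event_type)
    stratum represented in the same proportion in both splits."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for ex in examples:
        strata[key(ex)].append(ex)
    train, test = [], []
    for group in strata.values():
        rng.shuffle(group)
        n_test = round(len(group) * test_fraction)
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# Toy corpus: three strata of 10 documents each
data = [{"source": s, "event_type": t, "i": i}
        for i, (s, t) in enumerate([("news", "Earthquake"),
                                    ("wiki", "Earthquake"),
                                    ("news", "Election")] * 10)]
train, test = stratified_split(data)
```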
fkdosilovic/docee-event-classification
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "wiki", "news", "event-detection", "region:us" ]
2022-11-03T20:30:39+00:00
{"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "DocEE", "tags": ["wiki", "news", "event-detection"]}
2022-11-03T21:39:31+00:00
8b48d820c4bc9f34966fb2ee24f3adb783d20d88
# Dataset Card for Beeple Everyday Dataset used to train [beeple-diffusion](https://huggingface.co/riccardogiorato/beeple-diffusion). The original images were obtained from [twitter.com/beeple](https://twitter.com/beeple/media). ## Citation If you use this dataset, please cite it as: ``` @misc{gioratobeeple-everyday, author = {Riccardo, Giorato}, title = {Beeple Everyday}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/riccardogiorato/beeple-everyday/}} } ```
riccardogiorato/beeple-everyday
[ "license:creativeml-openrail-m", "region:us" ]
2022-11-03T21:03:32+00:00
{"license": "creativeml-openrail-m"}
2022-11-03T21:12:57+00:00
611f9f86637b91ddaa36a0ad60d7ebea0ab73ccf
LiveEvil/RealSrry
[ "license:other", "region:us" ]
2022-11-03T21:41:04+00:00
{"license": "other"}
2022-11-03T21:41:04+00:00
d30422b378c7138835536625cd37dca0b29572ff
LiveEvil/RealTrain
[ "license:mit", "region:us" ]
2022-11-03T21:44:33+00:00
{"license": "mit"}
2022-11-03T21:45:06+00:00
b3187f53037e244e39c29606e357bdd411b46801
# Dataset Card for "dtic_sent" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stauntonjr/dtic_sent
[ "region:us" ]
2022-11-03T22:30:39+00:00
{"dataset_info": {"features": [{"name": "Accession Number", "dtype": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Descriptive Note", "dtype": "string"}, {"name": "Corporate Author", "dtype": "string"}, {"name": "Personal Author(s)", "sequence": "string"}, {"name": "Report Date", "dtype": "string"}, {"name": "Pagination or Media Count", "dtype": "string"}, {"name": "Descriptors", "sequence": "string"}, {"name": "Subject Categories", "dtype": "string"}, {"name": "Distribution Statement", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}, {"name": "cleantext", "dtype": "string"}, {"name": "sents", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 6951041151, "num_examples": 27425}], "download_size": 3712549813, "dataset_size": 6951041151}}
2022-11-03T23:37:08+00:00
ac0a9507326eaf1752d6209cec2b6b46d8113cbd
# Dataset Card for QA-Portuguese ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Portuguese preprocessed split from [MQA dataset](https://huggingface.co/datasets/clips/mqa). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is Portuguese. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
ju-resplande/qa-pt
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|mqa", "language:pt", "license:cc0-1.0", "region:us" ]
2022-11-03T22:57:12+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["pt"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|mqa"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "pretty_name": "qa-portuguese"}
2022-11-25T20:31:56+00:00
4acd51b06d689bf2d0cb95dce6b552909584e8ba
# Nixeu Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"drawn by nixeu_style"``` Use the embedding with one of [SirVeggie's](https://huggingface.co/SirVeggie) Nixeu or Wlop models for best results If it is too strong, just add [] around it. Trained until 8400 steps Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/5Rg6a3N.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/oWqYTHL.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/45GFoZf.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/NU8Rc4z.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/Yvl836l.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/nixeu_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-03T23:29:09+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-03T23:36:01+00:00
563663e3d9cd595fc13750738c733d347117c796
mariopeng/openwebIPA
[ "license:openrail", "region:us" ]
2022-11-03T23:49:03+00:00
{"license": "openrail"}
2022-11-03T23:54:07+00:00
f4a05a82646d34a07f7a830e02a6eca0cc112e7f
camilacorreamelo/camilacorreamelo
[ "region:us" ]
2022-11-04T00:10:00+00:00
{}
2022-11-05T15:49:25+00:00
cec8cd7af9b951972b470c917802172d0398b1a7
dalow24/testing
[ "license:afl-3.0", "region:us" ]
2022-11-04T01:25:21+00:00
{"license": "afl-3.0"}
2022-11-04T01:25:50+00:00
5cfd2faebc11c885a4b7fe7bc1507b0070824fd7
ktmeng/mec
[ "license:mit", "region:us" ]
2022-11-04T05:28:25+00:00
{"license": "mit"}
2022-11-04T05:40:39+00:00
8720d421e7995421bcd0980b087d4c2c7265f3d3
longshared/long_date
[ "license:apache-2.0", "region:us" ]
2022-11-04T06:07:38+00:00
{"license": "apache-2.0"}
2022-11-04T06:07:38+00:00
849be46ab60cfbd53a5bd950538253aecd6cea78
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://aclanthology.org/P18-1177/](https://aclanthology.org/P18-1177/) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is the QA dataset collected by [Harvesting Paragraph-level Question-Answer Pairs from Wikipedia](https://aclanthology.org/P18-1177) (Du & Cardie, ACL 2018). ### Supported Tasks and Leaderboards * `question-answering` ### Languages English (en) ## Dataset Structure ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature of id - `title`: a `string` feature of title of the paragraph - `context`: a `string` feature of paragraph - `question`: a `string` feature of question - `answers`: a `json` feature of answers ### Data Splits |train |validation|test | |--------:|---------:|-------:| |1,204,925| 30,293| 24,473| ## Citation Information ``` @inproceedings{du-cardie-2018-harvesting, title = "Harvesting Paragraph-level Question-Answer Pairs from {W}ikipedia", author = "Du, Xinya and Cardie, Claire", booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2018", address = "Melbourne, Australia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P18-1177", doi = "10.18653/v1/P18-1177", pages = "1907--1917", abstract = "We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. 
As compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-the-art. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top ranking Wikipedia articles and create a corpus of over one million question-answer pairs. We provide qualitative analysis for this large-scale generated corpus from Wikipedia.", } ```
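Since the card above describes the `answers` field as a JSON feature with character-offset answer starts (SQuAD-style), recovering and validating an answer span takes one decode plus a slice; a minimal sketch, with the exact field layout and the example record assumed rather than taken from the dataset:

```python
import json

def extract_answers(example):
    """Decode the JSON-encoded answers field and verify each span
    against the paragraph via its character offset."""
    answers = json.loads(example["answers"])
    spans = []
    for text, start in zip(answers["text"], answers["answer_start"]):
        # The offset should reproduce the answer text exactly
        assert example["context"][start:start + len(text)] == text
        spans.append((text, start))
    return spans

example = {
    "context": "Kyoto served as Japan's capital for over a thousand years.",
    "answers": json.dumps({"text": ["Kyoto"], "answer_start": [0]}),
}
spans = extract_answers(example)
```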
lmqg/qa_harvesting_from_wikipedia
[ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:1M<", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-4.0", "region:us" ]
2022-11-04T06:30:51+00:00
{"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "1M<", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Harvesting QA paris from Wikipedia."}
2022-11-05T03:19:40+00:00
f33015cbbb9603eafb301548bd4d43aad6354c64
KoziCreative/Testing
[ "license:afl-3.0", "region:us" ]
2022-11-04T08:35:11+00:00
{"license": "afl-3.0"}
2022-11-04T09:31:40+00:00
7d5efeb7e157099ebd0f630628e64b1cdc97f6e2
# Dataset Card for "auto_content" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ayush2609/auto_content
[ "region:us" ]
2022-11-04T09:32:38+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25207.5885509839, "num_examples": 503}, {"name": "validation", "num_bytes": 2806.4114490161, "num_examples": 56}], "download_size": 19771, "dataset_size": 28014.0}}
2022-11-04T09:32:44+00:00
cc540899103705a0cb87bea53bda71fa14a80737
# Dataset Card for "answerable_tydiqa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PartiallyTyped/answerable_tydiqa
[ "region:us" ]
2022-11-04T09:44:49+00:00
{"dataset_info": {"features": [{"name": "question_text", "dtype": "string"}, {"name": "document_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "annotations", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "answer_text", "sequence": "string"}]}, {"name": "document_plaintext", "dtype": "string"}, {"name": "document_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32084629.326371837, "num_examples": 29868}, {"name": "validation", "num_bytes": 3778385.324427767, "num_examples": 3712}], "download_size": 16354337, "dataset_size": 35863014.6507996}}
2022-11-04T09:45:10+00:00
f71b7973349141cb8a3d40b6ee2797830f62ae68
# Dataset Card for "answerable_tydiqa_restructured" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PartiallyTyped/answerable_tydiqa_restructured
[ "region:us" ]
2022-11-04T09:45:21+00:00
{"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21940019, "num_examples": 29868}, {"name": "validation", "num_bytes": 2730209, "num_examples": 3712}], "download_size": 17468684, "dataset_size": 24670228}}
2022-11-04T09:45:41+00:00
90b5976050208f4ab764422c334b95dfd681e4f0
# Dataset Card for "answerable_tydiqa_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PartiallyTyped/answerable_tydiqa_preprocessed
[ "region:us" ]
2022-11-04T09:46:00+00:00
{"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21252073.336011786, "num_examples": 29800}, {"name": "validation", "num_bytes": 2657400.5792025863, "num_examples": 3709}], "download_size": 16838253, "dataset_size": 23909473.91521437}}
2022-11-04T09:46:21+00:00
b20f6950ca9773dac84e57b2f052cc9c3fcdf448
# Dataset Card for "answerable_tydiqa_tokenized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
PartiallyTyped/answerable_tydiqa_tokenized
[ "region:us" ]
2022-11-04T09:46:52+00:00
{"dataset_info": {"features": [{"name": "language", "dtype": "string"}, {"name": "question", "sequence": "string"}, {"name": "context", "sequence": "string"}, {"name": "references", "struct": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "id", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "labels", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30320669, "num_examples": 29800}, {"name": "validation", "num_bytes": 3761508, "num_examples": 3709}], "download_size": 17981416, "dataset_size": 34082177}}
2022-11-04T09:47:12+00:00
148e1cda53c9697ea386953a60e8493dbd102cb1
# Guweiz Artist Embedding / Textual Inversion ## Usage To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder. To use it in a prompt: ```"drawn by guweiz_style"``` If it is too strong, just add [] around it. Trained until 9000 steps. Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/eCbB30e.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/U1Fezud.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/DqruJgs.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/O7VV7BS.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/k4sIsvH.png width=100% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/guweiz_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-04T10:11:35+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-04T10:14:19+00:00
60d8a487125ced60f6cd19e37aac3739d135b6b5
# Dataset Card for "tx-data-to-decode" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lucapro/tx-data-to-decode
[ "region:us" ]
2022-11-04T10:21:51+00:00
{"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "de", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3527858, "num_examples": 6057}], "download_size": 995171, "dataset_size": 3527858}}
2022-11-04T10:22:12+00:00
b578d37c60f8311c642fb7b6838fadef45cdd2a0
MartinMu/SD-Training
[ "license:openrail", "region:us" ]
2022-11-04T10:42:34+00:00
{"license": "openrail"}
2022-11-04T10:49:52+00:00
e35a91f8e7cb3c201a7211b53219f0d8833f1a3d
paweldali/Tylercrimetime
[ "license:unknown", "region:us" ]
2022-11-04T10:53:29+00:00
{"license": "unknown"}
2022-11-04T11:02:28+00:00
626de4a1bf832412aed03cd731b74bc5ac978fcb
# Dataset Card for "icd10-reference-cm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rjac/icd10-reference-cm
[ "region:us" ]
2022-11-04T11:23:22+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "icd10_tc_category", "dtype": "string"}, {"name": "icd10_tc_category_group", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13286095, "num_examples": 71480}], "download_size": 2715065, "dataset_size": 13286095}}
2022-11-04T11:23:29+00:00
587e3170fcb95d51295acfea053c6570cedd8a41
# Dataset Card for "Pierse-movie-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MarkGG/Pierse-movie-dataset
[ "region:us" ]
2022-11-04T11:34:53+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53518991.51408206, "num_examples": 1873138}, {"name": "validation", "num_bytes": 5946570.485917939, "num_examples": 208127}], "download_size": 33525659, "dataset_size": 59465562.0}}
2022-11-04T11:35:26+00:00
7f5cd8bfac9cee6eb3a88ba576779a76c30bf806
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Luciano/bertimbau-base-finetuned-brazilian_court_decisions * Dataset: joelito/brazilian_court_decisions * Config: joelito--brazilian_court_decisions * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466167
[ "autotrain", "evaluation", "region:us" ]
2022-11-04T13:21:46+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["joelito/brazilian_court_decisions"], "eval_info": {"task": "multi_class_classification", "model": "Luciano/bertimbau-base-finetuned-brazilian_court_decisions", "metrics": [], "dataset_name": "joelito/brazilian_court_decisions", "dataset_config": "joelito--brazilian_court_decisions", "dataset_split": "test", "col_mapping": {"text": "decision_description", "target": "judgment_label"}}}
2022-11-04T13:22:24+00:00
04201c6a1a1cb7f50160ab3b0e0a7a630bef5463
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions * Dataset: joelito/brazilian_court_decisions * Config: joelito--brazilian_court_decisions * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-eval-joelito__brazilian_court_decisions-joelito__brazilian_c-4bed1b-1985466168
[ "autotrain", "evaluation", "region:us" ]
2022-11-04T13:21:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["joelito/brazilian_court_decisions"], "eval_info": {"task": "multi_class_classification", "model": "Luciano/bertimbau-base-finetuned-lener-br-finetuned-brazilian_court_decisions", "metrics": [], "dataset_name": "joelito/brazilian_court_decisions", "dataset_config": "joelito--brazilian_court_decisions", "dataset_split": "test", "col_mapping": {"text": "decision_description", "target": "judgment_label"}}}
2022-11-04T13:22:29+00:00
46f712c7d0dbfb4aaa83bdce8c4f9a4c2f080e69
# Dataset Card for "test_splits_order" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/test_splits_order
[ "region:us" ]
2022-11-04T13:30:41+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 32, "num_examples": 2}, {"name": "train", "num_bytes": 48, "num_examples": 2}], "download_size": 1776, "dataset_size": 80}}
2022-11-04T13:30:57+00:00
06ed218989fe8d663592ac82d4b1a2118e0ee2bd
marianna13/laion2B-multi-joined-translated-to-en-hr
[ "region:us" ]
2022-11-04T13:48:39+00:00
{"license": "cc-by-4.0"}
2022-11-07T14:10:48+00:00
0a118a6d943dba991d968c909121d7e231f968f0
# Dataset Card for "test_splits" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/test_splits
[ "region:us" ]
2022-11-04T13:53:18+00:00
{"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 116, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1698, "dataset_size": 162}}
2022-11-04T13:59:01+00:00
6307437ad30f1172d69671dd1380e8d652c1fd0e
Yubing/Ubin
[ "license:openrail", "region:us" ]
2022-11-04T14:23:55+00:00
{"license": "openrail"}
2022-11-04T14:23:56+00:00
1151784096a8e009fd8cf9b614759f05adc5071a
VXX/sd_images
[ "license:openrail", "region:us" ]
2022-11-04T14:24:09+00:00
{"license": "openrail"}
2022-11-08T08:29:46+00:00
6cd12e75db5b54753dc7a1ef66f4fef854307edb
echogecko/molly
[ "region:us" ]
2022-11-04T14:34:21+00:00
{}
2022-11-04T14:36:37+00:00
8e413a6829a1f3d83de7c898850c5b92690c9b3f
marianna13/laion2B-multi-joined-translated-to-en-ultra-hr
[ "region:us" ]
2022-11-04T14:53:22+00:00
{"license": "cc-by-4.0"}
2022-11-07T14:26:15+00:00
ef320d1bb821f7a1cbc1e029f7e930faae59ff6c
assq/11
[ "license:cc0-1.0", "region:us" ]
2022-11-04T15:11:40+00:00
{"license": "cc0-1.0"}
2022-11-04T15:13:01+00:00
0ff5ded4caccbfeb631f5f70ea3e19a773e0004e
--- annotations_creators: - machine-generated language: - en language_creators: - other multilinguality: - monolingual pretty_name: "Fashion captions" size_categories: - n<100K tags: [] task_categories: - text-to-image task_ids: [] --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
duyngtr16061999/pokemon_fashion_mixed
[ "region:us" ]
2022-11-04T15:30:52+00:00
{}
2022-11-04T16:21:57+00:00
ca5374f76ac0bd2208713ad7d9b37bc7f99aed1e
LiveEvil/WannaCryBlock
[ "license:mit", "region:us" ]
2022-11-04T15:47:08+00:00
{"license": "mit"}
2022-11-04T15:47:08+00:00
0308f18780cb95bcb0625b1d0fa798c15d3aa250
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MauritsG](https://huggingface.co/MauritsG) for evaluating this model.
autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-103f11-1986766201
[ "autotrain", "evaluation", "region:us" ]
2022-11-04T15:49:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": ["recall", "precision"], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-11-04T15:49:57+00:00
33997c7bd85c09f16d8f1ccfe8daae52d4378a4a
LiveEvil/MyClass
[ "license:mit", "region:us" ]
2022-11-04T16:02:41+00:00
{"license": "mit"}
2022-11-04T16:02:41+00:00
a0b5c74c5522a35f19c88c46b8310c32a8f17761
marianna13/laion1B-nolang-joined-translated-to-en-hr
[ "region:us" ]
2022-11-04T16:03:38+00:00
{"license": "cc-by-4.0"}
2022-11-07T13:37:23+00:00
f64c63247c266a97e92092e7906050cf9f6f6b02
marianna13/laion1B-nolang-joined-translated-to-en-ultra-hr
[ "region:us" ]
2022-11-04T16:18:51+00:00
{"license": "cc-by-4.0"}
2022-11-04T16:40:36+00:00
adb84a6881e297ec2c9df51d56902781a25cf6e5
marianna13/improved_aesthetics_4.5plus-ultra-hr
[ "region:us" ]
2022-11-04T16:52:43+00:00
{"license": "apache-2.0"}
2022-11-07T14:50:02+00:00
c4c55382a58a997f57ff1100eff6696d1574204d
# Dataset Card for "dirt_teff2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
roydcarlson/dirt_teff2
[ "region:us" ]
2022-11-04T17:28:46+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6436424.0, "num_examples": 7}], "download_size": 6352411, "dataset_size": 6436424.0}}
2022-11-04T17:28:50+00:00
de4b6a7d716fead381ca0525bf7488c237ca09c4
LiveEvil/LetMeE
[ "license:openrail", "region:us" ]
2022-11-04T18:07:01+00:00
{"license": "openrail"}
2022-11-04T18:07:01+00:00
f2675b210a774ec7e8116c38acb39e724f101ea4
# Dataset Card for "sidewalk-imagery2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
roydcarlson/sidewalk-imagery2
[ "region:us" ]
2022-11-04T18:41:10+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138394.0, "num_examples": 10}], "download_size": 3139599, "dataset_size": 3138394.0}}
2022-11-04T18:41:17+00:00
5976a1c9abfe4c8a216fccd28cd199d22a53a40a
hjvjjv
codysoccerman/my_test_dataset
[ "region:us" ]
2022-11-04T19:13:19+00:00
{}
2022-11-20T01:05:46+00:00
590f8ab8f495a868ca9d191a4fd0fb4255d0788a
# SOLD - A Benchmark for Sinhala Offensive Language Identification In this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach. :warning: This repository contains texts that may be offensive and harmful. ## Annotation We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level). ### Sentence-level Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets: * **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words. * **Not Offensive (NOT)**: Posts that do not contain offense or profanity. Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification. ### Token-level To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. 
Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement, including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD. ![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Annotation.png?raw=true "Annotation Process") ## Data SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code. ```python from datasets import Dataset from datasets import load_dataset sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train')) sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test')) ``` The dataset contains the following columns: * **post_id** - Twitter ID * **text** - Post text * **tokens** - Tokenised text. Each token is separated by a space. * **rationals** - Offensive tokens. If a token is offensive, it is marked as 1 and 0 otherwise. * **label** - Sentence-level label, offensive or not-offensive. ![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Examples.png?raw=true "Four examples from the SOLD dataset") SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code. 
```python from datasets import Dataset from datasets import load_dataset semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train')) ``` The dataset contains the following columns: * **post_id** - Twitter ID * **text** - Post text Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on SOLD train: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm ## Experiments Clone the repository and install the libraries using the following command (preferably inside a conda environment): ~~~ pip install -r requirements.txt ~~~ ### Sentence-level Sentence-level transformer-based experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_deepoffense ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. --transfer : Whether to perform transfer learning or not (true or false). --transfer_language : The initial language if transfer learning is performed (hi, en or si). * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019). * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019). * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021). --augment : Perform semi-supervised data augmentation. --std : Standard deviation of the models to cut down data augmentation. --augment_type: The type of the data augmentation. * off - Augment only the offensive instances. * normal - Augment both offensive and non-offensive instances. ~~~ Sentence-level CNN- and LSTM-based experiments can be executed using the following command. 
~~~ python -m experiments.sentence_level.sinhala_offensive_nn ~~~ The command takes the following arguments: ~~~ --model_type : Type of the architecture (cnn2D, lstm). --model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embedding file. --augment : Perform semi-supervised data augmentation. --std : Standard deviation of the models to cut down data augmentation. --augment_type: The type of the data augmentation. * off - Augment only the offensive instances. * normal - Augment both offensive and non-offensive instances. ~~~ ### Token-level Token-level transformer-based experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_mudes ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. --transfer : Whether to perform transfer learning or not (true or false). --transfer_language : The initial language if transfer learning is performed (hatex or tsd). * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021). * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021). ~~~ Token-level LIME experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_lime ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. 
~~~ ## Acknowledgments We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators who provided their free time and effort to help us produce SOLD. ## Citation If you are using the dataset or the models, please cite the following paper: ~~~ @article{ranasinghe2022sold, title={SOLD: Sinhala Offensive Language Dataset}, author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos}, journal={arXiv preprint arXiv:2212.00851}, year={2022} } ~~~
sinhala-nlp/SOLD
[ "region:us" ]
2022-11-04T19:45:07+00:00
{}
2022-12-20T20:19:41+00:00
380434e3076631fced1ab7db82568a079c295764
BestManOnEarth/dataset01
[ "license:afl-3.0", "region:us" ]
2022-11-04T20:20:07+00:00
{"license": "afl-3.0"}
2022-11-04T20:23:20+00:00
d3c6aafbdaca0dac1274db14f142f0c20a5348b2
# SOLD - A Benchmark for Sinhala Offensive Language Identification In this repository, we introduce the {S}inhala {O}ffensive {L}anguage {D}ataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach. :warning: This repository contains texts that may be offensive and harmful. ## Annotation We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level). ### Sentence-level Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets: * **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words. * **Not Offensive (NOT)**: Posts that do not contain offense or profanity. Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification. ### Token-level To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. 
Therefore, we ask the annotators to highlight particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that support the judgement, including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD. ![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Annotation.png?raw=true "Annotation Process") ## Data SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code. ```python from datasets import Dataset from datasets import load_dataset sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train')) sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test')) ``` The dataset contains the following columns: * **post_id** - Twitter ID * **text** - Post text * **tokens** - Tokenised text. Each token is separated by a space. * **rationals** - Offensive tokens. If a token is offensive, it is marked as 1 and 0 otherwise. * **label** - Sentence-level label, offensive or not-offensive. ![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Examples.png?raw=true "Four examples from the SOLD dataset") SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code. 
```python from datasets import Dataset from datasets import load_dataset semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train')) ``` The dataset contains the following columns: * **post_id** - Twitter ID * **text** - Post text Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on SOLD train: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm ## Experiments Clone the repository and install the libraries using the following command (preferably inside a conda environment): ~~~ pip install -r requirements.txt ~~~ ### Sentence-level Sentence-level transformer-based experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_deepoffense ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. --transfer : Whether to perform transfer learning or not (true or false). --transfer_language : The initial language if transfer learning is performed (hi, en or si). * hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019). * en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019). * si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021). --augment : Perform semi-supervised data augmentation. --std : Standard deviation of the models to cut down data augmentation. --augment_type: The type of the data augmentation. * off - Augment only the offensive instances. * normal - Augment both offensive and non-offensive instances. ~~~ Sentence-level CNN- and LSTM-based experiments can be executed using the following command. 
~~~ python -m experiments.sentence_level.sinhala_offensive_nn ~~~ The command takes the following arguments: ~~~ --model_type : Type of the architecture (cnn2D, lstm). --model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embedding file. --augment : Perform semi-supervised data augmentation. --std : Standard deviation of the models to cut down data augmentation. --augment_type: The type of the data augmentation. * off - Augment only the offensive instances. * normal - Augment both offensive and non-offensive instances. ~~~ ### Token-level Token-level transformer-based experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_mudes ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. --transfer : Whether to perform transfer learning or not (true or false). --transfer_language : The initial language if transfer learning is performed (hatex or tsd). * hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021). * tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021). ~~~ Token-level LIME experiments can be executed using the following command. ~~~ python -m experiments.sentence_level.sinhala_lime ~~~ The command takes the following arguments: ~~~ --model_type : Type of the transformer model (bert, xlmroberta, roberta, etc.). --model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files. 
~~~ ## Acknowledgments We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators who gave their free time and effort to help us produce SOLD. ## Citation If you are using the dataset or the models, please cite the following paper: ~~~ @article{ranasinghe2022sold, title={SOLD: Sinhala Offensive Language Dataset}, author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos}, journal={arXiv preprint arXiv:2212.00851}, year={2022} } ~~~
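Building on the pandas export shown above, the semi-supervised filtering idea behind the `--std` flag can be sketched in plain Python: keep only posts whose classifier scores agree within a standard-deviation threshold. This is an illustrative sketch — the column names follow the card, but the rows and the threshold value are invented for the example, and the repository's actual implementation may differ:

```python
from statistics import stdev

# Hypothetical rows with offensiveness scores predicted by several
# classifiers (column names follow the card: xlmr, xlmt, mbert, ...).
rows = [
    {"text": "post 1", "xlmr": 0.91, "xlmt": 0.88, "mbert": 0.90},
    {"text": "post 2", "xlmr": 0.10, "xlmt": 0.85, "mbert": 0.40},
]

def confident(row, score_cols, max_std=0.1):
    """Keep a row only if the classifiers' scores agree closely."""
    return stdev(row[c] for c in score_cols) <= max_std

cols = ["xlmr", "xlmt", "mbert"]
kept = [r["text"] for r in rows if confident(r, cols)]
# post 1 survives (scores agree closely); post 2 is dropped
# because the classifiers disagree.
```

In this sketch, lowering `max_std` keeps only instances the classifiers strongly agree on, at the cost of augmenting less data.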
sinhala-nlp/SemiSOLD
[ "region:us" ]
2022-11-04T20:42:38+00:00
{}
2022-12-20T20:21:26+00:00
d2687bf97a010478ad55cdc6b17489d7bdda6158
# AutoTrain Dataset for project: test ## Dataset Description This dataset has been automatically processed by AutoTrain for project test. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<512x512 RGB PIL image>", "target": 1 }, { "image": "<512x512 RGB PIL image>", "target": 2 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=3, names=['man', 'other', 'woman'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 45 | | valid | 13 |
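The integer `target` values can be mapped back to the class names listed in the `ClassLabel` feature above; a minimal plain-Python sketch, assuming the label order shown in the feature definition:

```python
# Label order taken from the ClassLabel feature above:
# ClassLabel(num_classes=3, names=['man', 'other', 'woman'])
label_names = ["man", "other", "woman"]

def decode_target(target: int) -> str:
    """Map an integer target back to its class name."""
    return label_names[target]

# The two sample targets shown above decode as:
decoded = [decode_target(t) for t in (1, 2)]
# decoded == ["other", "woman"]
```

The same mapping is what `datasets.ClassLabel.int2str` performs when the dataset is loaded with the `datasets` library.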
sirtolkien/autotrain-data-test
[ "task_categories:image-classification", "doi:10.57967/hf/0090", "region:us" ]
2022-11-04T20:56:01+00:00
{"task_categories": ["image-classification"]}
2022-11-04T21:02:23+00:00
8d08878020856ee2a2e28f5624c8c684ee84b2ea
# Dataset Card for Multilingual Sarcasm Detection ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) ## Dataset Description - Repository: https://github.com/helinivan/multilingual-sarcasm-detector ### Dataset Summary The dataset consists of news article headlines in Dutch, English and Italian. The headlines come from both actual news sources and sarcastic/satirical newspapers. Each headline is labeled sarcastic or non-sarcastic based on the source it was published by. The sources of news articles are: - The Huffington Post (en, non-sarcastic) - The Onion (en, sarcastic) - NOS (nl, non-sarcastic) - De Speld (nl, sarcastic) - Il Giornale (it, non-sarcastic) - Lercio (it, sarcastic) ### Languages `en`, `nl`, `it` ## Dataset Structure ### Data Instances - total_length: 67,480 - sarcastic: 25,609 - non_sarcastic: 41,817 - english: 22,837 - dutch: 20,771 - italian: 23,871 ### Data Fields - article_url: str - article_title: str - is_sarcastic: int - lang: str - title_length: int ## Dataset Creation ### Source Data - Selected all English news article titles from this Kaggle dataset: https://www.kaggle.com/datasets/rmisra/news-headlines-dataset-for-sarcasm-detection - Randomly selected 15k Dutch non-sarcastic news article titles from this Kaggle dataset: https://www.kaggle.com/datasets/maxscheijen/dutch-news-articles The rest of the data is scraped directly from the newspapers.
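Since the `is_sarcastic` label is derived entirely from the article source, the labeling rule can be sketched as a small lookup table. This is an illustrative sketch based on the source list above — the actual scraping and labeling code lives in the linked repository:

```python
# Source -> is_sarcastic mapping, following the source list above.
SOURCE_IS_SARCASTIC = {
    "The Huffington Post": 0,  # en, non-sarcastic
    "The Onion": 1,            # en, sarcastic
    "NOS": 0,                  # nl, non-sarcastic
    "De Speld": 1,             # nl, sarcastic
    "Il Giornale": 0,          # it, non-sarcastic
    "Lercio": 1,               # it, sarcastic
}

def label_for(source: str) -> int:
    """Return the is_sarcastic label for a headline from the given source."""
    return SOURCE_IS_SARCASTIC[source]
```

Note that this "distant supervision" scheme labels every headline from a satirical outlet as sarcastic, so individual headlines are never annotated by hand.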
helinivan/sarcasm_headlines_multilingual
[ "region:us" ]
2022-11-04T22:23:03+00:00
{}
2022-12-04T18:56:53+00:00
4192bf0f29316c0ed081510171b83a71883f1eaa
# Dataset Card for "dummy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arbml/dummy
[ "region:us" ]
2022-11-04T22:28:56+00:00
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "female", "1": "male"}}}}], "splits": [{"name": "train", "num_bytes": 50, "num_examples": 2}], "download_size": 1182, "dataset_size": 50}}
2022-11-29T15:57:27+00:00
3817af36979322cdbbbd8896baafbf248198878c
InstantD/PathfinderKobold
[ "region:us" ]
2022-11-04T22:56:17+00:00
{}
2022-11-04T23:17:55+00:00
31d3a08d5af6c0eb87e822ae146b14955d8453e0
# Landscape Style Embedding / Textual Inversion ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder Two different Versions: ### Version 1: File: ```land_style``` To use it in a prompt: ```"art by land_style"``` For best results, write something like ```highly detailed background art by land_style``` ### Version 2: File: ```landscape_style``` To use it in a prompt: ```"art by landscape_style"``` For best results, write something like ```highly detailed background art by landscape_style``` If it is too strong, just add [] around it. Trained for 7000 steps Have fun :) ## Example Pictures <img src=https://i.imgur.com/UjoXFkJ.png width=100% height=100%/> <img src=https://i.imgur.com/rAoEyLK.png width=100% height=100%/> <img src=https://i.imgur.com/SpPsc7i.png width=100% height=100%/> <img src=https://i.imgur.com/zMH0EeI.png width=100% height=100%/> <img src=https://i.imgur.com/iQe0Jxc.png width=100% height=100%/> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/land_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "region:us" ]
2022-11-04T22:56:47+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false}
2022-11-12T14:42:39+00:00
55f1c09dcca698cd7015ff37b35ee2e136df6797
# Dataset Card for "Romance-baseline" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
MarkGG/Romance-baseline
[ "region:us" ]
2022-11-05T01:05:26+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39176840.7, "num_examples": 1105002}, {"name": "validation", "num_bytes": 4352982.3, "num_examples": 122778}], "download_size": 23278822, "dataset_size": 43529823.0}}
2022-11-05T01:05:46+00:00
872974844b7d454a4e1fb0730de79149e7f7d826
iejMac/CLIP-MSVD
[ "license:mit", "region:us" ]
2022-11-05T01:56:19+00:00
{"license": "mit"}
2022-11-05T02:19:16+00:00
7b8b77e8fdeb334e3550d1fb6167d4cc92dc6957
# Dataset Card for "lmqg/qa_squadshifts" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2004.14444](https://arxiv.org/abs/2004.14444) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is the SQuADShifts dataset with a custom training/validation/test split following [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts). ### Supported Tasks and Leaderboards * `question-answering` ### Languages English (en) ## Dataset Structure ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature of id - `title`: a `string` feature of title of the paragraph - `context`: a `string` feature of paragraph - `question`: a `string` feature of question - `answers`: a `json` feature of answers ### Data Splits | name |train | valid | test | |-------------|------:|------:|-----:| |default (all)|9209|6283|18844| | amazon |3295|1648|4942| | new_wiki |2646|1323|3969| | nyt |3355|1678|5032| | reddit |3268|1634|4901| ## Citation Information ``` @inproceedings{miller2020effect, title={The effect of natural distribution shift on question answering models}, author={Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig}, booktitle={International Conference on Machine Learning}, pages={6905--6916}, year={2020}, organization={PMLR} } ```
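The `answers` field follows the SQuAD convention: parallel lists of answer strings and their character offsets into `context`. A minimal sketch of reading one record — the record below is invented for illustration, not taken from the dataset:

```python
# Hypothetical record shaped like the plain_text fields above
# (id/title/context/question values are made up for the example).
example = {
    "id": "amazon-0001",
    "title": "Product reviews",
    "context": "The battery lasts about ten hours on a full charge.",
    "question": "How long does the battery last?",
    "answers": {"text": ["about ten hours"], "answer_start": [18]},
}

# Recover the answer span from the context using the stored offset.
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
span = example["context"][start:start + len(text)]
# span == "about ten hours" — the offset indexes into the context string.
```

The same invariant (`context[start:start+len(text)] == text`) is what extractive-QA evaluation scripts rely on when scoring predicted spans.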
lmqg/qa_squadshifts
[ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:1k<n<10k", "source_datasets:extended|wikipedia", "language:en", "license:cc-by-4.0", "arxiv:2004.14444", "region:us" ]
2022-11-05T02:43:19+00:00
{"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10k", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "SQuADShifts"}
2022-11-05T05:10:26+00:00
6f41e1fff033457ae09c882a845a548a1c99ddba
# Dataset Card for "winobias" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
henryscheible/winobias
[ "region:us" ]
2022-11-05T05:11:18+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "eval", "num_bytes": 230400, "num_examples": 1584}, {"name": "train", "num_bytes": 226080, "num_examples": 1584}], "download_size": 83948, "dataset_size": 456480}}
2022-11-05T05:11:25+00:00
3441c9e1f9d053e02e451d65b5e9cbd91759b6c6
# Dataset Card for "diffusiondb_random_10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
svjack/diffusiondb_random_10k
[ "region:us" ]
2022-11-05T06:06:24+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "seed", "dtype": "int64"}, {"name": "step", "dtype": "int64"}, {"name": "cfg", "dtype": "float32"}, {"name": "sampler", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6221323762.0, "num_examples": 10000}], "download_size": 5912620994, "dataset_size": 6221323762.0}}
2022-11-05T06:42:29+00:00
f5e692026a34569c12e41c76f8d454fd9656f041
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/roberta-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966288
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:05:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:08:51+00:00
0d4919bac6e97e65c5770de6df0c068c6668c1a8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: abhilash1910/albert-squad-v2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966289
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:06:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "abhilash1910/albert-squad-v2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:10:19+00:00
7d1d7bfc1ce0bc6e4232a162fa62f4bd9fac84aa
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-base-cased-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966290
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:06:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-cased-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:09:12+00:00
d3977836565f67db67cf3c73acff318889fe1fb8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/bert-base-uncased-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966291
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:06:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:09:17+00:00
7ea37d0dd1563d17ca76bbbd94870d0c2ecae6d0
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: distilbert-base-cased-distilled-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966292
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:06:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "distilbert-base-cased-distilled-squad", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:08:45+00:00
5910f37a9ea67db63f742fab701c7f58fa9f2878
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/electra-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@anchal](https://huggingface.co/anchal) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-5d46e4-1992966293
[ "autotrain", "evaluation", "region:us" ]
2022-11-05T09:06:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": ["accuracy", "bleu", "precision", "recall", "rouge"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-11-05T09:09:32+00:00