column | type | min length | max length
:------|:------|:------|:------
sha | string | 40 | 40
text | string | 0 | 13.4M
id | string | 2 | 117
tags | sequence | | 
created_at | string | 25 | 25
metadata | string | 2 | 31.7M
last_modified | string | 25 | 25
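Each row stores its evaluation details as a JSON string in the `metadata` column. A minimal sketch of pulling those details out with the standard library (the row below is copied verbatim from one of the records in this dump):

```python
import json

# The `metadata` column is a JSON string; this value is taken from a row below.
metadata = '{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sysresearch101/t5-large-finetuned-xsum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}'

# Parse the string and read out the evaluation configuration.
info = json.loads(metadata)["eval_info"]
print(info["model"], info["dataset_name"], info["dataset_split"])
# sysresearch101/t5-large-finetuned-xsum xsum test
```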
07a8b5711578956e3962668341e696c23b4afba8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845708
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T01:35:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sysresearch101/t5-large-finetuned-xsum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T02:07:02+00:00
8b3718ab8d417b60b0841465810b4e9cc062d710
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845709
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T01:35:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T02:16:03+00:00
97af091b1c1eeae4c0f48d669716625ccd78c2c6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-d7ddcd7b-12845710
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T01:35:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sysresearch101/t5-large-finetuned-xsum-cnn", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T02:07:06+00:00
4b84d943bd01791746753c43d65d04d4bd72c098
# Dataset Card for GitHub Issues ## Dataset Description - **Point of Contact:** [Lewis Tunstall]([email protected]) ### Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond. ### Supported Tasks and Leaderboards For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`). - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists of [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name). ### Languages Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available. ## Dataset Structure ### Data Instances Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
  'example_field': ...,
  ...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit. ### Data Fields List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points. - `example_field`: description of `example_field` Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging); you will then only need to refine the generated descriptions. ### Data Splits Describe and name the splits in the dataset if there are more than one. Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. Provide the sizes of each split.
As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation ### Curation Rationale What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together? ### Source Data This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...) #### Initial Data Collection and Normalization Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process. If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name). If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used. #### Who are the source language producers? State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data. If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes. #### Who are the annotators? If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated. Describe the people or systems who originally created the annotations and their selection criteria if applicable. If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here. ### Personal and Sensitive Information State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history). If efforts were made to anonymize the data, describe the anonymization process. ## Considerations for Using the Data ### Social Impact of Dataset Please discuss some of the ways you believe the use of this dataset will impact society. The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations. Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here. ### Discussion of Biases Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact. For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic. If analyses have been run quantifying these biases, please add brief summaries and links to the studies here. ### Other Known Limitations If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here. 
## Additional Information ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. ### Licensing Information Provide the license and link to the license webpage if available. ### Citation Information Provide the [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset. For example:
```
@article{article_id,
  author = {Author List},
  title = {Dataset Paper Title},
  journal = {Publication Venue},
  year = {2525}
}
```
If the dataset has a [DOI](https://www.doi.org/), please provide it here. ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
planhanasan/github-issues
[ "arxiv:2005.00614", "region:us" ]
2022-08-11T02:37:06+00:00
{}
2022-08-11T03:22:30+00:00
e7d454b3ca32b66e7d270a2c766c42f5f5f70b46
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855711
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T05:05:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sysresearch101/t5-large-finetuned-xsum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T18:55:34+00:00
6f7358a3b383aea6d10788b8a63cd814e028f64b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sysresearch101/t5-large-finetuned-xsum-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855712
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T05:05:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sysresearch101/t5-large-finetuned-xsum-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T19:04:47+00:00
e404fa8894ce2092f89eae86da115760db88574f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-base * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855713
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T05:05:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "t5-base", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T08:41:15+00:00
d6e0e001bba9b14661345a9575ca7f11609a3b59
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-large * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sysresearch101](https://huggingface.co/sysresearch101) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-3ca4a8a7-12855714
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T05:05:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "t5-large", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T18:57:13+00:00
c9cf33cf2552490371e7694b1b8ffa8685cc7ba4
BigBang/rosetta_old
[ "license:cc-by-sa-4.0", "region:us" ]
2022-08-11T07:54:24+00:00
{"license": ["cc-by-sa-4.0"]}
2022-08-25T07:36:05+00:00
44c960b81b39ddf04b08a9a23f451c23a30ea8b5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875715
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:00:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T12:04:30+00:00
060d4151a9bed0e17f02cf8713bbb080109b6c2b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: csebuetnlp/mT5_multilingual_XLSum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875716
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:00:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "csebuetnlp/mT5_multilingual_XLSum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T11:35:14+00:00
1312ec1d0f1935bb84c3e1471dbcac70b82944fd
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-c1b20bff-12875717
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:00:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T12:46:48+00:00
00e13174de84f6892fa7cdbcb030757504ee11d0
---
---
This is the code that was used to generate this video:
```
from decord import VideoReader, cpu
from huggingface_hub import hf_hub_download
import numpy as np

np.random.seed(0)

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
vr = VideoReader(file_path, num_threads=1, ctx=cpu(0))

# sample 8 frames
vr.seek(0)
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=len(vr))
buffer = vr.get_batch(indices).asnumpy()

# create a list of NumPy arrays
video = [buffer[i] for i in range(buffer.shape[0])]
video_numpy = np.array(video)
with open('spaghetti_video_8_frames.npy', 'wb') as f:
    np.save(f, video_numpy)
```
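The saved `.npy` file can be loaded back with `np.load`, yielding a single array of shape `(num_frames, height, width, 3)`. A minimal standalone sketch, using a dummy array in place of the real video (the `224x224` frame size here is illustrative, not taken from the actual clip):

```python
import numpy as np

# Stand-in for the real frames: 8 RGB frames of 224x224 (shape is illustrative).
frames = np.zeros((8, 224, 224, 3), dtype=np.uint8)
with open("spaghetti_video_8_frames.npy", "wb") as f:
    np.save(f, frames)

# np.load returns the stacked array written by np.save above.
video = np.load("spaghetti_video_8_frames.npy")
print(video.shape[0])  # number of frames
# 8
```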
hf-internal-testing/spaghetti-video-8-frames
[ "region:us" ]
2022-08-11T11:10:26+00:00
{}
2022-08-25T15:00:38+00:00
5ae360e13ed6372f2c5fe799bb2c4f0799b4ac50
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-5cb1ece5-12895721
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:23:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:29:15+00:00
403c0e9b0f0c46a9cf2579124b06c47d3c08db61
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-4ce7da77-12905722
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:26:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:31:22+00:00
bc903c85ac42397037b91bef89142243c7b4d7b6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915723
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:46:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T12:28:11+00:00
3f3a3a357a6531c4e6127b8247aaa85fc8d26729
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915724
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:46:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T12:18:29+00:00
d06a1f8d090c853b1122c540a6ff6d2b16c10d12
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915725
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:46:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T12:20:52+00:00
70986fc57830f32608141c7f2278093ebd811903
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915726
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T11:46:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T12:10:53+00:00
a07bec7a6b1cbf4b5ca3a68bf744e854982b72bd
# Dataset Card for Visual Spatial Reasoning ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ltl.mmll.cam.ac.uk/ - **Repository:** https://github.com/cambridgeltl/visual-spatial-reasoning - **Paper:** https://arxiv.org/abs/2205.00363 - **Leaderboard:** https://paperswithcode.com/sota/visual-reasoning-on-vsr - **Point of Contact:** https://ltl.mmll.cam.ac.uk/ ### Dataset Summary The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption correctly describes the image (True) or not (False). ### Supported Tasks and Leaderboards We test three baselines, all supported in Hugging Face Transformers. They are VisualBERT [(Li et al.
2019)](https://arxiv.org/abs/1908.03557), LXMERT [(Tan and Bansal, 2019)](https://arxiv.org/abs/1908.07490) and ViLT [(Kim et al. 2021)](https://arxiv.org/abs/2102.03334). The leaderboard can be checked at [Papers With Code](https://paperswithcode.com/sota/visual-reasoning-on-vsr).
model | random split | zero-shot
:-------------|:-------------:|:-------------:
*human* | *95.4* | *95.4*
VisualBERT | 57.4 | 54.0
LXMERT | **72.5** | **63.2**
ViLT | 71.0 | 62.4
### Languages The language in the dataset is English as spoken by the annotators. The BCP-47 code for English is en. [`meta_data.jsonl`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data/data_files/meta_data.jsonl) contains metadata about the annotators. ## Dataset Structure ### Data Instances Each line is an individual data point. Each `jsonl` file is of the following format:
```json
{"image": "000000050403.jpg", "image_link": "http://images.cocodataset.org/train2017/000000050403.jpg", "caption": "The teddy bear is in front of the person.", "label": 1, "relation": "in front of", "annotator_id": 31, "vote_true_validator_id": [2, 6], "vote_false_validator_id": []}
{"image": "000000401552.jpg", "image_link": "http://images.cocodataset.org/train2017/000000401552.jpg", "caption": "The umbrella is far away from the motorcycle.", "label": 0, "relation": "far away from", "annotator_id": 2, "vote_true_validator_id": [], "vote_false_validator_id": [2, 9, 1]}
```
### Data Fields `image` denotes the name of the image in COCO and `image_link` points to the image on the COCO server (so you can also access it directly). `caption` is self-explanatory. `label` is `0` for False and `1` for True. `relation` records the spatial relation used. `annotator_id` points to the annotator who originally wrote the caption. `vote_true_validator_id` and `vote_false_validator_id` list the annotators who voted True or False in the second-phase validation.
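The fields above can be read with a few lines of standard-library Python. A minimal sketch, using one of the example instances shown above (trimmed to the fields it exercises):

```python
import json

# One line of a VSR jsonl file (copied from the examples above).
line = '{"image": "000000050403.jpg", "caption": "The teddy bear is in front of the person.", "label": 1, "relation": "in front of", "annotator_id": 31, "vote_true_validator_id": [2, 6], "vote_false_validator_id": []}'

example = json.loads(line)
# `label` encodes a boolean: 0 means False, 1 means True.
is_true = bool(example["label"])
print(example["relation"], is_true)
# in front of True
```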
### Data Splits The VSR corpus, after validation, contains 10,119 data points with high agreement. On top of these, we create two splits: (1) a random split and (2) a zero-shot split. For the random split, we randomly split all data points into train, development, and test sets. The zero-shot split ensures that the train, development, and test sets share no concepts (i.e., if *dog* is in the test set, it is not used for training and development). Below are some basic statistics of the two splits. split | train | dev | test | total :------|:--------:|:--------:|:--------:|:--------: random | 7,083 | 1,012 | 2,024 | 10,119 zero-shot | 5,440 | 259 | 731 | 6,430 Check out [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for more details. ## Dataset Creation ### Curation Rationale Understanding spatial relations is fundamental to achieving intelligence. Existing vision-language reasoning datasets are valuable, but they combine multiple types of challenges and can thus conflate different sources of error. The VSR corpus focuses specifically on spatial relations, enabling accurate diagnosis and maximum interpretability. ### Source Data #### Initial Data Collection and Normalization **Image pair sampling.** MS COCO 2017 contains 123,287 images and has labelled the segmentation and classes of 886,284 instances (individual objects). Leveraging the segmentation, we first randomly select two concepts, then retrieve all images containing the two concepts in COCO 2017 (train and validation sets). Images that contain multiple instances of either concept are then filtered out to avoid referencing ambiguity. For the single-instance images, we also filter out images with an instance area smaller than 30,000 pixels, to exclude extremely small instances. After these filtering steps, we randomly sample a pair from the remaining images. We repeat this process to obtain a large number of image pairs for caption generation. 
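The image-pair filtering described above can be sketched roughly as follows. The 30,000-pixel minimum area is taken from the text; the function names and the `instances`/`category`/`area` field names are illustrative, not the authors' actual code:

```python
import random

MIN_INSTANCE_AREA = 30_000  # minimum instance area stated in the filtering step above

def eligible(images, concept_a, concept_b):
    """Keep images with exactly one, sufficiently large instance of each concept."""
    kept = []
    for img in images:
        a = [inst for inst in img["instances"] if inst["category"] == concept_a]
        b = [inst for inst in img["instances"] if inst["category"] == concept_b]
        if len(a) == len(b) == 1:  # drop multi-instance images (referencing ambiguity)
            if min(a[0]["area"], b[0]["area"]) >= MIN_INSTANCE_AREA:
                kept.append(img)
    return kept

def sample_pair(images, concept_a, concept_b, rng=random):
    """Randomly sample one image pair from the images surviving the filters."""
    pool = eligible(images, concept_a, concept_b)
    return rng.sample(pool, 2) if len(pool) >= 2 else None
```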
#### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process **Fill in the blank: template-based caption generation.** Given a pair of images, the annotator needs to come up with a valid caption that correctly describes one image but not the other. In this way, the annotator can focus on the key difference between the two images (which should be the spatial relation of the two objects of interest) and come up with a challenging relation that differentiates them. Similar paradigms are also used in the annotation of previous vision-language reasoning datasets such as NLVR2 (Suhr et al., 2017, 2019) and MaRVL (Liu et al., 2021). To prevent annotators from writing modifiers and differentiating the image pair with anything beyond accurate spatial relations, we opt for a template-based classification task instead of free-form caption writing. In addition, the template-generated dataset can be easily categorised based on relations and their meta-categories. The caption template has the format “The `OBJ1` (is) __ the `OBJ2`.”, and the annotators are instructed to select a relation from a fixed set to fill in the slot. The copula “is” can be omitted for grammaticality. For example, for “contains”, “consists of”, and “has as a part”, “is” should be discarded from the template when extracting the final caption. The fixed set of spatial relations enables us to retain full control of the generation process. The full list of relations used is given in the table below. It contains 71 spatial relations and is adapted from the summarised relation table of Fagundes et al. (2021). We made minor changes to filter out clearly unusable relations, made relation names grammatical under our template, and reduced repeated relations. 
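The template instantiation described above can be sketched as below. The copula handling follows the text; the `NO_COPULA` set shown lists only the three examples named above and is not the complete set used in annotation:

```python
# Relations for which the copula "is" must be dropped (partial, illustrative list).
NO_COPULA = {"contains", "consists of", "has as a part"}

def fill_template(obj1: str, obj2: str, relation: str) -> str:
    """Instantiate 'The OBJ1 (is) __ the OBJ2.' with a relation from the fixed set."""
    copula = "" if relation in NO_COPULA else "is "
    return f"The {obj1} {copula}{relation} the {obj2}."

print(fill_template("teddy bear", "person", "in front of"))
# -> The teddy bear is in front of the person.
print(fill_template("bowl", "apple", "contains"))
# -> The bowl contains the apple.
```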
In our final dataset, 65 out of the 71 available relations are actually included (the other 6 were either never selected by annotators, or were selected but their captions did not pass the validation phase). | Category | Spatial Relations | |-------------|-------------------------------------------------------------------------------------------------------------------------------------------------| | Adjacency | Adjacent to, alongside, at the side of, at the right side of, at the left side of, attached to, at the back of, ahead of, against, at the edge of | | Directional | Off, past, toward, down, deep down*, up*, away from, along, around, from*, into, to*, across, across from, through*, down from | | Orientation | Facing, facing away from, parallel to, perpendicular to | | Projective | On top of, beneath, beside, behind, left of, right of, under, in front of, below, above, over, in the middle of | | Proximity | By, close to, near, far from, far away from | | Topological | Connected to, detached from, has as a part, part of, contains, within, at, on, in, with, surrounding, among, consists of, out of, between, inside, outside, touching | | Unallocated | Beyond, next to, opposite to, after*, among, enclosed by | **Second-round Human Validation.** Every annotated data point is reviewed by at least two additional human annotators (validators). In validation, given a data point (consisting of an image and a caption), the validator gives either a True or False label. We exclude data points on which fewer than 2/3 of the validators agree with the original label. In the guideline, we communicated to the validators that, for relations such as “left”/“right” and “in front of”/“behind”, they should tolerate different reference frames: i.e., if the caption is true from either the object’s or the viewer’s reference frame, it should be given a True label. Only when the caption is incorrect under all reference frames is a False label assigned. 
This adds difficulty for the models, since they cannot naively rely on the relative locations of the objects in the images but must also correctly identify the orientations of objects to make the best judgement. #### Who are the annotators? Annotators are hired from [prolific.co](https://prolific.co). We require that they (1) hold at least a bachelor’s degree, (2) are fluent or native speakers of English, and (3) have a >99% historical approval rate on the platform. All annotators are paid an hourly rate of 12 GBP. Prolific takes an extra 33% service charge and 20% VAT on the service charge. For caption generation, we release the task in batches of 200 instances, and the annotator is required to finish a batch in 80 minutes. An annotator cannot take more than one batch per day. In this way we have a diverse set of annotators and can also prevent annotators from becoming fatigued. For the second-round validation, we group 500 data points into one batch and an annotator is asked to label each batch in 90 minutes. In total, 24 annotators participated in caption generation and 26 participated in validation. The annotators have diverse demographic backgrounds: they were born in 13 different countries, live in 13 different countries, and have 14 different nationalities. 57.4% of the annotators identify themselves as female and 42.6% as male. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This project is licensed under the [Apache-2.0 License](https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/LICENSE). 
### Citation Information ```bibtex @article{Liu2022VisualSR, title={Visual Spatial Reasoning}, author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier}, journal={ArXiv}, year={2022}, volume={abs/2205.00363} } ``` ### Contributions Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
juletxara/visual-spatial-reasoning
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2205.00363", "arxiv:1908.03557", "arxiv:1908.07490", "arxiv:2102.03334", "region:us" ]
2022-08-11T11:56:58+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "Visual Spatial Reasoning", "tags": []}
2022-08-11T19:11:21+00:00
c9ed41cbd1ee3f0275c4c4f0be802dc5864314b1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915727
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:04:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-large", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T15:01:35+00:00
d45ad40b7ef5fb1aabfc89408a6269ff6ecd9fbc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915728
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:11:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T12:45:26+00:00
b137984a923a7f937710ac41d0a97f7d68eb0175
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-f0ba0c18-12915729
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:19:00+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:48:21+00:00
3947e8559380f35ad1d92cad0266367c924c3888
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925730
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:21:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:02:39+00:00
544729e978e5120ece94dc40d9eba44bf865e748
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925731
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:28:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:00:18+00:00
5e3f25e9deec3aac79ff0edee782423f8dba814d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925732
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:46:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:19:44+00:00
975a6926fa9fd2087ea7a397f74b579d6b22d723
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925733
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T12:47:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:11:20+00:00
29784d9e5a9d2813d3a8df4b5da15a3a5b5a2f4c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925734
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:00:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-large", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T15:59:36+00:00
4d6f83691af8dd7cea05a532a49d275462449670
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925735
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:03:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T13:37:37+00:00
48948a18fba7481186adc4ee477fe180bced55dc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-8dc1621c-12925736
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:11:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:41:05+00:00
3ebf510b9434206dfaaf35567ba531dcd70a4f99
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935737
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:20:15+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:01:40+00:00
9dc58c7fae34f20dc3761b45eecfabd787f9f5dd
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935738
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:38:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:09:17+00:00
288023970a01b31e96633b3ed3c93edd1609f493
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935739
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T13:49:10+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:23:03+00:00
3705d8c1c5f58d29160f8e72eeb0cc27b3b15ac9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935740
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T14:02:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-12-6", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T14:26:20+00:00
79be53a8ffd3f2b6062c431560cd95b332e6de0d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935741
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T14:09:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-large", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T17:08:26+00:00
80853eab2ea846199ff76c3e6353951583bd6baf
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935743
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T14:26:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["bleu"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T15:55:47+00:00
00351121bd85b3ae5629274cabb72e73a17a782d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975766
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:11:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-12-6", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T16:34:59+00:00
20ba4e84d62d8c42e887866173fe2960afa8e061
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-cnn-12-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975767
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:11:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T16:45:33+00:00
e287462f3504d1cc26dfecf34cf362c52b039348
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-cnn_dailymail * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975768
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:17:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T17:47:12+00:00
169d6a46b5be3f1daa1ddaf99b53268110e86ff0
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: csebuetnlp/mT5_multilingual_XLSum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-xsum-9818ea4b-12975769
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:18:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "csebuetnlp/mT5_multilingual_XLSum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-08-11T16:49:38+00:00
286635a883395d718b883f5b09e2a7a8ab00011a
# YALTAi Segmonto Manuscript and Early Printed Book Dataset ## Table of Contents - [YALTAi Segmonto Manuscript and Early Printed Book Dataset](#yaltai-segmonto-manuscript-and-early-printed-book-dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://doi.org/10.5281/zenodo.6814770](https://doi.org/10.5281/zenodo.6814770) - **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230) ### Dataset Summary This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). 
This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset contains images from digitised manuscripts and early printed books with the following labels: - DamageZone - DigitizationArtefactZone - DropCapitalZone - GraphicZone - MainZone - MarginTextZone - MusicZone - NumberingZone - QuireMarksZone - RunningTitleZone - SealZone - StampZone - TableZone - TitlePageZone ### Supported Tasks and Leaderboards - `object-detection`: This dataset can be used to train a model for object detection on historical document images. ## Dataset Structure This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines. - The first configuration, `YOLO`, uses the data's original format. - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor` from the `Transformers` models for object detection, which expects data to be in a COCO-style format. 
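Comparing the example instances of the two configs, the YOLO config appears to store pixel boxes as `[x_center, y_center, width, height]` while the COCO config uses `[x_min, y_min, width, height]` — this reading is an inference from the examples (consistent up to integer rounding), not a documented guarantee. A sketch of the conversion:

```python
def yolo_center_to_coco(bbox):
    """[x_center, y_center, w, h] (pixels) -> COCO-style [x_min, y_min, w, h]."""
    xc, yc, w, h = bbox
    return [xc - w / 2, yc - h / 2, w, h]

# First box of the example instance: YOLO config gives [2144, 292, 1198, 170].
coco_box = yolo_center_to_coco([2144, 292, 1198, 170])
print(coco_box)  # [1545.0, 207.0, 1198.0, 170.0] — matches the COCO instance
```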
### Data Instances An example instance from the COCO config: ```python {'height': 5610, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785609D0>, 'image_id': 0, 'objects': [{'area': 203660, 'bbox': [1545.0, 207.0, 1198.0, 170.0], 'category_id': 9, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 137034, 'bbox': [912.0, 1296.0, 414.0, 331.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 110865, 'bbox': [2324.0, 908.0, 389.0, 285.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 281634, 'bbox': [2308.0, 3507.0, 438.0, 643.0], 'category_id': 2, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5064268, 'bbox': [949.0, 471.0, 1286.0, 3938.0], 'category_id': 4, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}, {'area': 5095104, 'bbox': [2303.0, 539.0, 1338.0, 3808.0], 'category_id': 4, 'id': 0, 'image_id': '0', 'iscrowd': False, 'segmentation': []}], 'width': 3782} ``` An example instance from the YOLO config: ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=3782x5610 at 0x7F3B785EFA90>, 'objects': {'bbox': [[2144, 292, 1198, 170], [1120, 1462, 414, 331], [2519, 1050, 389, 285], [2527, 3828, 438, 643], [1593, 2441, 1286, 3938], [2972, 2444, 1338, 3808]], 'label': [9, 2, 2, 2, 4, 4]}} ``` ### Data Fields The fields for the YOLO config: - `image`: the image - `objects`: the annotations which consist of: - `bbox`: a list of bounding boxes for the image - `label`: a list of labels for this image The fields for the COCO config: - `height`: height of the image - `width`: width of the image - `image`: image - `image_id`: id for the image - `objects`: annotations in COCO format, consisting of a list containing dictionaries with the following keys: - `bbox`: bounding boxes for the images - `category_id`: a label for the image - `image_id`: id for the image - 
`iscrowd`: the COCO `iscrowd` flag (indicates whether the annotation covers a crowd of objects) - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts) ### Data Splits The dataset contains a train, validation and test split with the following numbers per split: | Dataset | Number of images | |---------|------------------| | Train | 854 | | Dev | 154 | | Test | 139 | A more detailed summary of the dataset (copied from the paper): | | Train | Dev | Test | Total | Average area | Median area | |--------------------------|------:|----:|-----:|------:|-------------:|------------:| | DropCapitalZone | 1537 | 180 | 222 | 1939 | 0.45 | 0.26 | | MainZone | 1408 | 253 | 258 | 1919 | 28.86 | 26.43 | | NumberingZone | 421 | 57 | 76 | 554 | 0.18 | 0.14 | | MarginTextZone | 396 | 59 | 49 | 504 | 1.19 | 0.52 | | GraphicZone | 289 | 54 | 50 | 393 | 8.56 | 4.31 | | MusicZone | 237 | 71 | 0 | 308 | 1.22 | 1.09 | | RunningTitleZone | 137 | 25 | 18 | 180 | 0.95 | 0.84 | | QuireMarksZone | 65 | 18 | 9 | 92 | 0.25 | 0.21 | | StampZone | 85 | 5 | 1 | 91 | 1.69 | 1.14 | | DigitizationArtefactZone | 1 | 0 | 32 | 33 | 2.89 | 2.79 | | DamageZone | 6 | 1 | 14 | 21 | 1.50 | 0.02 | | TitlePageZone | 4 | 0 | 1 | 5 | 48.27 | 63.39 | ## Dataset Creation This dataset is derived from: - CREMMA Medieval ( Pinche, A. (2022). Cremma Medieval (Version Bicerin 1.1.0) [Data set](https://github.com/HTR-United/cremma-medieval) - CREMMA Medieval Lat (Clérice, T. and Vlachou-Efstathiou, M. (2022). Cremma Medieval Latin [Data set](https://github.com/HTR-United/cremma-medieval-lat) - Eutyches. (Vlachou-Efstathiou, M. Voss.Lat.O.41 - Eutyches "de uerbo" glossed [Data set](https://github.com/malamatenia/Eutyches) - Gallicorpora HTR-Incunable-15e-Siecle ( Pinche, A., Gabay, S., Leroy, N., & Christensen, K. Données HTR incunable du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-incunable-15e-siecle) - Gallicorpora HTR-MSS-15e-Siecle ( Pinche, A., Gabay, S., Leroy, N., & Christensen, K. 
Données HTR manuscrits du 15e siècle [Computer software](https://github.com/Gallicorpora/HTR-MSS-15e-Siecle) - Gallicorpora HTR-imprime-gothique-16e-siecle ( Pinche, A., Gabay, S., Vlachou-Efstathiou, M., & Christensen, K. HTR-imprime-gothique-16e-siecle [Computer software](https://github.com/Gallicorpora/HTR-imprime-gothique-16e-siecle) plus a few hundred newly annotated images; the test set in particular is entirely novel and based on early prints and manuscripts. These additional annotations were created by correcting an early version of the model developed in the paper using the [roboflow](https://roboflow.com/) platform. ### Curation Rationale [More information needed] ### Source Data The sources of the data are described above. #### Initial Data Collection and Normalization [More information needed] #### Who are the source language producers? [More information needed] ### Annotations #### Annotation process Additional annotations produced for this dataset were created by correcting an early version of the model developed in the paper using the [roboflow](https://roboflow.com/) platform. #### Who are the annotators? [More information needed] ### Personal and Sensitive Information This data does not contain information relating to living individuals. ## Considerations for Using the Data ### Social Impact of Dataset A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition. ### Discussion of Biases Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed. 
### Other Known Limitations [More information needed] ## Additional Information ### Dataset Curators ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{clerice_thibault_2022_6814770, author = {Clérice, Thibault}, title = {{YALTAi: Segmonto Manuscript and Early Printed Book Dataset}}, month = jul, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.6814770}, url = {https://doi.org/10.5281/zenodo.6814770} } ``` [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6814770.svg)](https://doi.org/10.5281/zenodo.6814770) ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
biglam/yalta_ai_segmonto_manuscript_dataset
[ "task_categories:object-detection", "annotations_creators:expert-generated", "language_creators:expert-generated", "size_categories:n<1K", "license:cc-by-4.0", "manuscripts", "LAM", "arxiv:2207.11230", "region:us" ]
2022-08-11T16:19:41+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["cc-by-4.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["object-detection"], "task_ids": [], "pretty_name": "YALTAi Tabular Dataset", "tags": ["manuscripts", "LAM"]}
2022-08-12T07:33:43+00:00
e56b3827f5edb98eb7fdea0eeba2bb232231f77f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-d1c2a643-13015770
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:48:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T17:30:39+00:00
8f5f91a564e09afb43252ed0223786a5d0a1e440
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-d1c2a643-13015771
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:48:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T17:22:55+00:00
815655e1713cfbf69c0a221fb77de3121deeb526
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@grapplerulrich](https://huggingface.co/grapplerulrich) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-cnn_dailymail-d1c2a643-13015772
[ "autotrain", "evaluation", "region:us" ]
2022-08-11T16:49:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-08-11T20:15:19+00:00
7e296a5a47498a31f6d52e30063b3213b69be396
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Contributions](#contributions) annotations_creators: - no-annotation language: - en language_creators: - crowdsourced license: - afl-3.0 multilinguality: - monolingual pretty_name: Crema D Diarization size_categories: - 10M<n<100M source_datasets: [] tags: [] task_categories: - audio-classification - automatic-speech-recognition - voice-activity-detection task_ids: - audio-emotion-recognition - speaker-identification ### Contributions Thanks to [@EvgeniiPustozerov](https://github.com/EvgeniiPustozerov) for adding this dataset.
pustozerov/crema_d_diarization
[ "region:us" ]
2022-08-11T16:49:32+00:00
{}
2022-08-16T07:09:57+00:00
d82a5d84ac4585157ad524c5114b48ed76957361
**The original dataset is accepting contributions and annotation at https://mekabytes.com/dataset/info/billboards-signs-and-branding :)** The goal of this dataset is to be able to recognize billboards and popular corporate logos so they can be hidden in photos, and in the future so that they can be hidden using augmented reality. We are settling on a maximalist approach where we would like to block all signage. This includes bus stop ads, store signs, those banners they have on street lights, etc. ### Categories 🚧 **Billboard** - includes advertisements on bus benches and shelters, and the posters on building construction (think with scaffolding). 🏪 **Signage** - store names, signs on buildings, lists of businesses at a strip mall, also includes any small standalone advertisements like those campaign signs in people's yards or papers on telephone poles. 📦 **Branding** - logos and names on products, like a coffee cup or scooter, includes car badges. ### Seeking Photos on https://mekabytes.com Right now the images have been mostly collected in Los Angeles, CA. We would love some geographical variety! If you have any questions about labeling, don't hesitate to leave a comment and check the checkbox to notify the mods. We are light on branding photos, so pictures of products with logos and brands on them are greatly appreciated! ### Version Info ``` Version: 2022-08-11T18:53:22Z Type: bounding box Images: 103 Annotations: 1351 Size (bytes): 315483844 ```
ComputeHeavy/billboards-signs-and-branding
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-08-11T17:47:35+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-08-11T18:19:26+00:00
88387f4957edde8af0a5415fe0c89e3a4c926515
darragh/ChiSig
[ "license:apache-2.0", "region:us" ]
2022-08-12T07:11:54+00:00
{"license": "apache-2.0"}
2022-08-12T07:18:47+00:00
25f540fe3476a6af03ad785d48f725b963f58030
# Label2Id This repository contains all the label2id files of the [tner](https://huggingface.co/tner) datasets.
tner/label2id
[ "region:us" ]
2022-08-12T13:07:20+00:00
{}
2022-09-27T18:48:06+00:00
24bb0eaf951c083be8becb922dd076aaba9dda02
cakiki/test
[ "license:cc-by-sa-3.0", "region:us" ]
2022-08-12T13:32:23+00:00
{"license": "cc-by-sa-3.0"}
2022-08-19T12:22:35+00:00
247aee30dcfbc4dbf014e936c4e3916a3f2794bf
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: facebook/opt-125m * Dataset: Tristan/zero_shot_classification_test * Config: Tristan--zero_shot_classification_test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-Tristan__zero_shot_classification_test-fb99e6e4-4634
[ "autotrain", "evaluation", "region:us" ]
2022-08-12T16:41:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero_shot_classification_test"], "eval_info": {"task": "zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "Tristan/zero_shot_classification_test", "dataset_config": "Tristan--zero_shot_classification_test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-08-12T18:18:42+00:00
49abca970a911bbd625a8751cafdee48c0da9a8c
signal-k/planets
[ "license:mit", "region:us" ]
2022-08-12T17:28:41+00:00
{"license": "mit"}
2022-08-12T17:34:20+00:00
29f69fed5b8afa68b5b72d6b1342ad03109e70f9
annotations_creators: - found language: - English language_creators: - found license: [] multilinguality: - monolingual pretty_name: Lines from American Psycho - All Michael Bateman size_categories: [] source_datasets: [] tags: - ai - chatbot - textgeneration task_categories: - conversational task_ids: - dialogue-generation
Meowren/Melopoly
[ "region:us" ]
2022-08-12T19:43:20+00:00
{}
2022-08-12T19:44:27+00:00
12c12ebe27cf9cac7ad6c1244f6022cf7ae41d12
# Dataset Card for Indonesian News Title Generation ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jakartaresearch/news-title-gen
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:id", "license:cc-by-4.0", "newspapers", "title", "news", "region:us" ]
2022-08-13T00:39:26+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Indonesian News Title Generation", "tags": ["newspapers", "title", "news"]}
2022-08-13T05:32:12+00:00
a6d3a73c186a6cfa691b44a1c3499cfd42afeaa4
These are the summarization datasets collected by TextBox, including: - CNN/Daily Mail (cnndm) - XSum (xsum) - SAMSum (samsum) - WLE (wle) - Newsroom (nr) - WikiHow (wikihow) - MicroSoft News (msn) - MediaSum (mediasum) - English Gigaword (eg). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Summarization
[ "task_categories:summarization", "multilinguality:monolingual", "language:en", "region:us" ]
2022-08-13T00:53:11+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["summarization"], "task_ids": []}
2022-10-25T05:19:17+00:00
6d54db8869c266ab82d6ae4c60c8720d109069a9
These are the Chinese generation datasets collected by TextBox, including: - LCSTS (lcsts) - CSL (csl) - ADGEN (adgen). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Chinese-Generation
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "multilinguality:monolingual", "language:zh", "region:us" ]
2022-08-13T01:07:35+00:00
{"language": ["zh"], "multilinguality": ["monolingual"], "task_categories": ["summarization", "text2text-generation", "text-generation"], "task_ids": []}
2022-10-25T05:19:15+00:00
6cbe22f1304fd822367b94213eb2587b2cfda761
These are the commonsense generation datasets collected by TextBox, including: - CommonGen (cg). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Commonsense-Generation
[ "task_categories:other", "multilinguality:monolingual", "language:en", "commonsense-generation", "region:us" ]
2022-08-13T01:07:50+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["other"], "task_ids": [], "tags": ["commonsense-generation"]}
2023-03-03T14:41:45+00:00
4886500c44ba24360881267bca9f88e6eb1db37e
These are the data-to-text generation datasets collected by TextBox, including: - WebNLG v2.1 (webnlg) - WebNLG v3.0 (webnlg2) - WikiBio (wikibio) - E2E (e2e) - DART (dart) - ToTTo (totto) - ENT-DESC (ent) - AGENDA (agenda) - GenWiki (genwiki) - TEKGEN (tekgen) - LogicNLG (logicnlg) - WikiTableT (wikit) - WEATHERGOV (wg). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Data-to-text-Generation
[ "task_categories:tabular-to-text", "task_categories:table-to-text", "multilinguality:monolingual", "language:en", "data-to-text", "region:us" ]
2022-08-13T01:08:03+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["tabular-to-text", "table-to-text"], "task_ids": [], "tags": ["data-to-text"]}
2023-03-03T14:42:50+00:00
4cbf9c84920e9af820c7a5019400941005044f12
These are the open dialogue datasets collected by TextBox, including: - PersonaChat (pc) - DailyDialog (dd) - DSTC7-AVSD (da) - SGD (sgd) - Topical-Chat (tc) - Wizard of Wikipedia (wow) - Movie Dialog (md) - Cleaned OpenSubtitles Dialogs (cos) - Empathetic Dialogues (ed) - Curiosity (curio) - CMU Document Grounded Conversations (cmudog) - MuTual (mutual) - OpenDialKG (odkg) - DREAM (dream). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Open-Dialogue
[ "task_categories:conversational", "task_ids:dialogue-generation", "multilinguality:monolingual", "language:en", "dialogue-response-generation", "open-dialogue", "dialog-response-generation", "region:us" ]
2022-08-13T01:08:40+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["conversational"], "task_ids": ["dialogue-generation"], "tags": ["dialogue-response-generation", "open-dialogue", "dialog-response-generation"]}
2023-03-03T14:43:02+00:00
48c640fb04bde72f27cf02cfb02b2350e9952028
These are the question answering datasets collected by TextBox, including: - SQuAD (squad) - CoQA (coqa) - Natural Questions (nq) - TriviaQA (tqa) - WebQuestions (webq) - NarrativeQA (nqa) - MS MARCO (marco) - NewsQA (newsqa) - HotpotQA (hotpotqa) - MSQG (msqg) - QuAC (quac). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Question-Answering
[ "task_categories:question-answering", "multilinguality:monolingual", "language:en", "region:us" ]
2022-08-13T01:08:53+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["question-answering"], "task_ids": []}
2023-03-03T14:42:19+00:00
0f826acd68e6b5b18205752fc0d747c146ebede8
These are the question generation datasets collected by TextBox, including: - SQuAD (squadqg) - CoQA (coqaqg) - NewsQA (newsqa) - HotpotQA (hotpotqa) - MS MARCO (marco) - MSQG (msqg) - NarrativeQA (nqa) - QuAC (quac). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Question-Generation
[ "task_categories:text2text-generation", "multilinguality:monolingual", "language:en", "question-generation", "region:us" ]
2022-08-13T01:09:12+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["text2text-generation"], "task_ids": [], "tags": ["question-generation"]}
2023-03-03T14:42:10+00:00
86a29b954ca2c6817c350316fbaf57c6721e3d13
These are the simplification datasets collected by TextBox, including: - WikiAuto + Turk/ASSET (wia-t). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Simplification
[ "task_categories:text2text-generation", "task_ids:text-simplification", "multilinguality:monolingual", "language:en", "region:us" ]
2022-08-13T01:09:27+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["text2text-generation"], "task_ids": ["text-simplification"]}
2022-10-25T05:19:12+00:00
d67ce1053296f292b1497ce239436461aaf71890
These are the story generation datasets collected by TextBox, including: - ROCStories (roc) - WritingPrompts (wp) - Hippocorpus (hc) - WikiPlots (wikip) - ChangeMyView (cmv). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Story-Generation
[ "task_categories:text-generation", "multilinguality:monolingual", "language:en", "story-generation", "region:us" ]
2022-08-13T01:09:37+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["text-generation"], "task_ids": [], "tags": ["story-generation"]}
2023-03-03T14:42:27+00:00
7e2fac7addc9f1f386f0980b04f13e4f3888dbb2
These are the task dialogue datasets collected by TextBox, including: - MultiWOZ 2.0 (multiwoz) - MetaLWOZ (metalwoz) - KVRET (kvret) - WOZ (woz) - CamRest676 (camres676) - Frames (frames) - TaskMaster (taskmaster) - Schema-Guided (schema) - MSR-E2E (e2e_msr). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Task-Dialogue
[ "task_categories:conversational", "task_ids:dialogue-generation", "multilinguality:monolingual", "language:en", "dialogue-response-generation", "task-dialogue", "dialog-response-generation", "region:us" ]
2022-08-13T01:09:47+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["conversational"], "task_ids": ["dialogue-generation"], "tags": ["dialogue-response-generation", "task-dialogue", "dialog-response-generation"]}
2022-10-25T05:16:50+00:00
f2302152a009374e6e9053b39f56e296ef65447a
These are the translation datasets collected by TextBox, including: - WMT14 English-French (wmt14-fr-en) - WMT16 Romanian-English (wmt16-ro-en) - WMT16 German-English (wmt16-de-en) - WMT19 Czech-English (wmt19-cs-en) - WMT13 Spanish-English (wmt13-es-en) - WMT19 Chinese-English (wmt19-zh-en) - WMT19 Russian-English (wmt19-ru-en). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Translation
[ "task_categories:translation", "multilinguality:translation", "language:en", "language:fr", "language:de", "language:cs", "language:es", "language:zh", "language:ru", "region:us" ]
2022-08-13T01:09:56+00:00
{"language": ["en", "fr", "de", "cs", "es", "zh", "ru"], "multilinguality": ["translation"], "task_categories": ["translation"], "task_ids": []}
2022-10-25T05:19:08+00:00
63f215c870e53f469daffe7bc8886c5d2425b7d7
Port of the compas-recidivism dataset from propublica (github [here](https://github.com/propublica/compas-analysis)). See details there and use carefully, as there are serious known social impacts and biases present in this dataset. Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb). The target is the binary outcome `is_recid`. ### Sample usage Load the data: ``` import pandas as pd from datasets import load_dataset dataset = load_dataset("imodels/compas-recidivism") df = pd.DataFrame(dataset['train']) X = df.drop(columns=['is_recid']) y = df['is_recid'].values ``` Fit a model: ``` import imodels import numpy as np m = imodels.FIGSClassifier(max_rules=5) m.fit(X, y) print(m) ``` Evaluate: ``` df_test = pd.DataFrame(dataset['test']) X_test = df_test.drop(columns=['is_recid']) y_test = df_test['is_recid'].values print('accuracy', np.mean(m.predict(X_test) == y_test)) ```
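Before trusting the accuracy number from the snippet above, it helps to compare against a majority-class baseline, since recidivism labels are typically imbalanced. A stdlib-only sketch on an illustrative label vector — the real baseline should be computed on the actual `is_recid` column:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy achieved by always predicting the most frequent label."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Illustrative labels; in practice, pass df['is_recid'] from the snippet above.
y = [0, 0, 1, 0, 1]
print(majority_baseline_accuracy(y))  # 0.6
```

A fitted model should beat this number for its accuracy to be meaningful.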
imodels/compas-recidivism
[ "task_categories:tabular-classification", "size_categories:1K<n<10K", "interpretability", "fairness", "region:us" ]
2022-08-13T02:55:20+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["tabular-classification"], "task_ids": [], "pretty_name": "compas-recidivism", "tags": ["interpretability", "fairness"]}
2022-08-13T03:17:29+00:00
1392e95369e9cb4be0255b3a44c49c35ee18bfc6
# Dataset Card for new_dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://crisisnlp.qcri.org/humaid_dataset - **Repository:** https://crisisnlp.qcri.org/data/humaid/humaid_data_all.zip - **Paper:** https://ojs.aaai.org/index.php/ICWSM/article/view/18116/17919 <!-- - **Leaderboard:** [Needs More Information] --> <!-- - **Point of Contact:** [Needs More Information] --> ### Dataset Summary The HumAID Twitter dataset consists of several thousand manually annotated tweets collected during 19 major natural disaster events, including earthquakes, hurricanes, wildfires, and floods, that occurred from 2016 to 2019 across different parts of the world. The annotations in the provided datasets consist of the following humanitarian categories. The dataset contains English tweets only and is the largest dataset for crisis informatics so far. 
**Humanitarian categories** - Caution and advice - Displaced people and evacuations - Dont know cant judge - Infrastructure and utility damage - Injured or dead people - Missing or found people - Not humanitarian - Other relevant information - Requests or urgent needs - Rescue volunteering or donation effort - Sympathy and support The resulting annotated dataset consists of 11 labels. ### Supported Tasks and Benchmark The dataset can be used to train a model for multiclass tweet classification for disaster response. The benchmark results can be found at https://ojs.aaai.org/index.php/ICWSM/article/view/18116/17919. The dataset is also released in event-wise form and as JSON objects for further research. The full dataset can be found at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/A7NVF7 ### Languages English ## Dataset Structure ### Data Instances ``` { "tweet_text": "@RT_com: URGENT: Death toll in #Ecuador #quake rises to 233 \u2013 President #Correa #1 in #Pakistan", "class_label": "injured_or_dead_people" } ``` ### Data Fields * tweet_text: corresponds to the tweet text. * class_label: corresponds to the label assigned to a given tweet text. ### Data Splits * Train * Development * Test ## Dataset Creation <!-- ### Curation Rationale --> ### Source Data #### Initial Data Collection and Normalization Tweets have been collected during several disaster events. ### Annotations #### Annotation process Amazon Mechanical Turk (AMT) has been used to annotate the dataset. Please check the paper for more detail. #### Who are the annotators? - crowdsourced <!-- ## Considerations for Using the Data --> <!-- ### Social Impact of Dataset --> <!-- ### Discussion of Biases --> <!-- [Needs More Information] --> <!-- ### Other Known Limitations --> <!-- [Needs More Information] --> ## Additional Information ### Dataset Curators Authors of the paper. 
### Licensing Information - cc-by-nc-4.0 ### Citation Information ``` @inproceedings{humaid2020, Author = {Firoj Alam and Umair Qazi and Muhammad Imran and Ferda Ofli}, booktitle={Proceedings of the Fifteenth International AAAI Conference on Web and Social Media}, series={ICWSM~'21}, Keywords = {Social Media, Crisis Computing, Tweet Text Classification, Disaster Response}, Title = {HumAID: Human-Annotated Disaster Incidents Data from Twitter}, Year = {2021}, publisher={AAAI}, address={Online}, } ```
prerona/new_dataset
[ "region:us" ]
2022-08-13T06:32:23+00:00
{}
2022-08-22T14:15:20+00:00
157ec8c8cb91011b3754ec4d26459c19abde3e51
# Dataset Card for Swedish CNN Dailymail Dataset The Swedish CNN/DailyMail dataset is a purely machine-translated version of the English original, intended to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/cnn_dailymail ### Data Fields - `id`: a string containing the hexadecimal-formatted SHA1 hash of the url where the story was retrieved from - `article`: a string containing the body of the news article - `highlights`: a string containing the highlight of the article as written by the article author ### Data Splits The Swedish CNN/DailyMail dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 287,113 | | Validation | 13,368 | | Test | 11,490 |
Gabriel/cnn_daily_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:100K<n<1M", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail", "language:sv", "license:mit", "conditional-text-generation", "region:us" ]
2022-08-13T07:55:53+00:00
{"language": ["sv"], "license": ["mit"], "size_categories": ["100K<n<1M"], "source_datasets": ["https://github.com/huggingface/datasets/tree/master/datasets/cnn_dailymail"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:53:08+00:00
00069d7da55dcca7b4e3743111b9caa3918460ee
# TeTIm-Eval
galatolo/TeTIm-Eval
[ "task_categories:text-to-image", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "curated", "high-quality", "text-to-image", "evaluation", "validation", "region:us" ]
2022-08-13T08:53:36+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "TeTIm-Eval", "tags": ["curated", "high-quality", "text-to-image", "evaluation", "validation"]}
2022-12-15T14:58:24+00:00
7d1910e1d4224fc239757dc96fa4ad41e2130a62
# Dataset Card for Indonesian Question Answering Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fhrzn](https://github.com/fhrzn), [@Kalzaik](https://github.com/Kalzaik), [@ibamibrahim](https://github.com/ibamibrahim), and [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jakartaresearch/indoqa
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:id", "license:cc-by-nd-4.0", "indoqa", "qa", "question-answering", "indonesian", "region:us" ]
2022-08-13T09:54:08+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Indonesian Question Answering Dataset", "tags": ["indoqa", "qa", "question-answering", "indonesian"]}
2022-12-17T06:07:27+00:00
aea2595889bdb0b5b5752d1bf043b1ef056c8e78
# Dataset Card for Swedish Xsum Dataset The Swedish xsum dataset is a purely machine-translated version of the English original, intended to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/xsum ### Data Fields - `id`: a string containing the hexadecimal-formatted SHA1 hash of the url where the story was retrieved from - `document`: a string containing the body of the news article - `summary`: a string containing the summary of the article as written by the article author ### Data Splits The Swedish xsum dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 204,045 | | Validation | 11,332 | | Test | 11,334 |
Gabriel/xsum_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:100K<n<1M", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/xsum", "language:sv", "license:mit", "conditional-text-generation", "region:us" ]
2022-08-13T13:24:10+00:00
{"language": ["sv"], "license": ["mit"], "size_categories": ["100K<n<1M"], "source_datasets": ["https://github.com/huggingface/datasets/tree/master/datasets/xsum"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:53:39+00:00
5e1735b10088c9ef57f3c211bc1182c436a45f47
These are the text style transfer datasets collected by TextBox, including: - GYAFC Entertainment & Music (gyafc_em). - GYAFC Family & Relationships (gyafc_fr). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Style-Transfer
[ "task_categories:other", "multilinguality:monolingual", "language:en", "style-transfer", "region:us" ]
2022-08-13T13:34:29+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["other"], "task_ids": [], "tags": ["style-transfer"]}
2022-10-25T05:18:14+00:00
9ad2c5d8c372485a9899b5b1e980edbd92bc6c57
These are the paraphrase datasets collected by TextBox, including: - Quora (a.k.a., QQP-Pos) (quora). - ParaNMT-small (paranmt). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Paraphrase
[ "task_categories:other", "multilinguality:monolingual", "language:en", "paraphrase", "region:us" ]
2022-08-13T13:34:49+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["other"], "task_ids": [], "tags": ["paraphrase"]}
2022-10-25T05:17:38+00:00
bcaefcdcfbcebeefad75fbb0d378c53e2db03d5b
# Dataset Card for Swedish Gigaword Dataset The Swedish gigaword dataset is a purely machine-translated version of the English original, intended to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/gigaword ### Data Fields - `document`: a string containing the shorter body - `summary`: a string containing the summary of the body ### Data Splits The Swedish gigaword dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 3,700,301 | | Validation | 189,650 | | Test | 1,951 |
Gabriel/gigaword_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:1M<n<3M", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/gigaword", "language:sv", "license:mit", "conditional-text-generation", "region:us" ]
2022-08-13T13:44:07+00:00
{"language": ["sv"], "license": ["mit"], "size_categories": ["1M<n<3M"], "source_datasets": ["https://github.com/huggingface/datasets/tree/master/datasets/gigaword"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:54:02+00:00
89283b8f379028b9079e6968566f669fc33903f7
# Dataset Card for Swedish Wiki_lingua Dataset The Swedish wiki_lingua dataset is a purely machine-translated version of the original, intended to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original multilingual version: https://huggingface.co/datasets/wiki_lingua ### Data details - gem_id: the id for the data instance. - gem_id_parent: the id for the parent data instance. - Document: a string containing the document body. - Summary: a string containing the summary of the body. ### Data Splits The Swedish wiki_lingua dataset follows the same splits as the original version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 95,516 | | Validation | 27,489 | | Test | 13,340 |
Gabriel/wiki_lingua_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:https://github.com/morningmoni/CiteSu", "language:sv", "license:cc-by-sa-3.0", "conditional-text-generation", "region:us" ]
2022-08-13T13:44:24+00:00
{"language": ["sv"], "license": ["cc-by-sa-3.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["https://github.com/morningmoni/CiteSu"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:54:17+00:00
2d0456c69c3158a4d8db10ee0675fdf8972a451c
# Dataset Card for Swedish Citesum Dataset The Swedish citesum dataset is a purely machine-translated version of the English original, intended to improve downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/citesum ### Paper https://arxiv.org/abs/2205.06207 ### Authors Yuning Mao, Ming Zhong, Jiawei Han University of Illinois Urbana-Champaign {yuningm2, mingz5, hanj}@illinois.edu ## Data details - src (string): source text, the long description of the paper - tgt (string): target text, the TLDR of the paper - paper_id (string): unique id for the paper - title (string): title of the paper - discipline (dict): - venue (string): where the paper was published (conference) - journal (string): journal in which the paper was published - mag_field_of_study (list[str]): scientific fields that the paper falls under. ### Data Splits The Swedish citesum dataset follows the same splits as the original English version and has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 83,304 | | Validation | 4,721 | | Test | 4,921 |
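The nested `discipline` field described above is often easier to work with when flattened into top-level columns. A minimal sketch, where the field layout follows the description above and the example values are invented for illustration:

```python
def flatten(example: dict) -> dict:
    """Lift venue/journal/mag_field_of_study out of the nested discipline dict."""
    d = example.get("discipline") or {}
    return {
        "src": example["src"],
        "tgt": example["tgt"],
        "paper_id": example["paper_id"],
        "title": example["title"],
        "venue": d.get("venue"),
        "journal": d.get("journal"),
        "mag_field_of_study": d.get("mag_field_of_study", []),
    }

# Invented example record shaped like the field description above.
example = {
    "src": "lång beskrivning av artikeln ...",
    "tgt": "kort sammanfattning ...",
    "paper_id": "p-001",
    "title": "Exempeltitel",
    "discipline": {"venue": "ACL", "journal": None,
                   "mag_field_of_study": ["Computer Science"]},
}
print(flatten(example)["venue"])  # ACL
```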
Gabriel/citesum_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:https://github.com/morningmoni/CiteSu", "language:sv", "license:cc-by-nc-4.0", "conditional-text-generation", "arxiv:2205.06207", "region:us" ]
2022-08-13T13:45:11+00:00
{"language": ["sv"], "license": ["cc-by-nc-4.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["https://github.com/morningmoni/CiteSu"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:54:21+00:00
9e92850ad9c505e4da2114b62475ad715270da24
Django Dataset for Code Translation Tasks ========================================= *Django* dataset used in the paper [*"Learning to Generate Pseudo-Code from Source Code Using Statistical Machine Translation"*](http://ieeexplore.ieee.org/document/7372045/), Oda et al., ASE, 2015. The Django dataset is a dataset for code generation comprising 16000 training, 1000 development and 1805 test annotations. Each data point consists of a line of Python code together with a manually created natural language description. ```bibtex @inproceedings{oda2015ase:pseudogen1, author = {Oda, Yusuke and Fudaba, Hiroyuki and Neubig, Graham and Hata, Hideaki and Sakti, Sakriani and Toda, Tomoki and Nakamura, Satoshi}, title = {Learning to Generate Pseudo-code from Source Code Using Statistical Machine Translation}, booktitle = {Proceedings of the 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)}, series = {ASE '15}, month = {November}, year = {2015}, isbn = {978-1-5090-0025-8}, pages = {574--584}, numpages = {11}, url = {https://doi.org/10.1109/ASE.2015.36}, doi = {10.1109/ASE.2015.36}, acmid = {2916173}, publisher = {IEEE Computer Society}, address = {Lincoln, Nebraska, USA} } ```
AhmedSSoliman/DJANGO
[ "region:us" ]
2022-08-13T15:44:25+00:00
{}
2022-08-14T13:19:28+00:00
bb7727c857ab980682dee6aece71abfdcf248095
# multi_domain_document_classification Multi-domain document classification datasets. - Biomedical: `chemprot`, `rct_sample` - Computer Science: `citation_intent`, `sciie` - News: `hyperpartisan_news` - Customer Review: `amcd`, `yelp_review` - Social Media: `tweet_eval_irony`, `tweet_eval_hate`, `tweet_eval_emotion` The `yelp_review` dataset is randomly downsampled to 2000/2000/8000 for test/validation/train. | | chemprot | citation_intent | hyperpartisan_news | rct_sample | sciie | amcd | yelp_review | tweet_eval_irony | tweet_eval_hate | tweet_eval_emotion | |:--------------------|-----------:|------------------:|---------------------:|-------------:|--------:|-------:|--------------:|-------------------:|------------------:|---------------------:| | word/validation | 32 | 40 | 502 | 26 | 32 | 20 | 132 | 13 | 24 | 15 | | word/test | 32 | 42 | 612 | 26 | 32 | 19 | 131 | 14 | 21 | 15 | | word/train | 31 | 42 | 536 | 26 | 32 | 19 | 133 | 13 | 20 | 16 | | instance/validation | 2427 | 114 | 64 | 30212 | 455 | 666 | 2000 | 955 | 1000 | 374 | | instance/test | 3469 | 139 | 65 | 30135 | 974 | 1334 | 2000 | 784 | 2970 | 1421 | | instance/train | 4169 | 1688 | 516 | 500 | 3219 | 8000 | 6000 | 2862 | 9000 | 3257 |
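The random downsampling described above can be reproduced along these lines. This is a sketch only: the split sizes follow the card's 2000/2000/8000 description, while the fixed seed and single-shuffle scheme are assumptions rather than the maintainers' actual script:

```python
import random

def downsample(items, n_test=2000, n_val=2000, n_train=8000, seed=0):
    """Shuffle once with a fixed seed, then carve out disjoint
    test/validation/train slices of the requested sizes."""
    rng = random.Random(seed)
    pool = list(items)
    rng.shuffle(pool)
    test = pool[:n_test]
    val = pool[n_test:n_test + n_val]
    train = pool[n_test + n_val:n_test + n_val + n_train]
    return train, val, test

train, val, test = downsample(range(20000))
print(len(train), len(val), len(test))  # 8000 2000 2000
```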
m3/multi_domain_document_classification
[ "region:us" ]
2022-08-13T21:50:55+00:00
{}
2022-08-25T10:25:30+00:00
8add66152bda31045138a0faf77804e0179e0c59
# Dataset Card for Indonesian Sentence Paraphrase Detection ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset is originally from [Microsoft Research Paraphrase Corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398). We translated the text into Indonesian (Bahasa Indonesia) using Google Translate. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Indonesian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers?
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jakartaresearch/id-paraphrase-detection
[ "task_categories:sentence-similarity", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|msrp", "language:id", "license:cc-by-4.0", "msrp", "id-msrp", "paraphrase-detection", "region:us" ]
2022-08-14T00:46:49+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|msrp"], "task_categories": ["sentence-similarity"], "task_ids": [], "pretty_name": "Indonesian Paraphrase Detection", "tags": ["msrp", "id-msrp", "paraphrase-detection"]}
2022-08-14T01:10:33+00:00
c36967abb45f06ff7659849372ab41e01838193e
# Dataset Card for No Language Left Behind (NLLB - 200vo) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** https://arxiv.org/abs/2207.04672 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary This dataset was created based on [metadata](https://github.com/facebookresearch/fairseq/tree/nllb) for mined bitext released by Meta AI. It contains bitext for 148 English-centric and 1465 non-English-centric language pairs using the stopes mining library and the LASER3 encoders (Heffernan et al., 2022). The complete dataset is ~450GB. [CCMatrix](https://opus.nlpl.eu/CCMatrix.php) contains previous versions of the mined bitext.
#### How to use the data There are two ways to access the data: * Via the Hugging Face Python datasets library For accessing a particular [language pair](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py): ``` from datasets import load_dataset dataset = load_dataset("allenai/nllb", "ace_Latn-ban_Latn") ``` * Clone the git repo ``` git lfs install git clone https://huggingface.co/datasets/allenai/nllb ``` ### Supported Tasks and Leaderboards N/A ### Languages Language pairs can be found [here](https://huggingface.co/datasets/allenai/nllb/blob/main/nllb_lang_pairs.py). ## Dataset Structure The dataset contains gzipped tab delimited text files for each direction. Each text file contains lines with parallel sentences. ### Data Instances The number of instances for each language pair can be found in the [dataset_infos.json](https://huggingface.co/datasets/allenai/nllb/blob/main/dataset_infos.json) file. ### Data Fields Every instance for a language pair contains the following fields: 'translation' (containing sentence pairs), 'laser_score', 'source_sentence_lid', 'target_sentence_lid', where 'lid' is language classification probability, 'source_sentence_source', 'source_sentence_url', 'target_sentence_source', 'target_sentence_url'. * Sentence in first language * Sentence in second language * LASER score * Language ID score for first sentence * Language ID score for second sentence * First sentence source (See [Source Data Table](https://huggingface.co/datasets/allenai/nllb#source-data)) * First sentence URL if the source is crawl-data/\*; _ otherwise * Second sentence source * Second sentence URL if the source is crawl-data/\*; _ otherwise The lines are sorted by LASER3 score in decreasing order. 
Example: ``` {'translation': {'ace_Latn': 'Gobnyan hana geupeukeucewa gata atawa geutinggai meunan mantong gata."', 'ban_Latn': 'Ida nenten jaga manggayang wiadin ngutang semeton."'}, 'laser_score': 1.2499876022338867, 'source_sentence_lid': 1.0000100135803223, 'target_sentence_lid': 0.9991400241851807, 'source_sentence_source': 'paracrawl9_hieu', 'source_sentence_url': '_', 'target_sentence_source': 'crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/wet/CC-MAIN-20200219153707-20200219183707-00232.warc.wet.gz', 'target_sentence_url': 'https://alkitab.mobi/tb/Ula/31/6/\n'} ``` ### Data Splits The data is not split. Given the noisy nature of the overall process, we recommend using the data only for training and using other datasets like [Flores-200](https://github.com/facebookresearch/flores) for evaluation. The data includes some development and test sets from other datasets, such as xlsum. In addition, sourcing data from multiple web crawls is likely to produce incidental overlap with other test sets. ## Dataset Creation ### Curation Rationale Data was filtered based on language identification, emoji-based filtering, and, for some high-resource languages, a language model. For more details on data filtering please refer to Section 5.2 (NLLB Team et al., 2022).
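Given the per-instance fields above, a common first step is to keep only the highest-confidence pairs. A minimal sketch over records shaped like the example instance; the threshold values are arbitrary illustrations, not recommendations from the paper:

```python
def filter_pairs(records, min_laser=1.06, min_lid=0.9):
    """Keep bitext pairs whose LASER3 mining score and per-sentence
    language-ID probabilities all clear the given thresholds."""
    return [
        r for r in records
        if r["laser_score"] >= min_laser
        and r["source_sentence_lid"] >= min_lid
        and r["target_sentence_lid"] >= min_lid
    ]

# Two toy records with the confidence fields from the card's example instance.
records = [
    {"laser_score": 1.2499, "source_sentence_lid": 1.0000, "target_sentence_lid": 0.9991},
    {"laser_score": 1.0200, "source_sentence_lid": 0.9800, "target_sentence_lid": 0.9700},
]
kept = filter_pairs(records)
print(len(kept))  # 1
```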
### Source Data #### Initial Data Collection and Normalization Monolingual data was collected from the following sources: | Name in data | Source | |------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | afriberta | https://github.com/castorini/afriberta | | americasnlp | https://github.com/AmericasNLP/americasnlp2021/ | | bho_resources | https://github.com/shashwatup9k/bho-resources | | crawl-data/* | WET files from https://commoncrawl.org/the-data/get-started/ | | emcorpus | http://lepage-lab.ips.waseda.ac.jp/en/projects/meiteilon-manipuri-language-resources/ | | fbseed20220317 | https://github.com/facebookresearch/flores/tree/main/nllb_seed | | 
giossa_mono | https://github.com/sgongora27/giossa-gongora-guarani-2021 | | iitguwahati | https://github.com/priyanshu2103/Sanskrit-Hindi-Machine-Translation/tree/main/parallel-corpus | | indic | https://indicnlp.ai4bharat.org/corpora/ | | lacunaner | https://github.com/masakhane-io/lacuna_pos_ner/tree/main/language_corpus | | leipzig | Community corpora from https://wortschatz.uni-leipzig.de/en/download for each year available | | lowresmt2020 | https://github.com/panlingua/loresmt-2020 | | masakhanener | https://github.com/masakhane-io/masakhane-ner/tree/main/MasakhaNER2.0/data | | nchlt | https://repo.sadilar.org/handle/20.500.12185/299 <br>https://repo.sadilar.org/handle/20.500.12185/302 <br>https://repo.sadilar.org/handle/20.500.12185/306 <br>https://repo.sadilar.org/handle/20.500.12185/308 <br>https://repo.sadilar.org/handle/20.500.12185/309 <br>https://repo.sadilar.org/handle/20.500.12185/312 <br>https://repo.sadilar.org/handle/20.500.12185/314 <br>https://repo.sadilar.org/handle/20.500.12185/315 <br>https://repo.sadilar.org/handle/20.500.12185/321 <br>https://repo.sadilar.org/handle/20.500.12185/325 <br>https://repo.sadilar.org/handle/20.500.12185/328 <br>https://repo.sadilar.org/handle/20.500.12185/330 <br>https://repo.sadilar.org/handle/20.500.12185/332 <br>https://repo.sadilar.org/handle/20.500.12185/334 <br>https://repo.sadilar.org/handle/20.500.12185/336 <br>https://repo.sadilar.org/handle/20.500.12185/337 <br>https://repo.sadilar.org/handle/20.500.12185/341 <br>https://repo.sadilar.org/handle/20.500.12185/343 <br>https://repo.sadilar.org/handle/20.500.12185/346 <br>https://repo.sadilar.org/handle/20.500.12185/348 <br>https://repo.sadilar.org/handle/20.500.12185/353 <br>https://repo.sadilar.org/handle/20.500.12185/355 <br>https://repo.sadilar.org/handle/20.500.12185/357 <br>https://repo.sadilar.org/handle/20.500.12185/359 <br>https://repo.sadilar.org/handle/20.500.12185/362 <br>https://repo.sadilar.org/handle/20.500.12185/364 | | paracrawl-2022-* | 
https://data.statmt.org/paracrawl/monolingual/ | | paracrawl9* | https://paracrawl.eu/moredata the monolingual release | | pmi | https://data.statmt.org/pmindia/ | | til | https://github.com/turkic-interlingua/til-mt/tree/master/til_corpus | | w2c | https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9 | | xlsum | https://github.com/csebuetnlp/xl-sum | #### Who are the source language producers? Text was collected from the web and various monolingual data sets, many of which are also web crawls. This may have been written by people, generated by templates, or in some cases be machine translation output. ### Annotations #### Annotation process Parallel sentences in the monolingual data were identified using LASER3 encoders. (Heffernan et al., 2022) #### Who are the annotators? The data was not human annotated. ### Personal and Sensitive Information Data may contain personally identifiable information, sensitive content, or toxic content that was publicly shared on the Internet. ## Considerations for Using the Data ### Social Impact of Dataset This dataset provides data for training machine learning systems for many languages that have low resources available for NLP. ### Discussion of Biases Biases in the data have not been specifically studied; however, as the original source of the data is the World Wide Web, it is likely that the data has biases similar to those prevalent on the Internet. The data may also exhibit biases introduced by language identification and data filtering techniques; lower-resource languages generally have lower accuracy. ### Other Known Limitations Some of the translations are in fact machine translations. While some website machine translation tools are identifiable from HTML source, these tools were not filtered out en masse because raw HTML was not available from some sources and CommonCrawl processing started from WET files. ## Additional Information ### Dataset Curators The data was not curated.
### Licensing Information The dataset is released under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound to the respective Terms of Use and License of the original source. ### Citation Information Schwenk et al, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web. ACL https://aclanthology.org/2021.acl-long.507/ Heffernan et al, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages. Arxiv https://arxiv.org/abs/2205.12654, 2022.<br> NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv https://arxiv.org/abs/2207.04672, 2022. ### Contributions We thank the NLLB Meta AI team for open-sourcing the metadata and instructions on how to use it, with special thanks to Bapi Akula, Pierre Andrews, Onur Çelebi, Sergey Edunov, Kenneth Heafield, Philipp Koehn, Alex Mourachko, Safiyyah Saleem, Holger Schwenk, and Guillaume Wenzek. We also thank the AllenNLP team at AI2 for hosting and releasing this data, including Akshita Bhagia (for engineering efforts to host the data, and create the huggingface dataset), and Jesse Dodge (for organizing the connection).
allenai/nllb
[ "arxiv:2207.0467", "arxiv:2205.12654", "arxiv:2207.04672", "region:us" ]
2022-08-14T01:02:15+00:00
{}
2022-09-29T17:53:15+00:00
e5415abfbccf475e0dca0ab00b0e11d605eb253f
MapleWish/LUNA16_subsets
[ "license:cc", "region:us" ]
2022-08-14T03:17:27+00:00
{"license": "cc"}
2022-08-14T03:17:27+00:00
60e03f1f98b19e519c271891caea6d1e020095f4
# Dataset Card for SemEval Task 12: Aspect-based Sentiment Analysis ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is originally from [SemEval-2015 Task 12](https://alt.qcri.org/semeval2015/task12/). From the page: > SE-ABSA15 will focus on the same domains as SE-ABSA14 (restaurants and laptops). However, unlike SE-ABSA14, the input datasets of SE-ABSA15 will contain entire reviews, not isolated (potentially out of context) sentences. SE-ABSA15 consolidates the four subtasks of SE-ABSA14 within a unified framework. In addition, SE-ABSA15 will include an out-of-domain ABSA subtask, involving test data from a domain unknown to the participants, other than the domains that will be considered during training. In particular, SE-ABSA15 consists of the following two subtasks.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
jakartaresearch/semeval-absa
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "aspect-based-sentiment-analysis", "semeval", "semeval2015", "region:us" ]
2022-08-14T04:35:35+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "SemEval 2015: Aspect-based Sentiment Analysis", "tags": ["aspect-based-sentiment-analysis", "semeval", "semeval2015"]}
2022-08-14T04:38:21+00:00
bd173fe2c8ed0dccd47acb4eda77542593651622
# Zeroth-Korean ## Dataset Description - **Homepage:** [OpenSLR](https://www.openslr.org/40/) - **Repository:** [goodatlas/zeroth](https://github.com/goodatlas/zeroth) - **Download Size** 2.68 GiB - **Generated Size** 2.85 GiB - **Total Size** 5.52 GiB ## Zeroth-Korean The data set contains transcribed audio data for Korean. There are 51.6 hours of transcribed Korean audio for training data (22,263 utterances, 105 people, 3000 sentences) and 1.2 hours of transcribed Korean audio for testing data (457 utterances, 10 people). This corpus also contains a pre-trained/designed language model, a lexicon, and a morpheme-based segmenter (Morfessor). The Zeroth project introduces a free Korean speech corpus and aims to make Korean speech recognition more broadly accessible to everyone. This project was developed in collaboration between Lucas Jo (@Atlas Guide Inc.) and Wonkyum Lee (@Gridspace Inc.). Contact: Lucas Jo ([email protected]), Wonkyum Lee ([email protected]) ### License CC BY 4.0 ## Dataset Structure ### Data Instance ```pycon >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/zeroth-korean") >>> dataset DatasetDict({ train: Dataset({ features: ['audio', 'text'], num_rows: 22263 }) test: Dataset({ features: ['text', 'audio'], num_rows: 457 }) }) ``` ### Data Size download: 2.68 GiB<br> generated: 2.85 GiB<br> total: 5.52 GiB ### Data Fields - audio: `audio`, sampling rate = 16000 - A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. - Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- text: `string` ```pycon >>> dataset["train"][0] {'audio': {'path': None, 'array': array([-3.0517578e-05, 0.0000000e+00, -3.0517578e-05, ..., 0.0000000e+00, 0.0000000e+00, -6.1035156e-05], dtype=float32), 'sampling_rate': 16000}, 'text': '인사를 결정하는 과정에서 당 지도부가 우 원내대표 및 원내지도부와 충분한 상의를 거치지 않은 채 일방적으로 인사를 했다는 불만도 원내지도부를 중심으로 흘러나왔다'} ``` ### Data Splits | | train | test | | ---------- | -------- | ----- | | # of data | 22263 | 457 |
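Since each sample exposes the decoded array alongside its sampling rate, clip duration can be derived directly from those two fields. A small sketch with a hypothetical sample in the same shape as `dataset["train"][0]["audio"]`:

```python
import numpy as np

# a hypothetical decoded sample (two seconds of silence stands in for real audio)
audio = {
    "path": None,
    "array": np.zeros(16000 * 2, dtype=np.float32),
    "sampling_rate": 16000,
}
duration_sec = len(audio["array"]) / audio["sampling_rate"]
```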
Bingsu/zeroth-korean
[ "task_categories:automatic-speech-recognition", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|kresnik/zeroth_korean", "language:ko", "license:cc-by-4.0", "region:us" ]
2022-08-14T07:50:33+00:00
{"language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|kresnik/zeroth_korean"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "zeroth-korean"}
2022-08-15T09:30:30+00:00
d2731ab913ca384272c40df4e274e30e4d2ea657
thientran/test_dataset_s_v_a_f
[ "license:unknown", "region:us" ]
2022-08-14T08:52:04+00:00
{"license": "unknown"}
2022-08-14T08:52:04+00:00
bb8ba14d41628040be189dd1bac394d94bf0163c
# AutoTrain Dataset for project: favs_bot ## Dataset Description This dataset has been automatically processed by AutoTrain for project favs_bot. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_id": "13104", "tokens": [ "Jackie", "Frank" ], "feat_pos_tags": [ 21, 21 ], "feat_chunk_tags": [ 5, 16 ], "tags": [ 3, 7 ] }, { "feat_id": "9297", "tokens": [ "U.S.", "lauds", "Russian-Chechen", "deal", "." ], "feat_pos_tags": [ 21, 20, 15, 20, 7 ], "feat_chunk_tags": [ 5, 16, 16, 16, 22 ], "tags": [ 0, 8, 1, 8, 8 ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_id": "Value(dtype='string', id=None)", "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "feat_pos_tags": "Sequence(feature=ClassLabel(num_classes=47, names=['\"', '#', '$', \"''\", '(', ')', ',', '.', ':', 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP$', 'WRB', '``'], id=None), length=-1, id=None)", "feat_chunk_tags": "Sequence(feature=ClassLabel(num_classes=23, names=['B-ADJP', 'B-ADVP', 'B-CONJP', 'B-INTJ', 'B-LST', 'B-NP', 'B-PP', 'B-PRT', 'B-SBAR', 'B-UCP', 'B-VP', 'I-ADJP', 'I-ADVP', 'I-CONJP', 'I-INTJ', 'I-LST', 'I-NP', 'I-PP', 'I-PRT', 'I-SBAR', 'I-UCP', 'I-VP', 'O'], id=None), length=-1, id=None)", "tags": "Sequence(feature=ClassLabel(num_classes=9, names=['B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-LOC', 'I-MISC', 'I-ORG', 'I-PER', 'O'], id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 10013 | | valid | 4029 |
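The integer class IDs in `tags` index into the `ClassLabel` names listed above. A minimal sketch of decoding them back to label strings, using the second sample shown in Data Instances:

```python
# label names copied from the "tags" feature definition above
ner_names = ["B-LOC", "B-MISC", "B-ORG", "B-PER",
             "I-LOC", "I-MISC", "I-ORG", "I-PER", "O"]

sample = {"tokens": ["U.S.", "lauds", "Russian-Chechen", "deal", "."],
          "tags": [0, 8, 1, 8, 8]}
decoded = [ner_names[t] for t in sample["tags"]]
pairs = list(zip(sample["tokens"], decoded))
```

The same indexing works for `feat_pos_tags` and `feat_chunk_tags` with their own name lists.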
thientran/autotrain-data-favs_bot
[ "language:en", "region:us" ]
2022-08-14T08:57:34+00:00
{"language": ["en"]}
2022-08-16T02:18:04+00:00
3aa769fa56fc7bb99fe6ad6729e9c777f361823f
# Dataset Card for Swedish PubMed Dataset The Swedish PubMed dataset has only been machine-translated, with the aim of improving downstream fine-tuning on Swedish summarization tasks. ## Dataset Summary Read the full details in the original English version: https://huggingface.co/datasets/pubmed ### Data Fields - `document`: a string containing the body of the paper - `summary`: a string containing the abstract of the paper ### Data Splits The Swedish PubMed dataset follows the splits of the original English version and has one split: _train_. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 90,000 |
Gabriel/pubmed_swe
[ "task_categories:summarization", "task_categories:text2text-generation", "size_categories:10K<n<100K", "source_datasets:https://github.com/huggingface/datasets/tree/master/datasets/pubmed", "language:sv", "license:other", "conditional-text-generation", "region:us" ]
2022-08-14T13:06:26+00:00
{"language": ["sv"], "license": ["other"], "size_categories": ["10K<n<100K"], "source_datasets": ["https://github.com/huggingface/datasets/tree/master/datasets/pubmed"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-29T10:54:25+00:00
edd09e033e99b17820e255e0b277b4ac365bb85e
This dataset is a fork of [librispeech_asr](https://huggingface.co/datasets/librispeech_asr) that exposes each original part (like train-clean-100) as its own split (named `train.clean.100`, with dots instead of hyphens), which allows you to download each part separately. This fork also reports an accurate `path` for each sample.
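A sketch of the naming convention (the mapping from upstream hyphenated names to this fork's dotted split names), with the actual download call left as a comment since it needs the `datasets` library and network access:

```python
# dotted split names used by this fork: hyphens in the upstream names become dots
upstream = ["train-clean-100", "train-clean-360", "dev-other", "test-clean"]
fork_splits = [name.replace("-", ".") for name in upstream]

# one part can then be downloaded on its own, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("darkproger/librispeech_asr", split="train.clean.100")
```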
darkproger/librispeech_asr
[ "license:cc-by-4.0", "region:us" ]
2022-08-14T13:14:16+00:00
{"license": "cc-by-4.0"}
2022-08-14T15:46:17+00:00
191ab1f0aa68d52f6cd55d68df57849fad1751ca
Port of the diabetes-readmission dataset from UCI (link [here](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008)). See details there and use carefully. Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb). The target is the binary outcome `readmitted`. ### Sample usage Load the data: ``` import pandas as pd from datasets import load_dataset dataset = load_dataset("imodels/diabetes-readmission") df = pd.DataFrame(dataset['train']) X = df.drop(columns=['readmitted']) y = df['readmitted'].values ``` Fit a model: ``` import imodels import numpy as np m = imodels.FIGSClassifier(max_rules=5) m.fit(X, y) print(m) ``` Evaluate: ``` df_test = pd.DataFrame(dataset['test']) X_test = df_test.drop(columns=['readmitted']) y_test = df_test['readmitted'].values print('accuracy', np.mean(m.predict(X_test) == y_test)) ```
imodels/diabetes-readmission
[ "task_categories:tabular-classification", "size_categories:100K<n<1M", "interpretability", "fairness", "medicine", "region:us" ]
2022-08-14T14:19:27+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["tabular-classification"], "task_ids": [], "pretty_name": "diabetes-readmission", "tags": ["interpretability", "fairness", "medicine"]}
2022-08-14T14:38:59+00:00
aa2d71d4fb7c056745552c6b401f626e601f22a4
Port of the credit-card dataset from UCI (link [here](https://www.kaggle.com/datasets/uciml/default-of-credit-card-clients-dataset)). See details there and use carefully. Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb). The target is the binary outcome `default.payment.next.month`. ### Sample usage Load the data: ``` import pandas as pd from datasets import load_dataset dataset = load_dataset("imodels/credit-card") df = pd.DataFrame(dataset['train']) X = df.drop(columns=['default.payment.next.month']) y = df['default.payment.next.month'].values ``` Fit a model: ``` import imodels import numpy as np m = imodels.FIGSClassifier(max_rules=5) m.fit(X, y) print(m) ``` Evaluate: ``` df_test = pd.DataFrame(dataset['test']) X_test = df_test.drop(columns=['default.payment.next.month']) y_test = df_test['default.payment.next.month'].values print('accuracy', np.mean(m.predict(X_test) == y_test)) ```
imodels/credit-card
[ "task_categories:tabular-classification", "size_categories:10K<n<100K", "interpretability", "fairness", "medicine", "region:us" ]
2022-08-14T14:33:53+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["tabular-classification"], "task_ids": [], "pretty_name": "credit-card", "tags": ["interpretability", "fairness", "medicine"]}
2022-08-14T14:37:54+00:00
9748d6d102a17a4267cbc2171adad990fab472bf
## Concode dataset A large dataset with over 100,000 examples consisting of Java classes from online code repositories; the accompanying paper develops a new encoder-decoder architecture that models the interaction between the method documentation and the class environment. The Concode dataset is a widely used code generation dataset from Iyer's EMNLP 2018 paper [Mapping Language to Code in Programmatic Context](https://www.aclweb.org/anthology/D18-1192.pdf). Data statistics of the Concode dataset are shown in the table below: | | #Examples | | --------- | :---------: | | Train | 100,000 | | Validation | 2,000 | | Test | 2,000 | ## Data Format The code corpus is saved in JSON Lines format; each line is a JSON object: ``` { "nl": "Increment this vector in this place. con_elem_sep double[] vecElement con_elem_sep double[] weights con_func_sep void add(double)", "code": "public void inc ( ) { this . add ( 1 ) ; }" } ``` `nl` combines the natural language description and the class environment. Elements in the class environment are separated by special tokens like `con_elem_sep` and `con_func_sep`. ## Task Definition Generate the source code of class member functions in Java, given a natural language description and the class environment. The class environment is the programmatic context provided by the rest of the class, including other member variables and member functions in the class. Models are evaluated by exact match and BLEU. It's a challenging task because the desired code can vary greatly depending on the functionality the class provides.
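The separator tokens in `nl` make it straightforward to split the description from the class environment. A minimal sketch of parsing one corpus line (using the sample object shown above):

```python
import json

# one line from the corpus (the sample shown in Data Format)
line = ('{"nl": "Increment this vector in this place. con_elem_sep double[] vecElement '
        'con_elem_sep double[] weights con_func_sep void add(double)", '
        '"code": "public void inc ( ) { this . add ( 1 ) ; }"}')
obj = json.loads(line)

# the NL description is everything before the first environment separator;
# the remaining chunks are class-environment elements
description = obj["nl"].split("con_elem_sep")[0].strip()
env = [part.strip() for part in obj["nl"].split("con_elem_sep")[1:]]
```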
## Reference Concode dataset: <pre><code>@article{iyer2018mapping, title={Mapping language to code in programmatic context}, author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke}, journal={arXiv preprint arXiv:1808.09588}, year={2018} }</code></pre>
AhmedSSoliman/CodeXGLUE-CONCODE
[ "region:us" ]
2022-08-14T14:58:27+00:00
{}
2022-09-13T13:47:15+00:00
bba1f10a0b7a6c258e10fd5c5ae09dc4a47e7a75
# Dataset Card for Data Science Job Salaries ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/ruchi798/data-science-job-salaries - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Content | Column | Description | |--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | work_year | The year the salary was paid. | | experience_level | The experience level in the job during the year, with the following possible values: EN Entry-level / Junior, MI Mid-level / Intermediate, SE Senior-level / Expert, EX Executive-level / Director | | employment_type | The type of employment for the role: PT Part-time, FT Full-time, CT Contract, FL Freelance | | job_title | The role worked in during the year.
| | salary | The total gross salary amount paid. | | salary_currency | The currency of the salary paid as an ISO 4217 currency code. | | salary_in_usd | The salary in USD (FX rate divided by avg. USD rate for the respective year via fxdata.foorilla.com). | | employee_residence | Employee's primary country of residence during the work year as an ISO 3166 country code. | | remote_ratio | The overall amount of work done remotely; possible values are as follows: 0 No remote work (less than 20%), 50 Partially remote, 100 Fully remote (more than 80%) | | company_location | The country of the employer's main office or contracting branch as an ISO 3166 country code. | | company_size | The average number of people that worked for the company during the year: S less than 50 employees (small), M 50 to 250 employees (medium), L more than 250 employees (large) | ### Acknowledgements I'd like to thank ai-jobs.net Salaries for aggregating this data! ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@ruchi798](https://kaggle.com/ruchi798) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
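The categorical codes in the Content table can be expanded into readable labels when exploring the data. A small sketch — the code tables are transcribed from the column descriptions above, and the `row` is a hypothetical record:

```python
# code tables transcribed from the Content section
EXPERIENCE = {"EN": "Entry-level / Junior", "MI": "Mid-level / Intermediate",
              "SE": "Senior-level / Expert", "EX": "Executive-level / Director"}
EMPLOYMENT = {"PT": "Part-time", "FT": "Full-time", "CT": "Contract", "FL": "Freelance"}
REMOTE = {0: "No remote work (less than 20%)", 50: "Partially remote",
          100: "Fully remote (more than 80%)"}

row = {"experience_level": "SE", "employment_type": "FT", "remote_ratio": 100}
readable = (EXPERIENCE[row["experience_level"]],
            EMPLOYMENT[row["employment_type"]],
            REMOTE[row["remote_ratio"]])
```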
hugginglearners/data-science-job-salaries
[ "license:cc0-1.0", "region:us" ]
2022-08-14T23:00:27+00:00
{"license": ["cc0-1.0"], "kaggle_id": "ruchi798/data-science-job-salaries"}
2022-08-17T17:42:40+00:00
a8ea5b9fe8851acd50fc14b5ab54cca61a4dbf04
# ECHR Cases The original data from [Chalkidis et al.](https://arxiv.org/abs/1906.02059), sourced from [archive.org](https://archive.org/details/ECHR-ACL2019). ## Preprocessing * Order is shuffled * Fact numbers preceding each fact are removed (using the Python regex `^[0-9]+\. `), as some cases didn't have fact numbers to begin with * Everything else is the same
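The fact-number removal above can be sketched directly with the quoted regex (the example facts are made up for illustration):

```python
import re

# the fact-number pattern quoted in the Preprocessing section
FACT_NUMBER = re.compile(r"^[0-9]+\. ")

facts = [
    "12. The applicant was born in 1964.",          # numbered fact
    "The applicant complained under Article 6.",    # already unnumbered
]
cleaned = [FACT_NUMBER.sub("", fact) for fact in facts]
```

Facts that never had a number pass through unchanged, which matches the note that some cases lacked fact numbers to begin with.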
jonathanli/echr
[ "license:cc-by-nc-sa-4.0", "arxiv:1906.02059", "region:us" ]
2022-08-15T00:35:16+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-08-21T22:29:28+00:00
ab5a35857580420f3fbf28169bfe3f804d9284c1
# Popular Surname Nationality Mapping A sample of popular surnames for 30+ countries, labeled with nationality (language).
Hobson/surname-nationality
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:named-entity-recognition", "size_categories:List[str]", "source_datasets:List[str]", "license:mit", "multilingual", "RNN", "name", "tagging", "nlp", "transliterated", "character-level", "text-tagging", "bias", "classification", "language model", "surname", "ethnicity", "multilabel classification", "natural language", "region:us" ]
2022-08-15T02:52:58+00:00
{"license": "mit", "size_categories": "List[str]", "source_datasets": "List[str]", "task_categories": ["token-classification", "text-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Popular Surname Nationality Mapping", "tags": ["multilingual", "RNN", "name", "tagging", "nlp", "transliterated", "character-level", "text-tagging", "bias", "classification", "language model", "surname", "ethnicity", "multilabel classification", "natural language"]}
2022-12-29T23:14:09+00:00
a5b444f752b9be3f66feda3720cc0344a1593d20
# Dataset Card for SentiNews ## Dataset Description - **Homepage:** https://github.com/19Joey85/Sentiment-annotated-news-corpus-and-sentiment-lexicon-in-Slovene - **Paper:** Bučar, J., Žnidaršič, M. & Povh, J. Annotated news corpora and a lexicon for sentiment analysis in Slovene. Lang Resources & Evaluation 52, 895–919 (2018). https://doi.org/10.1007/s10579-018-9413-3 ### Dataset Summary SentiNews is a Slovenian sentiment classification dataset, consisting of news articles manually annotated with their sentiment by between two and six annotators. It is annotated at three granularities: - document-level (config `document_level`, 10 427 documents), - paragraph-level (config `paragraph_level`, 89 999 paragraphs), and - sentence-level (config `sentence_level`, 168 899 sentences). ### Supported Tasks and Leaderboards Sentiment classification, three classes (negative, neutral, positive). ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the sentence-level config: ``` { 'nid': 2, 'content': 'Vilo Prešeren je na dražbi ministrstva za obrambo kupilo nepremičninsko podjetje Condor Real s sedežem v Lescah.', 'sentiment': 'neutral', 'pid': 1, 'sid': 1 } ``` ### Data Fields The data fields are similar among all three configs, with the only difference being the IDs. - `nid`: a uint16 containing a unique ID of the news article (document). - `content`: a string containing the body of the news article - `sentiment`: the sentiment of the instance - `pid`: a uint8 containing the consecutive number of the paragraph inside the current news article, **not unique** (present in the configs `paragraph_level` and `sentence_level`) - `sid`: a uint8 containing the consecutive number of the sentence inside the current paragraph, **not unique** (present in the config `sentence_level`) ## Additional Information ### Dataset Curators Jože Bučar, Martin Žnidaršič, Janez Povh. 
### Licensing Information CC BY-SA 4.0 ### Citation Information ``` @article{buvcar2018annotated, title={Annotated news corpora and a lexicon for sentiment analysis in Slovene}, author={Bu{\v{c}}ar, Jo{\v{z}}e and {\v{Z}}nidar{\v{s}}i{\v{c}}, Martin and Povh, Janez}, journal={Language Resources and Evaluation}, volume={52}, number={3}, pages={895--919}, year={2018}, publisher={Springer} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
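Because `pid` and `sid` are only consecutive within their parent (as noted in Data Fields), a globally unique sentence identifier must combine all three IDs. A small sketch — the first record matches the sample instance above, and the other two are hypothetical:

```python
# pid/sid restart within each article/paragraph, so a globally unique
# sentence key is the (nid, pid, sid) triple
sentences = [
    {"nid": 2, "pid": 1, "sid": 1, "sentiment": "neutral"},
    {"nid": 2, "pid": 1, "sid": 2, "sentiment": "negative"},   # hypothetical
    {"nid": 3, "pid": 1, "sid": 1, "sentiment": "positive"},   # same pid/sid, different article
]
keys = [(s["nid"], s["pid"], s["sid"]) for s in sentences]
all_unique = len(set(keys)) == len(keys)
```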
cjvt/sentinews
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:sl", "license:cc-by-sa-4.0", "slovenian sentiment", "news articles", "region:us" ]
2022-08-15T07:32:30+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "SentiNews", "tags": ["slovenian sentiment", "news articles"]}
2022-08-17T05:28:13+00:00
0ffdf305a38276633bb2dbfb6096570398f73073
jokerak/imagenet100
[ "license:apache-2.0", "region:us" ]
2022-08-15T07:44:42+00:00
{"license": "apache-2.0"}
2022-08-15T10:51:06+00:00
d766cb8a7497d0d507d81f5f681a8d58deedf495
# Dataset Card for broad_twitter_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Repository:** [https://github.com/GateNLP/broad_twitter_corpus](https://github.com/GateNLP/broad_twitter_corpus) - **Paper:** [http://www.aclweb.org/anthology/C16-1111](http://www.aclweb.org/anthology/C16-1111) - **Leaderboard:** [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) - **Point of Contact:** [Leon Derczynski](https://github.com/leondz) ### Dataset Summary This is the Broad Twitter corpus, a dataset of tweets collected over stratified times, places and social uses. The goal is to represent a broad range of activities, giving a dataset more representative of the language used in this hardest of social media formats to process. Further, the BTC is annotated for named entities. 
See the paper, [Broad Twitter Corpus: A Diverse Named Entity Recognition Resource](http://www.aclweb.org/anthology/C16-1111), for details. ### Supported Tasks and Leaderboards * Named Entity Recognition * On PWC: [Named Entity Recognition on Broad Twitter Corpus](https://paperswithcode.com/sota/named-entity-recognition-on-broad-twitter) ### Languages English from UK, US, Australia, Canada, Ireland, New Zealand; `bcp47:en` ## Dataset Structure ### Data Instances Feature |Count ---|---: Documents |9 551 Tokens |165 739 Person entities |5 271 Location entities |3 114 Organization entities |3 732 ### Data Fields Each tweet contains an ID, a list of tokens, and a list of NER tags - `id`: a `string` feature. - `tokens`: a `list` of `strings` - `ner_tags`: a `list` of class IDs (`int`s) representing the NER class: ``` 0: O 1: B-PER 2: I-PER 3: B-ORG 4: I-ORG 5: B-LOC 6: I-LOC ``` ### Data Splits Section|Region|Collection period|Description|Annotators|Tweet count ---|---|---|---|---|---: A | UK| 2012.01| General collection |Expert| 1000 B |UK |2012.01-02 |Non-directed tweets |Expert |2000 E |Global| 2014.07| Related to MH17 disaster| Crowd & expert |200 F |Stratified |2009-2014| Twitterati |Crowd & expert |2000 G |Stratified| 2011-2014| Mainstream news| Crowd & expert| 2351 H |Non-UK| 2014 |General collection |Crowd & expert |2000 The most varied parts of the BTC are sections F and H. However, each of the remaining four sections has some specific readily-identifiable bias. So, we propose that one uses half of section H for evaluation and leaves the other half in the training data. Section H should be partitioned in the order of the JSON-format lines. Note that the CoNLL-format data is readily reconstructible from the JSON format, which is the authoritative data format from which others are derived. **Test**: Section F **Development**: Section H (the paper says "second half of Section H" but ordinality could be ambiguous, so it all goes in. 
Bonne chance) **Training**: everything else ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution 4.0 International (CC BY 4.0) ### Citation Information ``` @inproceedings{derczynski2016broad, title={Broad twitter corpus: A diverse named entity recognition resource}, author={Derczynski, Leon and Bontcheva, Kalina and Roberts, Ian}, booktitle={Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, pages={1169--1179}, year={2016} } ``` ### Contributions Author-added dataset [@leondz](https://github.com/leondz)
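The class-ID mapping in Data Fields can be applied directly to decode `ner_tags`. A minimal sketch with a hypothetical instance in the dataset's format:

```python
# index-to-label mapping copied from the Data Fields section
btc_labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

# a hypothetical instance (not an actual tweet from the corpus)
tokens = ["Leon", "visited", "Sheffield"]
ner_tags = [1, 0, 5]
decoded = [btc_labels[t] for t in ner_tags]
```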
GateNLP/broad_twitter_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-08-15T09:47:44+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "broad-twitter-corpus", "pretty_name": "Broad Twitter Corpus"}
2022-07-01T14:46:36+00:00
17fdc41d9ebf968bef3e189c21a4a1fdda09b430
# Oscar EN 2M Embeddings This dataset contains 2M sentences extracted from the English subset of the OSCAR dataset, encoded into sentence embeddings using the `sentence-transformers/all-MiniLM-L6-v2` model.
jamescalam/oscar-en-minilm-2m
[ "task_categories:sentence-similarity", "annotations_creators:no-annotation", "language_creators:other", "size_categories:1M<n<10M", "source_datasets:extended|oscar", "language:en", "license:afl-3.0", "embeddings", "vector search", "semantic similarity", "semantic search", "sentence transformers", "sentence similarity", "region:us" ]
2022-08-15T12:08:44+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": [], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|oscar"], "task_categories": ["sentence-similarity"], "task_ids": [], "pretty_name": "OSCAR MiniLM Embeddings 2M", "tags": ["embeddings", "vector search", "semantic similarity", "semantic search", "sentence transformers", "sentence similarity"]}
2022-08-15T17:19:16+00:00
df897ab78dbb597074d5c1b6c2f6a28ad7e579cf
Corran/Pubmed-OpenAccess-Commercial-Use
[ "license:other", "region:us" ]
2022-08-15T14:06:13+00:00
{"license": "other"}
2022-11-16T00:29:32+00:00
74a8e982a0dfbbfb32dd853936e22a967c0be7c1
dms2ect/wikipedia_character_abstracts
[ "license:apache-2.0", "region:us" ]
2022-08-15T14:13:24+00:00
{"license": "apache-2.0"}
2022-08-15T14:16:52+00:00
ad1769db777807a5883537be08df160ef76e0e7a
A continuous data scrape of arXiv and Google Scholar papers on quantum machine learning, particularly regarding climate.
shwetha729/quantum-machine-learning
[ "license:gpl", "region:us" ]
2022-08-16T00:05:17+00:00
{"license": "gpl"}
2022-08-16T00:08:21+00:00
acf22cd6ed86872a965a5d55ed4c7431853aa2ba
---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- other
multilinguality:
- monolingual
pretty_name: TD_dataset
task_categories:
- translation
task_ids:
- disfluency-detection
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  config_name: TD_dataset
---

# Dataset Card for TD_dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-instances)
  - [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

A dataset for the Tunisian dialect.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

Tunisian Arabic dialect

## Dataset Structure

### Data Instances

Size of downloaded dataset files: 4.63 MB

Size of the generated dataset: 9.78 MB

Total amount of disk used: 14.41 MB

### Data Fields

[Needs More Information]

### Data Splits

[Needs More Information]

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

[Needs More Information]
EmnaBou/TD_dataset
[ "region:us" ]
2022-08-16T09:59:30+00:00
{}
2022-11-24T09:54:52+00:00