sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
e52697ce08d3d44daa33e0d252c872b51b394625 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Image Classification
* Model: autoevaluate/image-multi-class-classification-not-evaluated
* Dataset: autoevaluate/mnist-sample
* Config: autoevaluate--mnist-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-8549c8be-1ee3-4cf8-990c-ffe8e4ea051d-119115 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:12:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/mnist-sample"], "eval_info": {"task": "image_multi_class_classification", "model": "autoevaluate/image-multi-class-classification-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/mnist-sample", "dataset_config": "autoevaluate--mnist-sample", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}} | 2022-12-02T16:13:16+00:00 |
55dba4bb81abbd9630613670c5d99d8f1e6f4441 |
# NLU Few-shot Benchmark - English and German
This is a few-shot training dataset from the domain of human-robot interaction.
It contains texts in German and English with 64 different utterances (classes).
Each utterance (class) has exactly 20 samples in the training set.
This leads to a total of 1280 different training samples.
The dataset is intended to benchmark the intent classifiers of chatbots in English and especially in German.
It builds on our
[deutsche-telekom/NLU-Evaluation-Data-en-de](https://huggingface.co/datasets/deutsche-telekom/NLU-Evaluation-Data-en-de)
dataset.
## Creator
This dataset was compiled and open-sourced by [Philip May](https://may.la/)
of [Deutsche Telekom](https://www.telekom.de/).
## Processing Steps
- drop `NaN` values
- drop duplicates in `answer_de` and `answer`
- delete all rows where `answer_de` has more than 70 characters
- add column `label`: `df["label"] = df["scenario"] + "_" + df["intent"]`
- remove classes (`label`) with fewer than 25 samples:
- `audio_volume_other`
- `cooking_query`
- `general_greet`
- `music_dislikeness`
- random selection for train set - exactly 20 samples for each class (`label`)
- rest for test set (see the pandas sketch below)
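A hypothetical pandas reconstruction of the steps above (the input file name and random seed are assumptions, not the authors' actual script):
```python
import pandas as pd

df = pd.read_csv("nlu-evaluation-data-en-de.csv")  # assumed source file

df = df.dropna()                                      # drop NaN values
df = df.drop_duplicates(subset=["answer_de", "answer"])
df = df[df["answer_de"].str.len() <= 70]              # drop long German answers
df["label"] = df["scenario"] + "_" + df["intent"]

# remove classes with fewer than 25 samples
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts >= 25].index)]

# exactly 20 random samples per class for train; the rest becomes the test set
train = df.groupby("label", group_keys=False).sample(n=20, random_state=42)
test = df.drop(train.index)
```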
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
| deutsche-telekom/NLU-few-shot-benchmark-en-de | [
"task_categories:text-classification",
"task_ids:intent-classification",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|deutsche-telekom/NLU-Evaluation-Data-en-de",
"language:en",
"language:de",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-02T16:26:59+00:00 | {"language": ["en", "de"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|deutsche-telekom/NLU-Evaluation-Data-en-de"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"]} | 2023-12-17T17:41:42+00:00 |
3ee38aaa4cd37669a566b84b6a0e18e02cc66e51 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification-not-evaluated
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-c80bd5f3-aba9-44d4-aefd-7fef2e67a535-120116 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:30:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-12-02T16:31:14+00:00 |
1d62c325c2898cd1342b4e33fe640785b3c82f38 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: autoevaluate/natural-language-inference-not-evaluated
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-79eac003-d1e7-4d2c-ae8f-d5e71acc5a82-121117 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:36:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "autoevaluate/natural-language-inference-not-evaluated", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-12-02T16:36:39+00:00 |
960701042f7316f3d5eff9069bda0c632f5b9291 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: autoevaluate/entity-extraction-not-evaluated
* Dataset: autoevaluate/conll2003-sample
* Config: autoevaluate--conll2003-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cfd9b2d6-f835-45b3-a940-6a4a4aec71b0-122118 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:40:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/conll2003-sample"], "eval_info": {"task": "entity_extraction", "model": "autoevaluate/entity-extraction-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/conll2003-sample", "dataset_config": "autoevaluate--conll2003-sample", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-12-02T16:40:52+00:00 |
507aec963fb4cec1d17f314fe14c137f7ea357eb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/distilbert-base-cased-distilled-squad
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-c3da4aa4-0386-41d1-9c7c-12d712dd287c-126120 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:44:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/distilbert-base-cased-distilled-squad", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-02T16:44:53+00:00 |
1959f1c839c6003a0fe5b31e76565d2d2bc8be0c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering-not-evaluated
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-c3da4aa4-0386-41d1-9c7c-12d712dd287c-126119 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:46:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-02T16:47:25+00:00 |
a6ce1ecc6756118e7ce49f67d740a5cc363b5488 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation-not-evaluated
* Dataset: autoevaluate/wmt16-ro-en-sample
* Config: autoevaluate--wmt16-ro-en-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-d0f125bb-b6fe-4a56-8bed-0f8d3744fc42-127121 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T16:50:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/wmt16-ro-en-sample"], "eval_info": {"task": "translation", "model": "autoevaluate/translation-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/wmt16-ro-en-sample", "dataset_config": "autoevaluate--wmt16-ro-en-sample", "dataset_split": "test", "col_mapping": {"source": "translation.ro", "target": "translation.en"}}} | 2022-12-02T16:50:55+00:00 |
ba71fd455999029eee07ccc606b6eeeed1319dd0 | test | Ni24601/test_Kim | [
"region:us"
]
| 2022-12-02T16:56:46+00:00 | {} | 2022-12-02T17:15:26+00:00 |
cbf86032c77298cddde7e677254db1283e8b95ed |
Data was obtained from [TMDB API](https://developers.themoviedb.org/3) | ashraq/tmdb-people-image | [
"region:us"
]
| 2022-12-02T17:34:52+00:00 | {"dataset_info": {"features": [{"name": "adult", "dtype": "bool"}, {"name": "also_known_as", "dtype": "string"}, {"name": "biography", "dtype": "string"}, {"name": "birthday", "dtype": "string"}, {"name": "deathday", "dtype": "string"}, {"name": "gender", "dtype": "int64"}, {"name": "homepage", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "imdb_id", "dtype": "string"}, {"name": "known_for_department", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "place_of_birth", "dtype": "string"}, {"name": "popularity", "dtype": "float64"}, {"name": "profile_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3749610460.6819267, "num_examples": 116403}], "download_size": 3733145768, "dataset_size": 3749610460.6819267}} | 2023-04-21T19:02:31+00:00 |
39e6ad0820ad1a3615506910030dc73cf006f036 | # Dataset Card for "imdb-movie-genres"
IMDb (an acronym for Internet Movie Database) is an online database of information related to films, television programs, home videos, video games, and streaming content online – including cast, production crew and personal biographies, plot summaries, trivia, ratings, and fan and critical reviews. An additional fan feature, message boards, was abandoned in February 2017. Originally a fan-operated website, the database is now owned and operated by IMDb.com, Inc., a subsidiary of Amazon.
As of December 2020, IMDb has approximately 7.5 million titles (including episodes) and 10.4 million personalities in its database,[2] as well as 83 million registered users.
IMDb began as a movie database on the Usenet group "rec.arts.movies" in 1990 and moved to the web in 1993.
## Provenance: [ftp://ftp.fu-berlin.de/pub/misc/movies/database/](ftp://ftp.fu-berlin.de/pub/misc/movies/database/)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | adrienheymans/imdb-movie-genres | [
"region:us"
]
| 2022-12-02T17:44:56+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 35392128, "num_examples": 54214}, {"name": "test", "num_bytes": 35393614, "num_examples": 54200}], "download_size": 46358637, "dataset_size": 70785742}} | 2022-12-02T17:49:10+00:00 |
942d06f7e0aa48addeeea991fa62d8590507f6d7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Artifact-AI/t5_base_courtlistener_billsum
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Artifact-AI](https://huggingface.co/Artifact-AI) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-258166-2318473352 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-02T18:25:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "Artifact-AI/t5_base_courtlistener_billsum", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-12-02T18:29:48+00:00 |
6d67f9ad7ecd5d0d2d44a273ff933819cfd6b17c |
# Dataset Card for "lmqg/qg_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'vine',
'paragraph_question': 'question: what site does the link take you to?, context:5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013',
'question': 'what site does the link take you to?',
'paragraph': '5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013'
}
```
The data fields are the same among all splits.
- `answer`: a `string` feature.
- `paragraph_question`: a `string` feature (the question concatenated with the context, as in the example above).
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9489 | 1086| 1203|
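A minimal loading sketch (assumed usage; not part of the original card):
```python
# Load the splits with the Hugging Face `datasets` library.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_tweetqa")
print(dataset["train"][0]["question"])
```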
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qg_tweetqa | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-02T18:53:49+00:00 | {"language": "en", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "tweet_qa", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "TweetQA for question generation", "tags": ["question-generation"]} | 2022-12-02T19:11:42+00:00 |
0992e0314d222c892be5d5c7bb27f6020c734a8c | GEOcite/ReferenceParserDataset | [
"region:us"
]
| 2022-12-02T19:15:43+00:00 | {} | 2023-01-31T00:36:49+00:00 |
|
48e546938ba5143a4af5c62ac868fa4d357b557b |
# WIFI RSSI Indoor Positioning Dataset
A reliable and comprehensive public WiFi fingerprinting database for researchers to implement and compare indoor localization methods. The database contains RSSI information from 6 APs, collected on different days with the support of an autonomous robot.
We use an autonomous robot to collect the WiFi fingerprint data. Our 3-wheel robot has multiple sensors, including a wheel odometer, an inertial measurement unit (IMU), a LIDAR, sonar sensors, and a color and depth (RGB-D) camera. The robot can navigate to a target location to collect WiFi fingerprints automatically. The localization accuracy of the robot is 0.07 m ± 0.02 m. The area measures 21 m × 16 m and has three long corridors. There are six APs; five of them provide two distinct MAC addresses for the 2.4- and 5-GHz communication channels, while one operates only on the 2.4-GHz frequency. One router can also provide CSI information.
# Data Format
X Position (m), Y Position (m), RSSI Feature 1 (dBm), RSSI Feature 2 (dBm), RSSI Feature 3 (dBm), RSSI Feature 4 (dBm), ...
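A hypothetical parsing sketch for rows in this format (the file name and delimiter are assumptions, not part of the dataset documentation):
```python
import csv

with open("wifi_rssi_fingerprints.csv") as f:
    for row in csv.reader(f):
        x, y = float(row[0]), float(row[1])    # position in metres
        rssi = [float(v) for v in row[2:]]     # RSSI features in dBm
        print(f"({x:.2f}, {y:.2f}) -> {len(rssi)} AP readings")
```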
| Brosnan/WIFI_RSSI_Indoor_Positioning_Dataset | [
"task_categories:tabular-classification",
"task_ids:tabular-single-column-regression",
"language_creators:expert-generated",
"size_categories:100K<n<1M",
"license:cc-by-nc-sa-4.0",
"wifi",
"indoor-positioning",
"indoor-localisation",
"wifi-rssi",
"rssi",
"recurrent-neural-networks",
"region:us"
]
| 2022-12-02T20:14:17+00:00 | {"language_creators": ["expert-generated"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["tabular-classification"], "task_ids": ["tabular-single-column-regression"], "pretty_name": "WiFi RSSI Indoor Localization", "tags": ["wifi", "indoor-positioning", "indoor-localisation", "wifi-rssi", "rssi", "recurrent-neural-networks"]} | 2022-12-02T20:42:32+00:00 |
0e6d7cda2bb2d1d5203f630bf67e7c04d085cd96 | # Dataset Card for "2000dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | omarelsayeed/2000dataset | [
"region:us"
]
| 2022-12-02T21:19:25+00:00 | {"dataset_info": {"features": [{"name": "input_values", "struct": [{"name": "attention_mask", "sequence": {"sequence": "int32"}}, {"name": "input_values", "sequence": {"sequence": "float32"}}]}, {"name": "input_length", "dtype": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1200580612, "num_examples": 2001}], "download_size": 536444205, "dataset_size": 1200580612}} | 2022-12-02T21:20:31+00:00 |
ffce3874b72978ddad36eff1b4e0c6f569907838 | saraimarte/upsale | [
"license:other",
"region:us"
]
| 2022-12-02T22:19:36+00:00 | {"license": "other"} | 2022-12-02T22:20:10+00:00 |
|
724eac9f9ed4731c403615af2e0c786b1bd4c539 | AlienKevin/kanjivg_klee | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-03T00:31:51+00:00 | {"license": "cc0-1.0"} | 2022-12-03T00:35:35+00:00 |
|
65570bc1d5e12e29647fd99b34169179772486c0 | # Dataset Card for The Harvard USPTO Patent Dataset (HUPD)

## Dataset Description
- **Homepage:** [https://patentdataset.org/](https://patentdataset.org/)
- **Repository:** [HUPD GitHub repository](https://github.com/suzgunmirac/hupd)
- **Paper:** [HUPD arXiv Submission](https://arxiv.org/abs/2207.04043)
- **Point of Contact:** Mirac Suzgun
### Dataset Summary
The Harvard USPTO Dataset (HUPD) is a large-scale, well-structured, and multi-purpose corpus of English-language utility patent applications filed to the United States Patent and Trademark Office (USPTO) between January 2004 and December 2018.
### Experiments and Tasks Considered in the Paper
- **Patent Acceptance Prediction**: Given a section of a patent application (in particular, the abstract, claims, or description), predict whether the application will be accepted by the USPTO.
- **Automated Subject (IPC/CPC) Classification**: Predict the primary IPC or CPC code of a patent application given (some subset of) the text of the application.
- **Language Modeling**: Masked/autoregressive language modeling on the claims and description sections of patent applications.
- **Abstractive Summarization**: Given the claims or claims section of a patent application, generate the abstract.
### Languages
The dataset contains English text only.
### Domain
Patents (intellectual property).
### Dataset Curators
The dataset was created by Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber.
## Dataset Structure
Each patent application is defined by a distinct JSON file, named after its application number, and includes information about
the application and publication numbers,
title,
decision status,
filing and publication dates,
primary and secondary classification codes,
inventor(s),
examiner,
attorney,
abstract,
claims,
background,
summary, and
full description of the proposed invention, among other fields. There are also supplementary variables, such as the small-entity indicator (which denotes whether the applicant is considered to be a small entity by the USPTO) and the foreign-filing indicator (which denotes whether the application was originally filed in a foreign country).
In total, there are 34 data fields for each application. A full list of data fields used in the dataset is listed in the next section.
### Data Instances
Each patent application in our patent dataset is defined by a distinct JSON file (e.g., ``8914308.json``), named after its unique application number. The format of the JSON files is as follows:
```python
{
"application_number": "...",
"publication_number": "...",
"title": "...",
"decision": "...",
"date_produced": "...",
"date_published": "...",
"main_cpc_label": "...",
"cpc_labels": ["...", "...", "..."],
"main_ipcr_label": "...",
"ipcr_labels": ["...", "...", "..."],
"patent_number": "...",
"filing_date": "...",
"patent_issue_date": "...",
"abandon_date": "...",
"uspc_class": "...",
"uspc_subclass": "...",
"examiner_id": "...",
"examiner_name_last": "...",
"examiner_name_first": "...",
"examiner_name_middle": "...",
"inventor_list": [
{
"inventor_name_last": "...",
"inventor_name_first": "...",
"inventor_city": "...",
"inventor_state": "...",
"inventor_country": "..."
}
],
"abstract": "...",
"claims": "...",
"background": "...",
"summary": "...",
"full_description": "..."
}
```
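As a minimal sketch (assumed usage, not from the original card), a single application file in this format could be read as follows; the file name comes from the example above:
```python
import json

with open("8914308.json") as f:
    application = json.load(f)

print(application["title"], application["decision"])
```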
## Usage
### Loading the Dataset
#### Sample (January 2016 Subset)
The following command can be used to load the `sample` version of the dataset, which contains all the patent applications that were filed to the USPTO during the month of January in 2016. This small subset of the dataset can be used for debugging and exploration purposes.
```python
from datasets import load_dataset
dataset_dict = load_dataset('HUPD/hupd',
name='sample',
data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
icpr_label=None,
train_filing_start_date='2016-01-01',
train_filing_end_date='2016-01-21',
val_filing_start_date='2016-01-22',
val_filing_end_date='2016-01-31',
)
```
#### Full Dataset
If you would like to use the **full** version of the dataset, make sure to change the `name` field from `sample` to `all`, specify the training and validation start and end dates carefully, and set `force_extract` to `True` (so that you only untar the files you are interested in and do not squander disk space). In the following example, we set the training set year range to [2011, 2016] (inclusive) and the validation set year range to 2017.
```python
from datasets import load_dataset
dataset_dict = load_dataset('HUPD/hupd',
name='all',
data_files="https://huggingface.co/datasets/HUPD/hupd/blob/main/hupd_metadata_2022-02-22.feather",
icpr_label=None,
force_extract=True,
train_filing_start_date='2011-01-01',
train_filing_end_date='2016-12-31',
val_filing_start_date='2017-01-01',
val_filing_end_date='2017-12-31',
)
```
### Google Colab Notebook
You can also use the following Google Colab notebooks to explore HUPD.
- [](https://colab.research.google.com/drive/1_ZsI7WFTsEO0iu_0g3BLTkIkOUqPzCET?usp=sharing)[ HUPD Examples: Loading the Dataset](https://colab.research.google.com/drive/1_ZsI7WFTsEO0iu_0g3BLTkIkOUqPzCET?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Loading HUPD By Using HuggingFace's Libraries](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Using the HUPD DistilRoBERTa Model](https://colab.research.google.com/drive/11t69BWcAVXndQxAOCpKaGkKkEYJSfydT?usp=sharing)
- [](https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing)[ HUPD Examples: Using the HUPD T5-Small Summarization Model](https://colab.research.google.com/drive/1VkCtrRIryzev_ixDjmJcfJNK-q6Vx24y?usp=sharing)
## Dataset Creation
### Source Data
HUPD synthesizes multiple data sources from the USPTO: While the full patent application texts were obtained from the USPTO Bulk Data Storage System (Patent Application Data/XML Versions 4.0, 4.1, 4.2, 4.3, 4.4 ICE, as well as Version 1.5) as XML files, the bibliographic filing metadata were obtained from the USPTO Patent Examination Research Dataset (in February, 2021).
### Annotations
Beyond our patent decision label, for which construction details are provided in the paper, the dataset does not contain any human-written or computer-generated annotations beyond those produced by patent applicants or the USPTO.
### Data Shift
A major feature of HUPD is its structure, which allows it to demonstrate the evolution of concepts over time. As we illustrate in the paper, the criteria for patent acceptance evolve over time at different rates, depending on category. We believe this is an important feature of the dataset, not only because of the social scientific questions it raises, but also because it facilitates research on models that can accommodate concept shift in a real-world setting.
### Personal and Sensitive Information
The dataset contains information about the inventor(s) and examiner of each patent application. These details are, however, already in the public domain and available on the USPTO's Patent Application Information Retrieval (PAIR) system, as well as on Google Patents and PatentsView.
### Social Impact of the Dataset
The authors of the dataset hope that HUPD will have a positive social impact on the ML/NLP and Econ/IP communities. They discuss these considerations in more detail in [the paper](https://arxiv.org/abs/2207.04043).
### Impact on Underserved Communities and Discussion of Biases
The dataset contains patent applications in English, a language with heavy attention from the NLP community. However, innovation is spread across many languages, cultures, and communities that are not reflected in this dataset. HUPD is thus not representative of all kinds of innovation. Furthermore, patent applications require a fixed cost to draft and file and are not accessible to everyone. One goal of this dataset is to spur research that reduces the cost of drafting applications, potentially allowing for more people to seek intellectual property protection for their innovations.
### Discussion of Biases
Section 4 of [the HUPD paper](https://arxiv.org/abs/2207.04043) provides an examination of the dataset for potential biases. It shows, among other things, that female inventors are notably underrepresented in the U.S. patenting system, that small and micro entities (e.g., independent inventors, small companies, non-profit organizations) are less likely to have positive outcomes in obtaining patents than large entities (e.g., companies with more than 500 employees), and that patent filing and acceptance rates are not uniformly distributed across the US. Our empirical findings suggest that any study focusing on the acceptance prediction task, especially if it is using the inventor information or the small-entity indicator as part of the input, should be aware of the potential biases present in the dataset and interpret their results carefully in light of those biases.
- Please refer to Section 4 and Section D for an in-depth discussion of potential biases embedded in the dataset.
### Licensing Information
HUPD is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).
### Citation Information
```
@article{suzgun2022hupd,
title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K. and Kominers, Scott Duke and Shieber, Stuart M.},
year={2022},
publisher={arXiv preprint arXiv:2207.04043},
url={https://arxiv.org/abs/2207.04043},
}
``` | egm517/hupd_augmented | [
"task_categories:fill-mask",
"task_categories:summarization",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"language:en",
"license:cc-by-sa-4.0",
"patents",
"arxiv:2207.04043",
"region:us"
]
| 2022-12-03T02:16:04+00:00 | {"language": ["en"], "license": ["cc-by-sa-4.0"], "task_categories": ["fill-mask", "summarization", "text-classification", "token-classification"], "task_ids": ["masked-language-modeling", "multi-class-classification", "topic-classification", "named-entity-recognition"], "pretty_name": "HUPD", "tags": ["patents"]} | 2022-12-10T19:02:49+00:00 |
de9edb0ffb016b57e3c7f850f4a2dcd036c56ef7 | Yahir21/ggg | [
"license:afl-3.0",
"region:us"
]
| 2022-12-03T03:17:59+00:00 | {"license": "afl-3.0"} | 2022-12-03T03:17:59+00:00 |
|
196ce6a0d1e65039b5bb7c02e708127eec891e5f | rodrigobrazao/rbDataset | [
"license:openrail",
"region:us"
]
| 2022-12-03T04:00:48+00:00 | {"license": "openrail"} | 2022-12-03T04:00:48+00:00 |
|
45b3a1ed4861ddcee484e6668a9fb4c00847afb9 | bstdev/touhou_portraits | [
"license:agpl-3.0",
"region:us"
]
| 2022-12-03T06:04:29+00:00 | {"license": "agpl-3.0"} | 2022-12-03T06:47:25+00:00 |
|
dcaf420149dedf70228bb19ab66f8081607dcdea | wmt/yakut | [
"language:ru",
"language:sah",
"region:us"
]
| 2022-12-03T06:19:20+00:00 | {"language": ["ru", "sah"]} | 2022-12-03T08:32:45+00:00 |
|
fba364d22354b9e1b4b912235c1e3503df86bf5c | A dataset of AI-generated images, selected by the community, for fine tuning Stable Diffusion 1.5 and/or Stable Diffusion 2.0 on particular desired artstyles,
WITHOUT using any images owned by major media companies, nor images by artists who are uncomfortable with their works being used to train AI.
What to submit: Your own AI-generated Images paired with a text description of their subject matter. Filename can be whatever, but both should have the same filename.
If you're a conventional digital artist and want to contribute, include solid documentation of your permission for use in the text file. (see rules below)
Format: Images as .jpeg or .png: 512x512 or preferably 768x768
Text files as .txt: with a short list of one-word descriptions about what's in the image. (Not the prompts, describe it in your own words)
Where to submit: Select the directory most descriptive of the "style" you're going for. look at a few already-submitted images to make sure it fits.
If your piece is in a different style, something new, prepare at least 4 similarly styled images and a creative style name, and push a new directory too.
What is going to be done with the images: Once the technical issues get ironed out I will use the Hugging Face Diffusers DreamBooth example script to train the Stable Diffusion 2.0 model on the 768px images once we have accumulated at least 50 images in a particular style. I will document the setup and process, and make the
resultant model available here for download. Once (if) we accumulate on the order of 1000 images, I will begin work on a natively fine-tuned model, incorporating the
provided text-image pairs.
Until that point, all images and text descriptions will be available here. If you wish to use the datasets for your own training projects at any point, the data is
available, and for the purposes of AI training it is free to use. (Subject to license and Hugging Face T.O.S.)
I will not be taking submissions of trained models or other code or utilities.
Basic Rules:
The first rule of GAI: No images from human artists without the artist's explicit permission.
-In order to document the artist's permission, include a text file with a link to a public statement (like a post on social media for instance)
granting permission for use of that image, or a set of images including the image in question.
-The link should: 1) Be independently verifiable as actually being from the artist in question.
2) Include an unambiguous reference to the image in question, or an image set that clearly includes the image in question.
3) Include permission to use the images in question for training AI
-It is appreciated if members of the community can help reverse-image-search images submitted to the dataset, and help police content. I am but one goofball, and I
will keep on top of submitted images as best I can, but I don't claim to be magical, omniscient, or even competent at content moderation.
The second rule of GAI: Use creative and descriptive names for art styles instead of artists' names, pen names, trade names, or trademarks from media companies.
-This is part of the point of this dataset: to teach the wider art community that AI doesn't just copy existing work, and also to provide a hedge against possibly litigious activist actions and future changes to the law.
-Users who use AI ethically aren't attempting to counterfeit, displace, or impersonate existing artists. We want images that have a certain "look" to them, or that are simply more visually appealing. So instead of prompting with artist names or existing properties, specify the actual image you want to see, and let the AI do the rest.
The third rule of GAI: NSFW content is permitted, but it is to be kept separate. Illegal content like CSAM and "Lightning Rod" content like hate speech will not be permitted. Neither will anything else that violates Hugging Face's T.O.S.
-What is NSFW? The United States ESRB "Teen" rating, the MPAA "PG-13" rating, and typical mass-market social media moderation guidelines are the primary benchmarks for what is and isn't NSFW. If an image straddles the line, discuss. The primary purpose for separating this content is appearances. If something is a bad look for the main dataset, it will get moved.
-What is Illegal content? Primarily CSAM, CP, adult material including minors, anything that could get the feds to show up. AIs are not people, pictures are not people, but argue about it somewhere else. This also includes confidential information, medical imaging, or, per rule #1, unauthorized copyrighted material.
-What is Lightning Rod content? Hate speech, discriminatory content aimed at groups of people based on heritage, nationality, sex, gender identity, race, ethnicity,
religion, etc. As well as things like deliberate shock images with no artistic merit. If an image straddles the line, discuss.
The zero'th rule of GAI: The objective is to build models for Stable Diffusion that work AS WELL or BETTER than models trained on copyrighted material, while using only AI-generated or volunteered input. Anything that doesn't serve that goal is subject to moderation and removal. Don't try to "sneak" copyrighted material in.
Don't try to sneak in illegal, shock, or hateful material. Don't be a problem. I can appreciate a good trolling, but we're going to stay on topic here.
Have fun, and thanks to anyone who wants to contribute!
-DerrangedGadgeteer | DerrangedGadgeteer/SD-GAI | [
"region:us"
]
| 2022-12-03T09:09:21+00:00 | {} | 2022-12-03T11:29:47+00:00 |
f563da4f193534553ddfbc68d8e546b1a42720ee | sinsforeal/anythingv3classifers | [
"license:openrail",
"region:us"
]
| 2022-12-03T10:22:47+00:00 | {"license": "openrail"} | 2022-12-03T10:22:47+00:00 |
|
de7daaad2535b935698a3df9a0d3fff8358a9970 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | DTU54DL/common-accent-augmented-proc | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
]
| 2022-12-03T12:05:58+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 433226048, "num_examples": 451}, {"name": "train", "num_bytes": 9606026408, "num_examples": 10000}], "download_size": 2307292790, "dataset_size": 10039252456}, "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]} | 2022-12-03T12:56:02+00:00 |
1c435ba41167bd40e8c2acae21ccdd9e3c168de0 | # Dataset Card for "common_voice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liangc40/common_voice | [
"region:us"
]
| 2022-12-03T12:15:27+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11871603408, "num_examples": 12360}, {"name": "test", "num_bytes": 4868697560, "num_examples": 5069}], "download_size": 2458690800, "dataset_size": 16740300968}} | 2022-12-06T14:16:07+00:00 |
4ef0f45c938a0042851b8faafa712143064d2a33 | Aayush196/funsd_lmv2 | [
"license:mit",
"region:us"
]
| 2022-12-03T14:58:40+00:00 | {"license": "mit"} | 2022-12-03T15:22:15+00:00 |
|
021d0a017dd4238172a9c517e6af4a07b8708667 | faztrick/wapi | [
"region:us"
]
| 2022-12-03T15:30:49+00:00 | {} | 2022-12-03T15:32:37+00:00 |
|
84931371c374c20ac7c1f7595f3164990d968e84 | # Dataset Card for "Dvoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/Dvoice | [
"region:us"
]
| 2022-12-03T15:34:53+00:00 | {"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 52843034.0, "num_examples": 457}, {"name": "train", "num_bytes": 153498349.056, "num_examples": 1368}, {"name": "validation", "num_bytes": 54017328.0, "num_examples": 456}], "download_size": 194658648, "dataset_size": 260358711.056}} | 2022-12-03T15:39:09+00:00 |
1b54fd7865d13e811ae87b56c0e54d55c6128a16 |
These embeddings result from applying SemAxis (https://arxiv.org/abs/1806.05521) to common sense knowledge graph embeddings (https://arxiv.org/abs/2012.11490).
| KnutJaegersberg/Interpretable_word_embeddings_large_cskg | [
"license:mit",
"arxiv:1806.05521",
"arxiv:2012.11490",
"region:us"
]
| 2022-12-03T15:45:06+00:00 | {"license": "mit"} | 2022-12-03T22:31:29+00:00 |
1c022733f473ab1c86b7799f1ca8e52411df5c28 | cjlovering/natural-questions-short | [
"license:apache-2.0",
"region:us"
]
| 2022-12-03T17:00:55+00:00 | {"license": "apache-2.0"} | 2022-12-04T21:15:26+00:00 |
|
639d674fe76d48c588264a12e1ee6e6d6569ec21 | # Dataset Card for "sudanese_dialect_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/sudanese_dialect_speech | [
"region:us"
]
| 2022-12-03T17:44:20+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 1207318008.52, "num_examples": 3547}], "download_size": 1624404468, "dataset_size": 1207318008.52}} | 2022-12-04T16:44:26+00:00 |
76492a80083423b44e31d8e1a659b9708b8231e9 | Helife/mattis | [
"license:mit",
"region:us"
]
| 2022-12-03T18:08:20+00:00 | {"license": "mit"} | 2022-12-04T00:29:43+00:00 |
|
8b7b0ffe0d3c3c86eb28fc9381f17f36301b0068 | # Dataset Card for "lat_en_loeb_whitaker_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | grosenthal/lat_en_loeb_whitaker_split | [
"region:us"
]
| 2022-12-03T19:29:27+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30517119.261391733, "num_examples": 77774}], "download_size": 18966593, "dataset_size": 30517119.261391733}} | 2023-01-25T17:47:40+00:00 |
47d2ca79b915fe3e31acae2b7937da8ebe69c0ab | # Dataset Card for "Food-Prototype-Bruce"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | izou3/Food-Prototype-Bruce | [
"region:us"
]
| 2022-12-03T20:14:30+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 74206828.0, "num_examples": 400}], "download_size": 73784241, "dataset_size": 74206828.0}} | 2022-12-03T20:20:34+00:00 |
14dfec44ee4531f79ff50b21d0f85a6290200cd2 | bstdev/touhou_portraits_more | [
"license:agpl-3.0",
"region:us"
]
| 2022-12-03T20:17:38+00:00 | {"license": "agpl-3.0"} | 2022-12-03T20:23:23+00:00 |
|
6001dd3a96d44c22e2a6c5c8f937ba0f840c4d50 |
# V-D4RL
V-D4RL provides pixel-based analogues of the popular D4RL benchmarking tasks, derived from the **`dm_control`** suite, along with natural extensions of two state-of-the-art online pixel-based continuous control algorithms, DrQ-v2 and DreamerV2, to the offline setting. For further details, please see the paper:
**_Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations_**; Cong Lu*, Philip J. Ball*, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh.
<p align="center">
<a href=https://arxiv.org/abs/2206.04779>View on arXiv</a>
</p>
## Benchmarks
The V-D4RL datasets can be found in this repository under `vd4rl`. **These must be downloaded before running the code.** Assuming the data is stored under `vd4rl_data`, the file structure is:
```
vd4rl_data
└───main
│ └───walker_walk
│ │ └───random
│ │ │ └───64px
│ │ │ └───84px
│ │ └───medium_replay
│ │ │ ...
│ └───cheetah_run
│ │ ...
│ └───humanoid_walk
│ │ ...
└───distracting
│ ...
└───multitask
│ ...
```
## Baselines
### Environment Setup
Requirements are presented in conda environment files named `conda_env.yml` within each folder. The command to create the environment is:
```
conda env create -f conda_env.yml
```
Alternatively, dockerfiles are located under `dockerfiles`, replace `<<USER_ID>>` in the files with your own user ID from the command `id -u`.
### V-D4RL Main Evaluation
Example run commands are given below, given an environment type and dataset identifier:
```
ENVNAME=walker_walk # choice in ['walker_walk', 'cheetah_run', 'humanoid_walk']
TYPE=random # choice in ['random', 'medium_replay', 'medium', 'medium_expert', 'expert']
```
#### Offline DV2
```
python offlinedv2/train_offline.py --configs dmc_vision --task dmc_${ENVNAME} --offline_dir vd4rl_data/main/${ENVNAME}/${TYPE}/64px --offline_penalty_type meandis --offline_lmbd_cons 10 --seed 0
```
#### DrQ+BC
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px nstep=3 seed=0
```
#### DrQ+CQL
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px algo=cql cql_importance_sample=false min_q_weight=10 seed=0
```
#### BC
```
python drqbc/train.py task_name=offline_${ENVNAME}_${TYPE} offline_dir=vd4rl_data/main/${ENVNAME}/${TYPE}/84px algo=bc seed=0
```
### Distracted and Multitask Experiments
To run the distracted and multitask experiments, it suffices to change the offline directory passed to the commands above.
## Note on data collection and format
We follow the image sizes and dataset format of each algorithm's native codebase.
This means that Offline DV2 uses `*.npz` files with 64px images to store the offline data, whereas DrQ+BC uses `*.hdf5` with 84px images.
The data collection procedure is detailed in Appendix B of our paper, and we provide conversion scripts in `conversion_scripts`.
For the original SAC policies to generate the data see [here](https://github.com/philipjball/SAC_PyTorch/blob/dmc_branch/train_agent.py).
See [here](https://github.com/philipjball/SAC_PyTorch/blob/dmc_branch/gather_offline_data.py) for distracted/multitask variants.
We used `seed=0` for all data generation.
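As a hedged sketch (the file name is an assumption and the array keys are not documented here), one of the Offline DV2 `*.npz` episode files can be inspected like this:
```python
import numpy as np

episode = np.load("episode_000.npz")  # hypothetical episode file name
for key in episode.files:
    print(key, episode[key].shape)    # list stored arrays and their shapes
```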
## Acknowledgements
V-D4RL builds upon many works and open-source codebases in both offline reinforcement learning and online pixel-based continuous control. We would like to particularly thank the authors of:
- [D4RL](https://github.com/rail-berkeley/d4rl)
- [DMControl](https://github.com/deepmind/dm_control)
- [DreamerV2](https://github.com/danijar/dreamerv2)
- [DrQ-v2](https://github.com/facebookresearch/drqv2)
- [LOMPO](https://github.com/rmrafailov/LOMPO)
## Contact
Please contact [Cong Lu](mailto:[email protected]) or [Philip Ball](mailto:[email protected]) for any queries. We welcome any suggestions or contributions!
| conglu/vd4rl | [
"license:mit",
"Reinforcement Learning",
"Offline Reinforcement Learning",
"Reinforcement Learning from Pixels",
"DreamerV2",
"DrQ+BC",
"arxiv:2206.04779",
"region:us"
]
| 2022-12-03T20:23:15+00:00 | {"license": "mit", "thumbnail": "https://github.com/conglu1997/v-d4rl/raw/main/figs/envs.png", "tags": ["Reinforcement Learning", "Offline Reinforcement Learning", "Reinforcement Learning from Pixels", "DreamerV2", "DrQ+BC"], "datasets": ["V-D4RL"]} | 2022-12-05T17:31:55+00:00 |
c292ad5ec0bf052b4b730c83e91a86fb44f06530 | HADESJUDGEMENT/Art | [
"license:unknown",
"region:us"
]
| 2022-12-03T21:15:47+00:00 | {"license": "unknown"} | 2022-12-03T21:15:47+00:00 |
|
5327d6f12267f6e189573e0dcfb5d41d2f35d149 |
# stacked-xsum-1024
a "stacked" version of `xsum`
1. Original Dataset: copy of the base dataset
2. Stacked Rows: The original dataset is processed by stacking rows based on certain criteria:
- Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
- Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage (see the sketch below).
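As a usage sketch (an assumption, not part of the original release; the model checkpoint is also assumed), registering the token with a Hugging Face tokenizer might look like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base")

# register [NEXT_CONCEPT] as an additional special token
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})
model.resize_token_embeddings(len(tokenizer))  # account for the new token
```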
## updates
- dec 3: upload initial version
- dec 4: upload v2 with basic data quality fixes (i.e. the `is_stacked` column)
- dec 5 0500: upload v3 which has pre-randomised order and duplicate rows for document+summary dropped
## stats

## dataset details
see the repo `.log` file for more details.
train input
```python
[2022-12-05 01:05:17] INFO:root:INPUTS - basic stats - train
[2022-12-05 01:05:17] INFO:root:{'num_columns': 5,
'num_rows': 204045,
'num_unique_target': 203107,
'num_unique_text': 203846,
'summary - average chars': 125.46,
'summary - average tokens': 30.383719277610332,
'text input - average chars': 2202.42,
'text input - average tokens': 523.9222230390355}
```
stacked train:
```python
[2022-12-05 04:47:01] INFO:root:stacked 181719 rows, 22326 rows were ineligible
[2022-12-05 04:47:02] INFO:root:dropped 64825 duplicate rows, 320939 rows remain
[2022-12-05 04:47:02] INFO:root:shuffling output with seed 323
[2022-12-05 04:47:03] INFO:root:STACKED - basic stats - train
[2022-12-05 04:47:04] INFO:root:{'num_columns': 6,
'num_rows': 320939,
'num_unique_chapters': 320840,
'num_unique_summaries': 320101,
'summary - average chars': 199.89,
'summary - average tokens': 46.29925001324239,
'text input - average chars': 2629.19,
'text input - average tokens': 621.541532814647}
```
## Citation
If you find this useful in your work, please consider citing us.
```
@misc {stacked_summaries_2023,
author = { {Stacked Summaries: Karim Foda and Peter Szemraj} },
title = { stacked-xsum-1024 (Revision 2d47220) },
year = 2023,
url = { https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024 },
doi = { 10.57967/hf/0390 },
publisher = { Hugging Face }
}
``` | stacked-summaries/stacked-xsum-1024 | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"source_datasets:xsum",
"language:en",
"license:apache-2.0",
"stacked summaries",
"xsum",
"doi:10.57967/hf/0390",
"region:us"
]
| 2022-12-04T00:47:30+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "source_datasets": ["xsum"], "task_categories": ["summarization"], "pretty_name": "Stacked XSUM: 1024 tokens max", "tags": ["stacked summaries", "xsum"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "chapter_length", "dtype": "int64"}, {"name": "summary_length", "dtype": "int64"}, {"name": "is_stacked", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 918588672, "num_examples": 320939}, {"name": "validation", "num_bytes": 51154057, "num_examples": 17935}, {"name": "test", "num_bytes": 51118088, "num_examples": 17830}], "download_size": 653378162, "dataset_size": 1020860817}} | 2023-10-08T22:34:15+00:00 |
01abbc1300d16d69996bc64f7c8d1bd82ede010c | epts/joyokanji | [
"license:mit",
"region:us"
]
| 2022-12-04T01:49:53+00:00 | {"license": "mit"} | 2022-12-04T02:07:31+00:00 |
|
8957daa3f265f824532bcd8187b20674e539b8ed |
# TU-Berlin Sketch Dataset
This is the full PNG dataset from [TU-Berlin](https://cybertron.cg.tu-berlin.de/eitz/projects/classifysketch/).
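A minimal loading sketch (assumed usage; the split and feature names follow the repository metadata):
```python
from datasets import load_dataset

ds = load_dataset("kmewhort/tu-berlin-png")
example = ds["train"][0]
print(example["label"], example["image"].size)  # class id and PIL image size
```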
| kmewhort/tu-berlin-png | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-04T02:32:17+00:00 | {"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "alarm clock", "2": "angel", "3": "ant", "4": "apple", "5": "arm", "6": "armchair", "7": "ashtray", "8": "axe", "9": "backpack", "10": "banana", "11": "barn", "12": "baseball bat", "13": "basket", "14": "bathtub", "15": "bear (animal)", "16": "bed", "17": "bee", "18": "beer-mug", "19": "bell", "20": "bench", "21": "bicycle", "22": "binoculars", "23": "blimp", "24": "book", "25": "bookshelf", "26": "boomerang", "27": "bottle opener", "28": "bowl", "29": "brain", "30": "bread", "31": "bridge", "32": "bulldozer", "33": "bus", "34": "bush", "35": "butterfly", "36": "cabinet", "37": "cactus", "38": "cake", "39": "calculator", "40": "camel", "41": "camera", "42": "candle", "43": "cannon", "44": "canoe", "45": "car (sedan)", "46": "carrot", "47": "castle", "48": "cat", "49": "cell phone", "50": "chair", "51": "chandelier", "52": "church", "53": "cigarette", "54": "cloud", "55": "comb", "56": "computer monitor", "57": "computer-mouse", "58": "couch", "59": "cow", "60": "crab", "61": "crane (machine)", "62": "crocodile", "63": "crown", "64": "cup", "65": "diamond", "66": "dog", "67": "dolphin", "68": "donut", "69": "door", "70": "door handle", "71": "dragon", "72": "duck", "73": "ear", "74": "elephant", "75": "envelope", "76": "eye", "77": "eyeglasses", "78": "face", "79": "fan", "80": "feather", "81": "fire hydrant", "82": "fish", "83": "flashlight", "84": "floor lamp", "85": "flower with stem", "86": "flying bird", "87": "flying saucer", "88": "foot", "89": "fork", "90": "frog", "91": "frying-pan", "92": "giraffe", "93": "grapes", "94": "grenade", "95": "guitar", "96": "hamburger", "97": "hammer", "98": "hand", "99": "harp", "100": "hat", "101": "head", "102": "head-phones", "103": "hedgehog", "104": "helicopter", "105": "helmet", "106": "horse", "107": "hot air balloon", "108": "hot-dog", "109": "hourglass", "110": "house", "111": "human-skeleton", "112": "ice-cream-cone", "113": "ipod", "114": "kangaroo", "115": "key", "116": "keyboard", "117": "knife", "118": "ladder", "119": "laptop", "120": "leaf", "121": "lightbulb", "122": "lighter", "123": "lion", "124": "lobster", "125": "loudspeaker", "126": "mailbox", "127": "megaphone", "128": "mermaid", "129": "microphone", "130": "microscope", "131": "monkey", "132": "moon", "133": "mosquito", "134": "motorbike", "135": "mouse (animal)", "136": "mouth", "137": "mug", "138": "mushroom", "139": "nose", "140": "octopus", "141": "owl", "142": "palm tree", "143": "panda", "144": "paper clip", "145": "parachute", "146": "parking meter", "147": "parrot", "148": "pear", "149": "pen", "150": "penguin", "151": "person sitting", "152": "person walking", "153": "piano", "154": "pickup truck", "155": "pig", "156": "pigeon", "157": "pineapple", "158": "pipe (for smoking)", "159": "pizza", "160": "potted plant", "161": "power outlet", "162": "present", "163": "pretzel", "164": "pumpkin", "165": "purse", "166": "rabbit", "167": "race car", "168": "radio", "169": "rainbow", "170": "revolver", "171": "rifle", "172": "rollerblades", "173": "rooster", "174": "sailboat", "175": "santa claus", "176": "satellite", "177": "satellite dish", "178": "saxophone", "179": "scissors", "180": "scorpion", "181": "screwdriver", "182": "sea turtle", "183": "seagull", "184": "shark", "185": "sheep", "186": "ship", "187": "shoe", "188": "shovel", "189": "skateboard", "190": "skull", "191": 
"skyscraper", "192": "snail", "193": "snake", "194": "snowboard", "195": "snowman", "196": "socks", "197": "space shuttle", "198": "speed-boat", "199": "spider", "200": "sponge bob", "201": "spoon", "202": "squirrel", "203": "standing bird", "204": "stapler", "205": "strawberry", "206": "streetlight", "207": "submarine", "208": "suitcase", "209": "sun", "210": "suv", "211": "swan", "212": "sword", "213": "syringe", "214": "t-shirt", "215": "table", "216": "tablelamp", "217": "teacup", "218": "teapot", "219": "teddy-bear", "220": "telephone", "221": "tennis-racket", "222": "tent", "223": "tiger", "224": "tire", "225": "toilet", "226": "tomato", "227": "tooth", "228": "toothbrush", "229": "tractor", "230": "traffic light", "231": "train", "232": "tree", "233": "trombone", "234": "trousers", "235": "truck", "236": "trumpet", "237": "tv", "238": "umbrella", "239": "van", "240": "vase", "241": "violin", "242": "walkie talkie", "243": "wheel", "244": "wheelbarrow", "245": "windmill", "246": "wine-bottle", "247": "wineglass", "248": "wrist-watch", "249": "zebra"}}}}], "splits": [{"name": "train", "num_bytes": 590878465.7704024, "num_examples": 19879}, {"name": "test", "num_bytes": 6007805.400597609, "num_examples": 201}], "download_size": 590867064, "dataset_size": 596886271.171}} | 2022-12-19T15:01:51+00:00 |
76428efedd7bc44e3799cf023015377d10ec11aa | saraimarte/webdev | [
"license:other",
"region:us"
]
| 2022-12-04T03:16:32+00:00 | {"license": "other"} | 2022-12-04T03:16:53+00:00 |
|
e9c41ee046eeab35d0aaf6e45008c7229456c087 | nlpworker/COBI | [
"region:us"
]
| 2022-12-04T03:44:54+00:00 | {} | 2022-12-04T03:52:04+00:00 |
|
9beb0992d7452fc019f57e84a39b952bfde3a964 | # AutoTrain Dataset for project: whatsapp_chat_summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project whatsapp_chat_summarization.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "13682435",
"text": "Ella: Hi, did you get my text?\nJesse: Hey, yeah sorry- It's been crazy here. I'll collect Owen, don't worry about it :)\nElla: Oh thank you!! You're a lifesaver!\nJesse: It's not problem ;) Good luck with your meeting!!\nElla: Thanks again! :)",
"target": "Jesse will collect Owen so that Ella can go for a meeting."
},
{
"feat_id": "13728090",
"text": "William: Hey. Today i saw you were arguing with Blackett.\nWilliam: Are you guys fine?\nElizabeth: Hi. Sorry you had to see us argue.\nElizabeth: It was just a small misunderstanding but we will solve it.\nWilliam: Hope so\nWilliam: You think I should to talk to him about it?\nElizabeth: No don't\nElizabeth: He won't like it that we talked after the argument.\nWilliam: Ok. But if you need any help, don't hesitate to call me\nElizabeth: Definitely",
"target": "Elizabeth had an argument with Blackett today, but she doesn't want William to intermeddle."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1600 |
| valid | 400 |
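As an illustration, these splits can be loaded with the `datasets` library; the repo id below matches this card, while treating `train`/`valid` as the on-hub split names is an assumption:

```python
from datasets import load_dataset

# Load the AutoTrain-generated splits; adjust names if they differ on the hub.
dataset = load_dataset("dippatel11/autotrain-data-whatsapp_chat_summarization")

train, valid = dataset["train"], dataset["valid"]
print(train[0]["text"])    # raw chat transcript
print(train[0]["target"])  # reference summary
```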
| dippatel11/autotrain-data-whatsapp_chat_summarization | [
"language:en",
"region:us"
]
| 2022-12-04T04:33:48+00:00 | {"language": ["en"], "task_categories": ["conditional-text-generation"]} | 2022-12-04T04:44:33+00:00 |
b41c9d5911d05c40762299bca2ab795f5d466a6e | # Dataset Card for "wmt16_sentence_lang"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sandipan1994/wmt16_sentence_lang | [
"region:us"
]
| 2022-12-04T06:08:54+00:00 | {"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1427686436.0, "num_examples": 9097770}, {"name": "test", "num_bytes": 771496.0, "num_examples": 5998}, {"name": "validation", "num_bytes": 549009.0, "num_examples": 4338}], "download_size": 1022627880, "dataset_size": 1429006941.0}} | 2022-12-04T06:13:19+00:00 |
927600392068f0bc3c42f7f57aae96a234bb9450 | valehamiri/test-large-files-2 | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-04T06:16:41+00:00 | {"license": "cc-by-4.0"} | 2022-12-04T07:05:16+00:00 |
|
8774d7080d9ae9b4d8bc64185cf48fa732f5d9d5 | # AutoTrain Dataset for project: dippatel_summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dippatel_summarizer.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "13864393",
"text": "Peter: So have you gone to see the wedding?\nHolly: of course, it was so exciting\nRuby: I really don't understand what's so exciting about it\nAngela: me neither\nHolly: because it's the first person of colour in any Western royal family\nRuby: is she?\nPeter: it's not true\nHolly: no?\nPeter: there is a princess in Liechtenstein\nPeter: I think a few years ago a prince of Liechtenstein married a woman from Africa\nPeter: and it was the first case of this kind among European ruling dynasties\nHolly: what? I've never heard of it\nPeter: wait, I'll google it\nRuby: interesting\nPeter: here: <file_other>\nPeter: Princess Angela von Liechtenstein, born Angela Gisela Brown\nPeter: sorry, she's from Panama, but anyway of African descent\nRuby: right! but who cares about Liechtenstein?!\nPeter: lol, I just noticed that it's not true, what you wrote\nRuby: I'm excited anyway, she's the first in the UK for sure",
"target": "Holly went to see the royal wedding. Prince of Liechtenstein married a Panamanian woman of African descent."
},
{
"feat_id": "13716378",
"text": "Max: I'm so sorry Lucas. I don't know what got into me.\nLucas: .......\nLucas: I don't know either.\nMason: that was really fucked up Max\nMax: I know. I'm so sorry :(.\nLucas: I don't know, man.\nMason: what were you thinking??\nMax: I wasn't.\nMason: yea\nMax: Can we please meet and talk this through? Please.\nLucas: Ok. I'll think about it and let you know.\nMax: Thanks...",
"target": "Max is sorry about his behaviour so wants to meet up with Lucas and Mason. Lucas will let him know. "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2400 |
| valid | 600 |
| dippatel11/autotrain-data-dippatel_summarizer | [
"region:us"
]
| 2022-12-04T06:20:12+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-12-04T06:37:40+00:00 |
832c5749f4710e89928b7b40af464d4218948d38 |
# Dataset Card for Yandex.Q
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/its5Q/yandex-q
### Dataset Summary
This is a dataset of questions and answers scraped from [Yandex.Q](https://yandex.ru/q/). There are 836810 answered questions out of a total of 1297670.
The full dataset that includes all metadata returned by Yandex.Q APIs and contains unanswered questions can be found in `full.jsonl.gz`
### Languages
The dataset is mostly in Russian, but there may be other languages present.
## Dataset Structure
### Data Fields
The dataset consists of 3 fields:
- `question` - question title (`string`)
- `description` - question description (`string` or `null`)
- `answer` - answer to the question (`string`)
### Data Splits
All 836810 examples are in the train split, there is no validation split.
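For illustration, a minimal way to load and inspect the fields above with the `datasets` library; the repo id comes from this card, and the assumption is that the cleaned data is what the default loader picks up:

```python
from datasets import load_dataset

# Load the train split; fields follow the card: question, description, answer.
ds = load_dataset("its5Q/yandex-q", split="train")

sample = ds[0]
print(sample["question"])
print(sample["description"])  # may be None for questions without one
print(sample["answer"])
```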
## Dataset Creation
The data was scraped through some "hidden" APIs using several scripts, located in [my GitHub repository](https://github.com/its5Q/yandex-q)
## Additional Information
### Dataset Curators
- https://github.com/its5Q
| its5Q/yandex-q | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ru",
"license:cc0-1.0",
"region:us"
]
| 2022-12-04T06:56:33+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "question-answering"], "task_ids": ["language-modeling", "open-domain-qa"], "pretty_name": "Yandex.Q", "tags": []} | 2023-04-02T15:48:29+00:00 |
25f4994d14d10e8e21d49f0a828fc4b84f3259c9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AkankshaK](https://huggingface.co/AkankshaK) for evaluating this model. | autoevaluate/autoeval-eval-xsum-default-604b3d-2333173628 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-04T08:37:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": ["rouge", "bleu", "meteor", "bertscore"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-12-04T09:10:58+00:00 |
70b44b26f0dbdc9885f35f1c264eea284ab51dcf | # BAYƐLƐMABAGA: Parallel French - Bambara Dataset for Machine Learning
## Overview
The Bayelemabaga dataset is a collection of 46976 aligned, machine-translation-ready Bambara-French lines, originating from [Corpus Bambara de Reference](http://cormande.huma-num.fr/corbama/run.cgi/first_form). The dataset is constituted of text extracted from **264** text files, ranging from periodicals, books, short stories and blog posts to parts of the Bible and the Quran.
## Snapshot: 46976
| | |
|:---|---:|
| **Lines** | **46976** |
| French Tokens (spacy) | 691312 |
| Bambara Tokens (daba) | 660732 |
| French Types | 32018 |
| Bambara Types | 29382 |
| Avg. Fr line length | 77.6 |
| Avg. Bam line length | 61.69 |
| Number of text sources | 264 |
## Data Splits
| | | |
|:-----:|:---:|------:|
| Train | 80% | 37580 |
| Valid | 10% | 4698 |
| Test | 10% | 4698 |
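For orientation, a hedged sketch of how aligned parallel lines like these are typically consumed for MT training; the file names are illustrative placeholders, not the repository's actual layout:

```python
# Pair aligned French/Bambara lines; file names below are placeholders.
def read_parallel(fr_path: str, bam_path: str):
    with open(fr_path, encoding="utf-8") as fr, open(bam_path, encoding="utf-8") as bam:
        for fr_line, bam_line in zip(fr, bam):
            yield {"fr": fr_line.strip(), "bam": bam_line.strip()}

pairs = list(read_parallel("bayelemabaga.train.fr", "bayelemabaga.train.bam"))
print(len(pairs), pairs[0])
```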
## Remarks
* We are working on resolving some last minute misalignment issues.
### Maintenance
* This dataset is supposed to be actively maintained.
### Benchmarks:
- `Coming soon`
### Sources
- [`sources`](./bayelemabaga/sources.txt)
### To note:
- ʃ => (sh/shy) sound: symbol left in the dataset, although it is part of neither Bambara nor French orthography.
## License
- `CC-BY-SA-4.0`
## Version
- `1.0.1`
## Citation
```
@misc{bayelemabagamldataset2022,
title={Machine Learning Dataset Development for Manding Languages},
author={
Valentin Vydrin and
Jean-Jacques Meric and
Kirill Maslinsky and
Andrij Rovenchak and
Allahsera Auguste Tapo and
Sebastien Diarra and
Christopher Homan and
Marco Zampieri and
Michael Leventhal
},
howpublished = {\url{https://github.com/robotsmali-ai/datasets}},
year={2022}
}
```
## Contacts
- `sdiarra <at> robotsmali <dot> org`
- `aat3261 <at> rit <dot> edu` | RobotsMaliAI/bayelemabaga | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:bm",
"language:fr",
"region:us"
]
| 2022-12-04T08:47:14+00:00 | {"language": ["bm", "fr"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "text-generation"]} | 2023-04-24T15:56:24+00:00 |
a71dd7fe273ea030307ce04af0dfae3c2cde1ecd | # Dataset Card for "Lab2scalable"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Victorlopo21/Lab2scalable | [
"region:us"
]
| 2022-12-04T09:49:55+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5726523552, "num_examples": 5962}, {"name": "test", "num_bytes": 2546311152, "num_examples": 2651}], "download_size": 1397383253, "dataset_size": 8272834704}} | 2022-12-04T09:52:27+00:00 |
d96443a5c95f4bbdbafd86f691936121b444f76e | # Dataset Card for "librispeech5k-augmentated-train-prepared"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | DTU54DL/librispeech5k-augmentated-train-prepared | [
"region:us"
]
| 2022-12-04T10:12:34+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train.360", "num_bytes": 6796928865.0, "num_examples": 5000}], "download_size": 3988873165, "dataset_size": 6796928865.0}} | 2022-12-04T12:59:43+00:00 |
96eb342c58e9d4ba2286a20f8406126a6dac63a2 | # Dataset Card for "id2223_whisper_swedish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jsra2/id2223_whisper_swedish | [
"region:us"
]
| 2022-12-04T11:53:27+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11871603408, "num_examples": 12360}, {"name": "test", "num_bytes": 4868697560, "num_examples": 5069}], "download_size": 2458684890, "dataset_size": 16740300968}} | 2022-12-04T11:58:21+00:00 |
b186aa62836c86d22be8fba79a15e546e2b79d7d |
# stacked samsum 1024
Created with the `stacked-booksum` repo version v0.25. It contains:
1. Original Dataset: copy of the base dataset
2. Stacked Rows: The original dataset is processed by stacking rows based on certain criteria:
- Maximum Input Length: The maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
- Maximum Output Length: The maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
3. Special Token: The dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage.
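A minimal sketch of that recommended step with the Hugging Face `transformers` API; the LongT5 checkpoint name is only an example:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Example checkpoint; substitute the model you actually fine-tune.
checkpoint = "google/long-t5-tglobal-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Register [NEXT_CONCEPT] so the tokenizer never splits it into subwords,
# then resize the embedding matrix to cover the new vocabulary entry.
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})
model.resize_token_embeddings(len(tokenizer))
```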
## stats

## dataset details
Default (train):
```python
[2022-12-04 13:19:32] INFO:root:{'num_columns': 4,
'num_rows': 14732,
'num_unique_target': 14730,
'num_unique_text': 14265,
'summary - average chars': 110.13,
'summary - average tokens': 28.693727939180015,
'text input - average chars': 511.22,
'text input - average tokens': 148.88759163725223}
```
stacked (train)
```python
[2022-12-05 00:49:04] INFO:root:stacked 14730 rows, 2 rows were ineligible
[2022-12-05 00:49:04] INFO:root:dropped 20 duplicate rows, 29442 rows remain
[2022-12-05 00:49:04] INFO:root:shuffling output with seed 182
[2022-12-05 00:49:04] INFO:root:STACKED - basic stats - train
[2022-12-05 00:49:04] INFO:root:{'num_columns': 5,
'num_rows': 29442,
'num_unique_chapters': 28975,
'num_unique_summaries': 29441,
'summary - average chars': 452.8,
'summary - average tokens': 106.46820868147545,
'text input - average chars': 1814.09,
'text input - average tokens': 528.665579783982}
``` | stacked-summaries/stacked-samsum-1024 | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"source_datasets:samsum",
"language:en",
"license:apache-2.0",
"stacked summaries",
"region:us"
]
| 2022-12-04T12:22:28+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "source_datasets": ["samsum"], "task_categories": ["summarization"], "pretty_name": "Stacked Samsum - 1024", "tags": ["stacked summaries"]} | 2023-05-28T23:30:18+00:00 |
6dce44cbfe5ea7c50ac1ba7fc10ee7fa6132626a |
# Dataset Card for Kathbath
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.org/indic-superb
- **Repository:** https://github.com/AI4Bharat/IndicSUPERB
- **Paper:** https://arxiv.org/pdf/2208.11761.pdf
- **Point of Contact:** [email protected]
### Dataset Summary
Kathbath is a human-labeled ASR dataset containing 1,684 hours of labelled speech data across 12 Indian languages, collected from 1,218 contributors located in 203 districts in India.
### Languages
- Bengali
- Gujarati
- Kannada
- Hindi
- Malayalam
- Marathi
- Odia
- Punjabi
- Sanskrit
- Tamil
- Telugu
- Urdu
## Dataset Structure
```
Audio Data
data
├── bengali
│ ├── <split_name>
│ │ ├── 844424931537866-594-f.m4a
│ │ ├── 844424931029859-973-f.m4a
│ │ ├── ...
├── gujarati
├── ...
Transcripts
data
├── bengali
│ ├── <split_name>
│ │ ├── transcription_n2w.txt
├── gujarati
├── ...
```
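For illustration, a sketch that pairs the audio files with the transcript file under this layout; the transcript line format (an utterance id followed by its text) and the `valid` split name are assumptions:

```python
from pathlib import Path

# Walk the layout above; the "<utt-id> <text>" transcript format is assumed.
def load_split(root: str, lang: str, split: str):
    split_dir = Path(root) / lang / split
    transcripts = {}
    with open(split_dir / "transcription_n2w.txt", encoding="utf-8") as f:
        for line in f:
            utt_id, text = line.strip().split(maxsplit=1)
            transcripts[utt_id] = text
    for audio in sorted(split_dir.glob("*.m4a")):
        yield {"audio": str(audio), "text": transcripts.get(audio.stem)}

for example in load_split("data", "bengali", "valid"):
    print(example)
    break
```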
### Licensing Information
The IndicSUPERB dataset is released under this licensing scheme:
- We do not own any of the raw text used in creating this dataset.
- The text data comes from the IndicCorp dataset which is a crawl of publicly available websites.
- The audio transcriptions of the raw text and labelled annotations of the datasets have been created by us.
- We license the actual packaging of all this data under the Creative Commons CC0 license (“no rights reserved”).
- To the extent possible under law, AI4Bharat has waived all copyright and related or neighboring rights to the IndicSUPERB dataset.
- This work is published from: India.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2208.11761,
doi = {10.48550/ARXIV.2208.11761},
url = {https://arxiv.org/abs/2208.11761},
author = {Javed, Tahir and Bhogale, Kaushal Santosh and Raman, Abhigyan and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.},
title = {IndicSUPERB: A Speech Processing Universal Performance Benchmark for Indian languages},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
We would like to thank the Ministry of Electronics and Information Technology (MeitY) of the Government of India and the Centre for Development of Advanced Computing (C-DAC), Pune for generously supporting this work and providing us access to multiple GPU nodes on the Param Siddhi Supercomputer. We would like to thank the EkStep Foundation and Nilekani Philanthropies for their generous grant which went into hiring human resources as well as cloud resources needed for this work. We would like to thank DesiCrew for connecting us to native speakers for collecting data. We would like to thank Vivek Seshadri from Karya Inc. for helping setup the data collection infrastructure on the Karya platform. We would like to thank all the members of AI4Bharat team in helping create the Query by Example dataset. | ai4bharat/kathbath | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:mit",
"arxiv:2208.11761",
"region:us"
]
| 2022-12-04T13:28:53+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Kathbath", "language_bcp47": ["bn,gu,kn,hi,ml,mr,or,pa,sn,ta,te,ur"], "tags": []} | 2022-12-09T09:59:48+00:00 |
96591e8627e47aa1ede6922d30d393639f14ca5a | # Dataset Card for "Kimetsu-no-Yaiba-Image-Dataset-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Kurokabe/Kimetsu-no-Yaiba-Image-Dataset-01 | [
"region:us"
]
| 2022-12-04T13:36:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2005251870.0, "num_examples": 6000}, {"name": "validation", "num_bytes": 207003826.0, "num_examples": 809}], "download_size": 2135573514, "dataset_size": 2212255696.0}} | 2022-12-04T13:37:58+00:00 |
f934536471ab0b3968c0d56de4e7c19b8d9f0945 | # Dataset Card for "lcbsi-wbc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polejowska/lcbsi-wbc | [
"region:us"
]
| 2022-12-04T14:33:39+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "basophil", "1": "eosinophil", "2": "lymphocyte", "3": "monocyte", "4": "neutrophil"}}}}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 1807214112, "num_examples": 2988}, {"name": "test", "num_bytes": 299387880, "num_examples": 495}, {"name": "validation", "num_bytes": 297573408, "num_examples": 492}], "download_size": 461247265, "dataset_size": 2404175400}} | 2022-12-04T16:00:37+00:00 |
16406de727a5825ec0e12d8cc4cd8800be2bc7ba | DreamAmir/art | [
"license:artistic-2.0",
"region:us"
]
| 2022-12-04T14:40:51+00:00 | {"license": "artistic-2.0"} | 2022-12-04T14:48:35+00:00 |
|
85998e63229fefc675c757d0891205d0ec505378 | # Dataset Card for "linustechtips"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel [Linus Tech Tips](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw), using a medium-size Whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset contains all the transcripts plus the audio of the different videos of Linus Tech Tips.
### Data Fields
The dataset is composed by:
- **id**: Id of the youtube video.
- **channel**: Name of the channel.
- **channel\_id**: Id of the youtube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the time and transcription of the video.
- **start**: When the transcription starts.
- **end**: When the transcription ends.
- **text**: The text of the transcription.
- **audio**: The extracted audio of the video.
### Data Splits
- Train split.
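Given that the full download is over 100 GB of audio, streaming is the practical way to sample it; a minimal sketch using the repo id from this card:

```python
from datasets import load_dataset

# Stream instead of downloading the whole audio corpus up front.
ds = load_dataset(
    "Whispering-GPT/linustechtips-transcript-audio",
    split="train",
    streaming=True,
)

example = next(iter(ds))
print(example["title"])
print(example["text"][:200])  # beginning of the full transcript
```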
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Linus Tech Tips Channel](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw)
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. | Whispering-GPT/linustechtips-transcript-audio | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
]
| 2022-12-04T14:44:41+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}], "splits": [{"name": "train", "num_bytes": 117140526959.355, "num_examples": 5643}], "download_size": 111764307564, "dataset_size": 117140526959.355}, "tags": ["whisper", "whispering", "medium"]} | 2022-12-05T11:09:56+00:00 |
3e25b25e1de323772ec67b123ec4918feb7b4a72 | # Dataset Card for "librispeech-augmentated-validation-prepared"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | DTU54DL/librispeech-augmentated-validation-prepared | [
"region:us"
]
| 2022-12-04T15:05:34+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "validation", "num_bytes": 3218361347.125, "num_examples": 2703}], "download_size": 1286686337, "dataset_size": 3218361347.125}} | 2022-12-04T15:13:17+00:00 |
42c0813e51fc632214206b9f20f2d125bfc69833 | Using it for assessment.
# Dataset for Multi Domain (Including Kitchen, Books, DVDs, and Electronics)
[Multi-Domain Sentiment Dataset](https://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html) by John Blitzer, Mark Dredze, Fernando Pereira.
### Description:
The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from 4 product types (domains): Kitchen, Books, DVDs, and Electronics. Each domain has several thousand reviews, but the exact number varies by domain. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed. This page contains some descriptions about the data. If you have questions, please email me directly (email found here).
A few notes regarding the data.
1) There are 4 directories corresponding to each of the four domains. Each directory contains 3 files called positive.review, negative.review and unlabeled.review. (The books directory doesn't contain the unlabeled but the link is below.) While the positive and negative files contain positive and negative reviews, these aren't necessarily the splits we used in the experiments. We randomly drew from the three files ignoring the file names.
2) Each file contains a pseudo XML scheme for encoding the reviews. Most of the fields are self explanatory. The reviews have a unique ID field that isn't very unique. If it has two unique id fields, ignore the one containing only a number.
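Because the files are pseudo-XML rather than well-formed XML, a standard XML parser will typically reject them; below is a hedged sketch of the usual workaround. The `<review>`, `<rating>` and `<review_text>` tag names, and the example path, are assumptions about the encoding scheme rather than guarantees from this page:

```python
import re

# Split the file into <review> blocks and extract fields with regexes,
# since the pseudo-XML is not guaranteed to parse as real XML.
def iter_reviews(path: str):
    with open(path, encoding="utf-8", errors="replace") as f:
        raw = f.read()
    for block in re.findall(r"<review>(.*?)</review>", raw, flags=re.S):
        rating = re.search(r"<rating>\s*(.*?)\s*</rating>", block, flags=re.S)
        text = re.search(r"<review_text>\s*(.*?)\s*</review_text>", block, flags=re.S)
        yield {
            "rating": float(rating.group(1)) if rating else None,
            "text": text.group(1).strip() if text else None,
        }

for review in iter_reviews("kitchen/positive.review"):  # illustrative path
    print(review["rating"], (review["text"] or "")[:80])
    break
```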
### Link to download the data:
Multi-Domain Sentiment Dataset (30 MB) [domain_sentiment_data.tar.gz](https://www.cs.jhu.edu/~mdredze/datasets/sentiment/domain_sentiment_data.tar.gz)
Books unlabeled data (2 MB) [book.unlabeled.gz](https://www.cs.jhu.edu/~mdredze/datasets/sentiment/book.unlabeled.gz)
| JSSICE/Multi-Domain-Sentiment-Dataset | [
"region:us"
]
| 2022-12-04T15:31:40+00:00 | {} | 2022-12-04T17:38:17+00:00 |
e06c516556d8db1b484b75a78df73e6c6b646e47 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization-not-evaluated
* Dataset: autoevaluate/xsum-sample
* Config: autoevaluate--xsum-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-a0b7f8d6-f4e4-45b3-a9ae-cefcb10962b0-128122 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-04T15:48:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/xsum-sample"], "eval_info": {"task": "summarization", "model": "autoevaluate/summarization-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/xsum-sample", "dataset_config": "autoevaluate--xsum-sample", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-12-04T15:49:30+00:00 |
c8129741af418d9ae43cfc1fc4f285704e26035f | # Dataset Card for "farsi_paraphrase_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alighasemi/farsi_paraphrase_detection | [
"region:us"
]
| 2022-12-04T16:11:53+00:00 | {"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_equivalent", "1": "equivalent"}}}}], "splits": [{"name": "train", "num_bytes": 772532.4738052645, "num_examples": 6260}, {"name": "test", "num_bytes": 96628.26309736774, "num_examples": 783}, {"name": "validation", "num_bytes": 96628.26309736774, "num_examples": 783}], "download_size": 462094, "dataset_size": 965788.9999999999}} | 2022-12-06T18:39:55+00:00 |
a03b9f8cfec460acf50ec55efb80a445ec373a0a | ana-tamais/dogsnack-premier | [
"region:us"
]
| 2022-12-04T16:49:45+00:00 | {} | 2022-12-04T16:51:12+00:00 |
|
30c24cafa3cce43dc2aa87b370f3242eeebbac50 | ex-natura/histoire-universelle-du-regne-vegetal | [
"license:apache-2.0",
"region:us"
]
| 2022-12-04T17:57:16+00:00 | {"license": "apache-2.0"} | 2022-12-04T17:57:17+00:00 |
|
b452ae2d1fa6618796774c33e81a9bb91b9ee4a8 |
# Dataset Card for Babelbox Voice
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This database was created by Nordic Language Technology for the development of automatic speech recognition and dictation in Swedish.
It is redistributed as a Hugging Face dataset for convenience.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Swedish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | babelbox/babelbox_voice | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:sv",
"license:cc0-1.0",
"NST",
"region:us"
]
| 2022-12-04T18:25:31+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["sv"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Babelbox Voice", "tags": ["NST"]} | 2023-02-13T21:27:17+00:00 |
af91e037ba86e29b8022288d0cbb44f2be04a4e5 | # Dataset Card for "english_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lucadiliello/english_wikipedia | [
"region:us"
]
| 2022-12-04T18:59:49+00:00 | {"dataset_info": {"features": [{"name": "filename", "dtype": "string"}, {"name": "maintext", "dtype": "string"}, {"name": "source_domain", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10569005563, "num_examples": 4184712}], "download_size": 6144953788, "dataset_size": 10569005563}} | 2022-12-04T19:05:23+00:00 |
edb74e6c88abb38f0a0fc993a7068ab00a32db45 | # Dataset Card for "bookcorpusopen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lucadiliello/bookcorpusopen | [
"region:us"
]
| 2022-12-04T19:05:51+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6643459928, "num_examples": 17868}], "download_size": 3940589290, "dataset_size": 6643459928}} | 2022-12-04T19:09:30+00:00 |
6ba6b843709b2bc88d7c8c50a15b413dbc7de36b | aamirhs/pashto | [
"license:gpl-3.0",
"region:us"
]
| 2022-12-04T19:14:59+00:00 | {"license": "gpl-3.0"} | 2022-12-04T19:14:59+00:00 |
|
3d5d95dafa03478d4295699c28d3f61941b8bbbc | # Dataset Card for "Food_Final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | izou3/Food_Final | [
"region:us"
]
| 2022-12-04T20:11:44+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 88626665.0, "num_examples": 611}], "download_size": 88172412, "dataset_size": 88626665.0}} | 2022-12-04T20:13:58+00:00 |
778728fa3b6cae9334c506debd85de675c030e66 | # Dataset Card for "met-ds-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pnadel/met-ds-0 | [
"region:us"
]
| 2022-12-04T20:47:19+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 609545229.0, "num_examples": 628}], "download_size": 607994410, "dataset_size": 609545229.0}} | 2022-12-30T20:38:21+00:00 |
0381dbda010bebd91d17c3862eec40207c5e44dd | # Dataset Card for "tokenized-recipe-nlg-gpt2-ners-ingredients-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pratultandon/tokenized-recipe-nlg-gpt2-ners-ingredients-only | [
"region:us"
]
| 2022-12-05T01:28:02+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 1211189668, "num_examples": 2022671}, {"name": "test", "num_bytes": 63599856, "num_examples": 106202}], "download_size": 304111787, "dataset_size": 1274789524}} | 2022-12-06T04:14:52+00:00 |
868f2c18b0af7eade45171d6d2f23f6f83df62bc | Longor/BloodSmearClearity | [
"license:openrail",
"region:us"
]
| 2022-12-05T02:49:49+00:00 | {"license": "openrail"} | 2022-12-05T02:55:26+00:00 |
|
b46791def9f75712df2c6c3ba59949d51734df98 | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SatCat/github-issues | [
"region:us"
]
| 2022-12-05T03:07:14+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "string"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", 
"dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "closed_at", "dtype": "string"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 20549193, "num_examples": 5345}], "download_size": 5891736, "dataset_size": 20549193}} | 2022-12-05T03:07:30+00:00 |
56b9ec571b4c465c1115b5ba9906fcd64eb8f5ba |
# Dataset Card for [UCLA Bashini Tamil Dataset]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://bhashini.gov.in/ulca
- **Repository:** https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus
- **Paper:** https://arxiv.org/abs/2111.03945
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- Tamil
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
| parambharat/ucla_dataset | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ta",
"license:cc-by-4.0",
"tamil asr",
"nlp",
"arxiv:2111.03945",
"region:us"
]
| 2022-12-05T07:35:03+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ta"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "UCLA Bashini Tamil Dataset", "tags": ["tamil asr", "nlp"]} | 2022-12-05T08:46:24+00:00 |
c6b53fa441f6e464ae8a5820ff3a9bb28c434505 |
# Dataset Summary
Russian split for [xP3](https://huggingface.co/datasets/bigscience/xP3).
Using the script in this repository as an example, it is very easy to add new languages to xP3.
# Citation Information
```bibtex
@article{yong2022bloom+,
title={BLOOM+ 1: Adding Language Support to BLOOM for Zero-Shot Prompting},
author={Yong, Zheng-Xin and Schoelkopf, Hailey and Muennighoff, Niklas and Aji, Alham Fikri and Adelani, David Ifeoluwa and Almubarak, Khalid and Bari, M Saiful and Sutawika, Lintang and Kasai, Jungo and Baruwa, Ahmed and others},
journal={arXiv preprint arXiv:2212.09535},
year={2022}
}
```
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | bs-la/xP3ru | [
"language:ru",
"arxiv:2211.01786",
"region:us"
]
| 2022-12-05T07:35:25+00:00 | {"language": ["ru"]} | 2023-01-09T20:16:35+00:00 |
ba63d0d2a7493fb34891c9a7793632d901b5cb79 | zhanye/zhanye | [
"license:unknown",
"region:us"
]
| 2022-12-05T08:18:59+00:00 | {"license": "unknown"} | 2022-12-05T08:23:10+00:00 |
|
0e142d1253a815f7a180038baeb8eb480c8e1215 | # Dataset Card for "vie-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hieule/vie-book | [
"region:us"
]
| 2022-12-05T09:39:49+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7481512027, "num_examples": 3620527}], "download_size": 3948213824, "dataset_size": 7481512027}} | 2022-12-13T10:40:01+00:00 |
e2a7f3ebf24b2914f5a8f24e703b864771a98ffa | # Dataset Card for "TH_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Thanapon1998/TH_corpora_parliament_processed | [
"region:us"
]
| 2022-12-05T10:13:25+00:00 | {"dataset_info": {"features": [{"name": "review_body", "dtype": "string"}, {"name": "star_rating", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "splits": [{"name": "train", "num_bytes": 60691412, "num_examples": 40000}], "download_size": 25964649, "dataset_size": 60691412}} | 2022-12-05T10:13:32+00:00 |
2c60fe7070a9e86ca7ba90258114a2e6595bed09 |
# Dataset Card for ATCO2 test set corpus (1hr set)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCO2 project homepage](https://www.atco2.org/)
- **Repository:** [ATCO2 corpus](https://github.com/idiap/atco2-corpus)
- **Paper:** [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
### Dataset Summary
ATCO2 project aims at developing a unique platform allowing to collect, organize and pre-process air-traffic control (voice communication) data from air space. This project has received funding from the Clean Sky 2 Joint Undertaking (JU) under grant agreement No 864702. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and the Clean Sky 2 JU members other than the Union.
The project collected the real-time voice communication between air-traffic controllers and pilots, available either directly through publicly accessible radio frequency channels or indirectly from air-navigation service providers (ANSPs). In addition to the voice communication data, contextual information is available in the form of metadata (i.e., surveillance data). The dataset consists of two distinct packages:
- A corpus of 5000+ hours (pseudo-transcribed) of air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90/10% between males and females and the group contains native and non-native speakers of English.
- A corpus of 4 hours (transcribed) of air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90/10% between males and females and the group contains native and non-native speakers of English. This corpus has been transcribed with orthographic information in XML format with speaker noise information, SNR values and others.
- A free sample of the 4 hours transcribed data is in [ATCO2 project homepage](https://www.atco2.org/data)
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [Wav2Vec 2.0 LARGE model](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim).
### Languages and other details
The text and the recordings are in English. For more information see Table 3 and Table 4 of [ATCO2 corpus paper](https://arxiv.org/abs/2211.04054)
## Dataset Structure
### Data Fields
- `id (string)`: a recording identifier string for each example.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time
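As a loading illustration, the repository metadata exposes a single `test` split with 16 kHz audio; a minimal sketch:

```python
from datasets import load_dataset

# Load the 1h transcribed set; field names follow the list above.
ds = load_dataset("Jzuluaga/atco2_corpus_1h", split="test")

sample = ds[0]
print(sample["id"], sample["duration"])
print(sample["text"])
audio = sample["audio"]         # dict with 'array' and 'sampling_rate'
print(audio["sampling_rate"])   # 16000 per the repository metadata
```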
## Additional Information
### Licensing Information
The licensing status of the ATCO2-test-set-1h corpus is in the file **ATCO2-ASRdataset-v1_beta - End-User Data Agreement** in the data folder. Download the data in [ATCO2 project homepage](https://www.atco2.org/data)
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
| Jzuluaga/atco2_corpus_1h | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2211.04054",
"region:us"
]
| 2022-12-05T10:37:25+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "segment_start_time", "dtype": "float32"}, {"name": "segment_end_time", "dtype": "float32"}, {"name": "duration", "dtype": "float32"}], "splits": [{"name": "test", "num_bytes": 113872168.0, "num_examples": 871}], "download_size": 113467762, "dataset_size": 113872168.0}, "tags": ["audio", "automatic-speech-recognition", "en-atc", "en", "noisy-speech-recognition", "speech-recognition"]} | 2022-12-05T11:15:31+00:00 |
c86f52fc77bc019576c1a66375a47f2e64b93e6d |
### Dataset Summary
An augmented version of the test data of the LibriSpeech dataset.
The original test split was used as a basis, and augmentation was applied to add extraneous noise.
| joefox/LibriSpeech_test_noise | [
"license:apache-2.0",
"region:us"
]
| 2022-12-05T11:06:35+00:00 | {"license": "apache-2.0"} | 2022-12-05T11:47:51+00:00 |
b16eca1126c332cbd870e2dd4c7e474fa1b0b38e | # Dataset Card for "asnq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lucadiliello/asnq | [
"region:us"
]
| 2022-12-05T11:14:52+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "key", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 87612019, "num_examples": 466148}, {"name": "dev", "num_bytes": 87607015, "num_examples": 463914}, {"name": "train", "num_bytes": 3814936393, "num_examples": 20377568}], "download_size": 2602671423, "dataset_size": 3990155427}} | 2022-12-05T11:17:24+00:00 |
90a6e07f98068a397b406147acf323a6c64796db | # Dataset Card for DataTranslationDT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** None
- **Paper:**
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
`dataset = load_dataset("DataTranslationDT", lang1="disfluent", lang2="fluent")`
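Assuming the hub identifier `EmnaBou/DataTranslationDT` and a `train` split (neither is confirmed by this card), a record could be inspected as sketched below; the `translation` field with `disfluent`/`fluent` keys follows the card's metadata.

```python
from datasets import load_dataset

# Hypothetical usage: the hub id and split name are assumptions; the field
# layout (translation -> {disfluent, fluent}, plus an id) follows the metadata.
ds = load_dataset("EmnaBou/DataTranslationDT", lang1="disfluent", lang2="fluent")
example = ds["train"][0]
print(example["id"])
print(example["translation"]["disfluent"], "->", example["translation"]["fluent"])
```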
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
| EmnaBou/DataTranslationDT | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
]
| 2022-12-05T11:30:07+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "DataTranslationDT", "dataset_info": [{"config_name": "disluent_fluent", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["disfluent", "fluent"]}}}, {"name": "id", "dtype": "string"}]}]} | 2022-12-12T12:56:36+00:00 |
e53ffb99873a2daff1b1f8c06b93047e35902758 | PhilSch/Testging | [
"license:unknown",
"region:us"
]
| 2022-12-05T11:30:35+00:00 | {"license": "unknown"} | 2022-12-05T11:30:35+00:00 |
|
a809d84d5f8ac35d4e1737d347ec999922f594c5 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.openslr.org/127/
- **Repository:** https://github.com/MILE-IISc
- **Paper:** https://arxiv.org/abs/2207.13331
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Tamil transcribed speech corpus for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- Tamil
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Attribution 2.0 Generic (CC BY 2.0)
### Citation Information
```
@misc{mile_1,
  doi = {10.48550/ARXIV.2207.13331},
  url = {https://arxiv.org/abs/2207.13331},
  author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
  title = {Subword Dictionary Learning and Segmentation Techniques for Automatic Speech Recognition in Tamil and Kannada},
  publisher = {arXiv},
  year = {2022},
}
@misc{mile_2,
  doi = {10.48550/ARXIV.2207.13333},
  url = {https://arxiv.org/abs/2207.13333},
  author = {A, Madhavaraj and Pilar, Bharathi and G, Ramakrishnan A},
  title = {Knowledge-driven Subword Grammar Modeling for Automatic Speech Recognition in Tamil and Kannada},
  publisher = {arXiv},
  year = {2022},
}
```
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
| parambharat/mile_dataset | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ta",
"license:cc-by-2.0",
"Tamil ASR",
"Speech Recognition",
"arxiv:2207.13331",
"arxiv:2207.13333",
"region:us"
]
| 2022-12-05T11:37:10+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ta"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "IISc-MILE Tamil ASR Corpus", "tags": ["Tamil ASR", "Speech Recognition"]} | 2022-12-05T11:46:00+00:00 |
1b0e9acd975a677bb7505d9f1ba88cf0c2f884b4 |
# Dataset Card for BibleTTS Hausa
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://masakhane-io.github.io/bibleTTS/
- **Repository:** http://www.openslr.org/129/
- **Paper:** https://arxiv.org/abs/2207.03546
### Dataset Summary
BibleTTS is a large high-quality open Text-to-Speech dataset with up to 80 hours of single speaker, studio quality 48kHz recordings.
This is the Hausa part of the dataset. Aligned hours: 86.6, aligned verses: 40,603.
### Languages
Hausa
## Dataset Structure
### Data Fields
- `audio`: audio path
- `sentence`: transcription of the audio
- `locale`: always set to `ha`
- `book`: 3-char book encoding
- `verse`: verse id
### Data Splits
- `dev`: Book of Ezra (264 verses)
- `test`: Book of Colossians (124 verses)
- `train`: all other books (40215 verses)
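A minimal loading sketch, assuming the hub identifier from this card; the split names and sizes follow the list above.

```python
from datasets import load_dataset

# Split names follow this card: dev (Ezra), test (Colossians), train (the rest).
bible = load_dataset("vpetukhov/bible_tts_hausa")
print({name: split.num_rows for name, split in bible.items()})

row = bible["test"][0]
print(row["book"], row["verse"])  # 3-char book code and verse id
print(row["sentence"])            # transcription of the 48 kHz audio
```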
## Additional Information
See [this notebook](https://github.com/seads-org/hausa-speech-recognition/blob/6993c5c74379c93a2416acac6126b60ce6e52df8/notebooks/prepare_bible_dataset.ipynb) for the code used to process the dataset.
### Dataset Curators
The dataset was uploaded by [vpetukhov](https://github.com/VPetukhov/), who is not affiliated with the dataset authors. Please see the project page for more information.
### Licensing Information
The data is released under a commercial-friendly [CC-BY-SA](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
Meyer, Josh, et al. "BibleTTS: a large, high-fidelity, multilingual, and uniquely African speech corpus." arXiv preprint arXiv:2207.03546 (2022).
| vpetukhov/bible_tts_hausa | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ha",
"license:cc-by-sa-4.0",
"bible",
"arxiv:2207.03546",
"region:us"
]
| 2022-12-05T11:39:16+00:00 | {"annotations_creators": [], "language_creators": ["expert-generated"], "language": ["ha"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "text-to-speech"], "task_ids": [], "pretty_name": "BibleTTS Hausa", "tags": ["bible"]} | 2022-12-05T12:51:17+00:00 |
0cd86b382b3e54ba4febdb6924baadb10f7ce45b |
### Dataset Summary
An augmented version of the test split of the Mozilla Common Voice dataset (version 10, en).
The original test split was used as a basis, and augmentation was applied to add extraneous noise.
Dataset part: test
| joefox/Mozilla_Common_Voice_en_test_noise | [
"license:apache-2.0",
"region:us"
]
| 2022-12-05T12:00:21+00:00 | {"license": "apache-2.0"} | 2022-12-06T11:15:48+00:00 |
241bc5d7f6cb498fd3613e58e87d572d1a8d3971 |
### Dataset Summary
An augmented version of the test split of the Mozilla Common Voice dataset (version 10, ru).
The original test split was used as a basis, and augmentation was applied to add extraneous noise.
Dataset part: test
| joefox/Mozilla_Common_Voice_ru_test_noise | [
"license:apache-2.0",
"region:us"
]
| 2022-12-05T12:01:35+00:00 | {"license": "apache-2.0"} | 2022-12-05T16:08:43+00:00 |
ce6af0d17318e8d335f687f8380b78c0a477dd72 |
### Dataset Summary
An augmented version of the test part of the Russian LibriSpeech (RuLS) dataset.
The original test split was used as a basis, and augmentation was applied to add extraneous noise.
Dataset part: test
| joefox/Russian_LibriSpeech_RuLS_test_noise | [
"license:apache-2.0",
"region:us"
]
| 2022-12-05T12:03:09+00:00 | {"license": "apache-2.0"} | 2022-12-06T08:04:44+00:00 |
2ccc00af7fbabc0e52ff78242ad85d092ef0515c | lexshinobi/lexshinobi | [
"license:openrail",
"region:us"
]
| 2022-12-05T12:36:05+00:00 | {"license": "openrail"} | 2022-12-05T12:36:05+00:00 |
|
b7947cfca7ad33753bd31831ce5f4e3457926be5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification-not-evaluated
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-a0d22e-17376346 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T13:16:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification-not-evaluated", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T13:16:35+00:00 |
c305d1178b65afadcdb1cc333e6809b51e25c3d1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-a0d22e-17376347 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T13:16:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T13:16:35+00:00 |
fb50c2a442ad3bc02637558aa84a52572b4d5183 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/sagemaker-distilbert-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-73111d-17386348 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T13:44:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T13:45:13+00:00 |
6e1d51c08b728309212a96765eb85ab89aa2075d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/minilm-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-73111d-17386349 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T13:44:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/minilm-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T13:45:15+00:00 |
8cadcca6588ed0f8bdf158c75b76f35467185e73 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/sagemaker-distilbert-emotion-1
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-73111d-17386351 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T13:45:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion-1", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T13:45:28+00:00 |
ca42ec21794d329c77cfb645b6aca758c7d30db5 | # Dataset Card for "pl-text-images-5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/pl-text-images-5000 | [
"region:us"
]
| 2022-12-05T14:23:08+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 379657561.26, "num_examples": 4018}, {"name": "test", "num_bytes": 49155811.0, "num_examples": 516}, {"name": "validation", "num_bytes": 44168746.0, "num_examples": 466}], "download_size": 465789291, "dataset_size": 472982118.26}} | 2022-12-05T14:24:27+00:00 |
6eebe103a951d14d02f96d485542151acdd858e8 |
# Dataset Card for NoisyNER
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Estonian NER corpus](https://doi.org/10.15155/1-00-0000-0000-0000-00073L), [NoisyNER dataset](https://github.com/uds-lsv/NoisyNER)
- **Paper:** [Named Entity Recognition in Estonian](https://aclanthology.org/W13-2412/), [Analysing the Noise Model Error for Realistic Noisy Label Data](https://arxiv.org/abs/2101.09763)
- **Dataset:** NoisyNER
- **Domain:** News
- **Size of downloaded dataset files:** 6.23 MB
- **Size of the generated dataset files:** 9.53 MB
### Dataset Summary
NoisyNER is a dataset for the evaluation of methods to handle noisy labels when training machine learning models.
- Entity Types: `PER`, `ORG`, `LOC`
It is from the NLP/Information Extraction domain and was created through a realistic distant supervision technique. Some highlights and interesting aspects of the data are:
- Seven sets of labels with differing noise patterns to evaluate different noise levels on the same instances
- Full parallel clean labels available to compute upper performance bounds or study scenarios where a small amount of gold-standard data can be leveraged
- Skewed label distribution (typical for Named Entity Recognition tasks)
- For some label sets: noise level higher than the true label probability
- Sequential dependencies between the labels
For more details on the dataset and its creation process, please refer to the original authors' publication https://ojs.aaai.org/index.php/AAAI/article/view/16938 (published at AAAI'21).
This dataset is based on the Estonian NER corpus. For more details see https://aclanthology.org/W13-2412/
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in NoisyNER is in Estonian (BCP-47 `et`).
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'tokens': ['Tallinna', 'õhusaaste', 'suureneb', '.'],
'lemmas': ['Tallinn+0', 'õhu_saaste+0', 'suurene+b', '.'],
'grammar': ['_H_ sg g', '_S_ sg n', '_V_ b', '_Z_'],
'ner_tags': [5, 0, 0, 0]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `lemmas`: a `list` of `string` features.
- `grammar`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
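A short sketch of decoding these tag indices back to label names; the configuration names (`estner_clean` and `NoisyNER_labelset1` through `NoisyNER_labelset7`) follow this card's metadata.

```python
from datasets import load_dataset

# "estner_clean" holds the gold labels; swap in "NoisyNER_labelset1" ...
# "NoisyNER_labelset7" for the seven noisy label sets over the same instances.
ds = load_dataset("phucdev/noisyner", "estner_clean", split="train")

label_names = ds.features["ner_tags"].feature.names  # ['O', 'B-PER', ..., 'I-LOC']
example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")
```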
### Data Splits
The splits are the same across all configurations.
|train|validation|test|
|----:|---------:|---:|
|11365| 1480|1433|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
Tkachenko et al (2013) collected 572 news stories published in the local online newspapers [Delfi](http://delfi.ee/) and [Postimees](http://postimees.ee/) between 1997 and 2009. Selected articles cover both local and international news on a range of topics including politics, economics and sports. The raw text was preprocessed using the morphological disambiguator t3mesta ([Kaalep and
Vaino, 1998](https://www.cl.ut.ee/yllitised/kk_yhest_1998.pdf)) provided by [Filosoft](http://www.filosoft.ee/). The processing steps involve tokenization, lemmatization, part-of-speech tagging, grammatical and morphological analysis.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
According to Tkachenko et al. (2013), one of the authors manually tagged the corpus and the other author examined the tags, after which conflicting cases were resolved.
The total size of the corpus is 184,638 tokens. Tkachenko et al. (2013) provide the following number of named entities in the corpus:
| | PER | LOC | ORG | Total |
|--------|------|------|------|-------|
| All | 5762 | 5711 | 3938 | 15411 |
| Unique | 3588 | 1589 | 1987 | 7164 |
Hedderich et al. (2021) obtained the noisy labels through a distant supervision/automatic annotation approach. They extracted lists of named entities from Wikidata and matched them against words in the text via the ANEA tool ([Hedderich, Lange, and Klakow 2021](https://arxiv.org/abs/2102.13129)). They also used heuristic functions to correct errors caused by incomplete entity lists, grammatical complexities of Estonian that prevent simple string matching, or entity lists that conflict with each other. For instance, they normalized the grammatical form of a word or excluded certain high false-positive words. They provide seven sets of labels that differ in the noise process. Together with the original clean labels, this results in 8 different configurations.
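To make the noise source concrete, here is a toy illustration of list-based distant supervision — this is not the authors' ANEA pipeline, and the gazetteer below is invented for the example.

```python
# Toy gazetteer matching -- not the ANEA tool, which additionally applies
# heuristics for Estonian morphology; LOC_GAZETTEER is a made-up example list.
LOC_GAZETTEER = {"Tallinn", "Tartu"}

def distant_loc_tags(tokens):
    return ["B-LOC" if tok in LOC_GAZETTEER else "O" for tok in tokens]

# The inflected genitive "Tallinna" misses the plain string match, so the
# location goes untagged -- exactly the kind of error the noisy sets contain.
print(distant_loc_tags(["Tallinna", "õhusaaste", "suureneb", "."]))
# -> ['O', 'O', 'O', 'O']
```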
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{tkachenko-etal-2013-named,
title = "Named Entity Recognition in {E}stonian",
author = "Tkachenko, Alexander and
Petmanson, Timo and
Laur, Sven",
booktitle = "Proceedings of the 4th Biennial International Workshop on {B}alto-{S}lavic Natural Language Processing",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2412",
pages = "78--83",
}
@article{Hedderich_Zhu_Klakow_2021,
title={Analysing the Noise Model Error for Realistic Noisy Label Data},
author={Hedderich, Michael A. and Zhu, Dawei and Klakow, Dietrich},
volume={35},
url={https://ojs.aaai.org/index.php/AAAI/article/view/16938},
number={9},
journal={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2021},
month={May},
pages={7675-7684},
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | phucdev/noisyner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:et",
"license:cc-by-nc-4.0",
"newspapers",
"1997-2009",
"arxiv:2101.09763",
"arxiv:2102.13129",
"region:us"
]
| 2022-12-05T14:30:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["et"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "noisyner", "pretty_name": "NoisyNER", "tags": ["newspapers", "1997-2009"], "dataset_info": [{"config_name": "estner_clean", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6258130, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset1", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6194276, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset2", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6201072, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset3", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6231384, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset4", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], 
"download_size": 6201072, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset5", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6231384, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset6", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6226516, "dataset_size": 9525735}, {"config_name": "NoisyNER_labelset7", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "grammar", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC"}}}}], "splits": [{"name": "train", "num_bytes": 7544221, "num_examples": 11365}, {"name": "validation", "num_bytes": 986310, "num_examples": 1480}, {"name": "test", "num_bytes": 995204, "num_examples": 1433}], "download_size": 6229668, "dataset_size": 9525735}]} | 2023-01-05T12:09:58+00:00 |
b57da2cf4fdad395dc51b4fbae70d221894ad4ac | cristmedicals/lilithan | [
"license:openrail",
"region:us"
]
| 2022-12-05T14:44:53+00:00 | {"license": "openrail"} | 2022-12-05T14:50:31+00:00 |
|
cea834486a49858f650c75b486b6939d30696971 | # Dataset Card for "trecqa"
The TREC-QA dataset for Answer Sentence Selection. The dataset contains two additional splits, which are `clean` versions of the original development and test sets; the `clean` versions contain only questions that have at least one positive and one negative answer candidate.
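A minimal loading sketch; the split names (`train`, `train_all`, `dev`, `test`, `dev_clean`, `test_clean`) come from this card's metadata, and the meaning of `label` is an assumption not confirmed by the card.

```python
from datasets import load_dataset

# Split names follow the card metadata; "dev_clean"/"test_clean" keep only
# questions with at least one positive and one negative candidate.
trecqa = load_dataset("lucadiliello/trecqa")

ex = trecqa["test_clean"][0]
# Assumption: label == 1 marks a correct answer candidate for the question.
print(ex["question"], "|", ex["answer"], "|", ex["label"])
```
| lucadiliello/trecqa | [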
"region:us"
]
| 2022-12-05T15:04:54+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "answer", "dtype": "string"}, {"name": "key", "dtype": "int64"}, {"name": "question", "dtype": "string"}], "splits": [{"name": "test_clean", "num_bytes": 298298, "num_examples": 1442}, {"name": "train_all", "num_bytes": 12030615, "num_examples": 53417}, {"name": "dev_clean", "num_bytes": 293075, "num_examples": 1343}, {"name": "train", "num_bytes": 1517902, "num_examples": 5919}, {"name": "test", "num_bytes": 312688, "num_examples": 1517}, {"name": "dev", "num_bytes": 297598, "num_examples": 1364}], "download_size": 6215944, "dataset_size": 14750176}} | 2022-12-05T15:10:15+00:00 |