sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---|
0c65aa7afa2205248e4ff201c467f79f55f2aa07 | # Dataset Card for "wikiqa"
WikiQA dataset for Answer Sentence Selection. The dataset contains 2 additional splits, which are `clean` versions of the original development and test sets. The `clean` versions contain only questions that have at least one positive and one negative answer candidate. | lucadiliello/wikiqa | [
"region:us"
]
| 2022-12-05T15:06:32+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "answer", "dtype": "string"}, {"name": "key", "dtype": "int64"}, {"name": "question", "dtype": "string"}], "splits": [{"name": "test_clean", "num_bytes": 449691, "num_examples": 2341}, {"name": "dev_clean", "num_bytes": 214886, "num_examples": 1126}, {"name": "train", "num_bytes": 4017460, "num_examples": 20360}, {"name": "test", "num_bytes": 1208042, "num_examples": 6165}, {"name": "dev", "num_bytes": 530358, "num_examples": 2733}], "download_size": 3111254, "dataset_size": 6420437}} | 2022-12-05T15:09:31+00:00 |
6925593a2b69dfeeef905ec5d5a8763e3b6012ba | ## Context
I got the inspiration for this dataset from the [Rick&Morty Scripts](https://www.kaggle.com/datasets/andradaolteanu/rickmorty-scripts) by [Andrada Olteanu](https://www.kaggle.com/andradaolteanu), but felt that the dataset was a little small and outdated.
This dataset includes almost all episodes up to Season 5. More data will be added.
## Content
Rick and Morty Transcripts:
- index: index of the row
- speaker: the character's name
- dialogue: the dialogue of the character
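A minimal loading sketch (an assumption that the repository's files resolve to a single `train` split under the Hub's default loader):
```python
from datasets import load_dataset

# assumption: the files are parsed into a "train" split with the fields above
ds = load_dataset("Prarabdha/Rick_and_Morty_Transcript", split="train")
row = ds[0]
print(row["speaker"], ":", row["dialogue"])
```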
## Acknowledgements
Thanks to the transcripts made available by
- [RickandMorty.fandom.com](https://rickandmorty.fandom.com/)
- [RickandMorty.newtfire.org](http://rickandmorty.newtfire.org/transcripts.html) | Prarabdha/Rick_and_Morty_Transcript | [
"license:mit",
"region:us"
]
| 2022-12-05T16:02:12+00:00 | {"license": "mit"} | 2022-12-05T16:09:45+00:00 |
2ea389870752c7c33cc6cca3351e46c18774b79a | # Dataset Card for "ESCWA"
Collected over two days of meetings of the United Nations Economic and Social Commission for West Asia (ESCWA) in 2019. The data includes intrasentential code alternation between Arabic and English. In the case of Algerian, Tunisian, and Moroccan native speakers, the switch is between Arabic and French.
The 2.8-hour ESCWA corpus includes dialectal Arabic, with a Code Mixing Index (CMI) of ~28%.
More details about ESCWA can be found at https://arabicspeech.org/escwa/.
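A hedged sketch for loading the corpus and inspecting one recording (the `train` split and the `audio`/`transcription` features are taken from the metadata below; decoding the audio requires an audio backend such as `soundfile`):
```python
from datasets import load_dataset

ds = load_dataset("arbml/ESCWA", split="train")
sample = ds[0]
print(sample["transcription"][:200])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```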
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arbml/ESCWA | [
"region:us"
]
| 2022-12-05T16:03:38+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 783712001.0, "num_examples": 24}], "download_size": 766073404, "dataset_size": 783712001.0}} | 2022-12-05T20:06:58+00:00 |
f2ea4b91d6af899fac610c608ae8e90f1cb28448 | # Dataset Card for "clinical_trial_texts"
These are the texts of clinical trials downloaded from https://ClinicalTrials.gov/AllAPIJSON.zip on Dec 3rd, 2022.
The total number of trials is 434,977.
The number of tokens is 2,184,397,556 (~2.1B tokens).
The token counts are from the default BERT tokenizer on Hugging Face.
This data can be used for pretraining in the clinical trial and biomedical domains.
If you use this data, please acknowledge @domenicrosati and link to this dataset.
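A minimal sketch for iterating over the corpus; streaming is an assumption made here to avoid materializing the ~5 GB download locally:
```python
from datasets import load_dataset

# stream the train split so nothing is downloaded up front
ds = load_dataset("domenicrosati/clinical_trial_texts", split="train", streaming=True)
for trial in ds.take(3):
    print(trial["trial_id"], trial["text"][:120])
```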
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | domenicrosati/clinical_trial_texts | [
"region:us"
]
| 2022-12-05T16:45:55+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "trial_id", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 22784316806, "num_examples": 434977}], "download_size": 5376659326, "dataset_size": 22784316806}} | 2022-12-05T17:34:13+00:00 |
615c6cbedea89e67705b09020e4046d33698bbeb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-1-1
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-5bee1b-2343673799 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T17:07:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-1-1", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-12-05T17:22:59+00:00 |
4249ba622beb5a668ee5eedc1e0f44c029254b13 | # Overview
This dataset is a subset of the Hugging Face wikipedia dataset with ~70,000 rows, each about a person on Wikipedia.
Each row contains the original Wikipedia text split into sentences,
as well as a paraphrased version of each sentence. For both versions, the full text is provided with the entity the Wikipedia page is about masked.
# Features
- id: the id in the original dataset
- url: the link to the wikipedia page
- title: the title of the wikipedia page
- text: the original wikipedia text
- sentences: the text split into sentences
- paraphrased_sentences: the text split into sentences, with each sentence paraphrased (i.e., slightly mutated)
- masked_text_original: the original text with the entity masked at every occurrence (`<mask>` as the mask token)
- masked_entities_original: array of entities masked in masked_text_original
- masked_text_paraphrased: the paraphrased text with the entity masked at every occurrence
- masked_entities_paraphrased: array of entities masked in masked_text_paraphrased
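As a quick sanity check of the fields above, each `<mask>` token in `masked_text_original` should line up with one entry in `masked_entities_original`. A sketch, assuming a `train` split name:
```python
from datasets import load_dataset

ds = load_dataset("Skatinger/wikipedia-persons-masked", split="train")
row = ds[0]
# one <mask> token per masked entity
assert row["masked_text_original"].count("<mask>") == len(row["masked_entities_original"])
print(row["title"], row["masked_entities_original"][:3])
```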
---
annotations_creators:
- no-annotation
- machine-generated
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: wikipedia persons paraphrased and masked
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
tags: []
task_categories:
- fill-mask
task_ids:
- slot-filling | Skatinger/wikipedia-persons-masked | [
"region:us"
]
| 2022-12-05T18:23:48+00:00 | {} | 2023-01-19T15:08:52+00:00 |
5cff6ebd7229a9f33a4ba18bd66795cb9dc2b2b6 | # Dataset Card for "sarcastic-news-headlines-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liamvbetts/sarcastic-news-headlines-1 | [
"region:us"
]
| 2022-12-05T18:42:23+00:00 | {"dataset_info": {"features": [{"name": "headline", "dtype": "string"}, {"name": "is_sarcastic", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1947706, "num_examples": 26709}], "download_size": 1328814, "dataset_size": 1947706}} | 2022-12-05T18:42:30+00:00 |
b7191ebd3e0c46a22b1c516ebb3eaa7bf61a1417 | rschwabco/ms_macro_big | [
"license:mit",
"region:us"
]
| 2022-12-05T18:58:17+00:00 | {"license": "mit"} | 2022-12-05T23:10:50+00:00 |
|
006684c68df888b6fe2789e83c12731e5cfee854 | # Dataset Card for "sarcastic-news-headlines-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liamvbetts/sarcastic-news-headlines-v2 | [
"region:us"
]
| 2022-12-05T19:02:06+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1947706, "num_examples": 26709}], "download_size": 1328187, "dataset_size": 1947706}} | 2022-12-05T19:03:15+00:00 |
284f57e8bb5ed25b5e309cc0136b2cefcf2ef166 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-34e541-17396354 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T19:24:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewiswatson/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T19:24:44+00:00 |
3e0bb192ffa2691ac4a497c245a893b1189d82e4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/minilm-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-34e541-17396352 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T19:24:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/minilm-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T19:24:42+00:00 |
3d24acf758fe45c0496c9eef0de8e343ee6da88f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/sagemaker-distilbert-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-34e541-17396353 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T19:24:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-05T19:24:45+00:00 |
40fef67a9c92949796bd9e1e396b278b28fc3cc2 | boats | Telecom-BGDAI/boats | [
"region:us"
]
| 2022-12-05T19:45:34+00:00 | {} | 2022-12-05T19:47:13+00:00 |
d3ccf9d510142f299b65d9d1be0189be6ebe198a | # Dataset Card for "librispeech5k_augm_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech5k_augm_train | [
"region:us"
]
| 2022-12-05T19:57:05+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train.360", "num_bytes": 6796635145.0, "num_examples": 5000}], "download_size": 3988908181, "dataset_size": 6796635145.0}} | 2022-12-05T20:11:07+00:00 |
dc67ba949f055429bca9598c7761fee26594055b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: autoevaluate/summarization
* Dataset: autoevaluate/xsum-sample
* Config: autoevaluate--xsum-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__xsum-sample-autoevaluate__xsum-sample-437a8a-17406355 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T20:08:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/xsum-sample"], "eval_info": {"task": "summarization", "model": "autoevaluate/summarization", "metrics": [], "dataset_name": "autoevaluate/xsum-sample", "dataset_config": "autoevaluate--xsum-sample", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-12-05T20:09:15+00:00 |
05901479683557d2843ab4ab0b3105193403c690 | # ObjectNet (Test set only)
Original paper: [ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models](https://objectnet.dev/objectnet-a-large-scale-bias-controlled-dataset-for-pushing-the-limits-of-object-recognition-models.pdf)
Homepage: https://objectnet.dev/
Bibtex:
```
@inproceedings{NEURIPS2019_97af07a1,
author = {Barbu, Andrei and Mayo, David and Alverio, Julian and Luo, William and Wang, Christopher and Gutfreund, Dan and Tenenbaum, Josh and Katz, Boris},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
pages = {},
publisher = {Curran Associates, Inc.},
title = {ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models},
url = {https://proceedings.neurips.cc/paper/2019/file/97af07a14cacba681feacf3012730892-Paper.pdf},
volume = {32},
year = {2019}
}
``` | djghosh/wds_objectnet_test | [
"region:us"
]
| 2022-12-05T20:28:14+00:00 | {} | 2022-12-12T21:18:15+00:00 |
e8aad2d4865abb219eda627f20161a9f5e2934ad | # RESISC45 (Test set only)
Original paper: [Remote Sensing Image Scene Classification: Benchmark and State of the Art](https://arxiv.org/abs/1703.00121v1)
Homepage (broken link): http://www.escience.cn/people/JunweiHan/NWPU-RESISC45.html
Bibtex:
```
@article{DBLP:journals/corr/ChengHL17,
author = {Gong Cheng and
Junwei Han and
Xiaoqiang Lu},
title = {Remote Sensing Image Scene Classification: Benchmark and State of
the Art},
journal = {CoRR},
volume = {abs/1703.00121},
year = {2017},
url = {http://arxiv.org/abs/1703.00121},
eprinttype = {arXiv},
eprint = {1703.00121},
timestamp = {Mon, 02 Dec 2019 09:32:19 +0100},
biburl = {https://dblp.org/rec/journals/corr/ChengHL17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | djghosh/wds_vtab-resisc45_test | [
"arxiv:1703.00121",
"region:us"
]
| 2022-12-05T20:55:55+00:00 | {} | 2022-12-12T22:10:10+00:00 |
0840d3af4684ffee0372cb54a2f9b76fd40bcccc |
# Dataset Card for Czech Court Decisions NER
## Dataset Description
Czech Court Decisions NER is a dataset of 300 court decisions published by The Supreme Court of the Czech Republic and the Constitutional Court of the Czech Republic.
In the documents, four types of named entities are annotated.
## Dataset Features
Each sample contains:
- `filename`: file name in the original dataset
- `text`: court decision document in plain text
- `entities`: list of selected entities. Each entity contains:
- `category_id`: integer identifier of the entity category
- `category_str`: human-friendly category name in Czech (verbalizer)
- `start`: index on which the entity starts in the source text
- `end`: index on which the entity ends in the source text
- `content`: entity content, it was created as `text[start:end]`
- `entity_id`: unique entity string identifier
- `refers_to`: some entities (mostly of category 'Reference na rozhodnutí soudu') refer to a specific other entity. `refers_to` attribute contains the `entity_id` of the referred entity
The `entity_id` field was checked to be globally unique (across data samples and dataset splits).
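A small sketch exercising the guarantee above that `content == text[start:end]` (the `train` split name is an assumption, as is `entities` decoding to a list of dicts; with a `Sequence` schema it would come back as a dict of lists instead):
```python
from datasets import load_dataset

ds = load_dataset("fewshot-goes-multilingual/cs_czech-court-decisions-ner", split="train")
doc = ds[0]
for ent in doc["entities"][:5]:
    # per the card, content was created as text[start:end]
    assert doc["text"][ent["start"]:ent["end"]] == ent["content"]
    print(ent["category_str"], "->", ent["content"])
```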
## Entity categories
The list of the recognized entities (`category_id`, `category_str` pairs):
```python3
{
0: 'Soudní instituce',
1: 'Reference na rozhodnutí soudu',
2: 'Účinnost',
3: 'Reference zákonu'
}
```
## Dataset Source
The dataset is a preprocessed adaptation of the existing Czech Court Decisions Dataset ([project info](https://ufal.mff.cuni.cz/ccdd), [link to data](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2853)). This adaptation contains (almost) the same data, converted to a more convenient format and with leaked XML-like tags stripped from the texts.
The category names (verbalizers) were added by a Czech native speaker.
## Citation
Cite authors of the [original dataset](https://ufal.mff.cuni.cz/ccdd):
```bibtex
@misc{11234/1-2853,
title = {Czech Court Decisions Dataset},
author = {Kr{\'{\i}}{\v z}, Vincent and Hladk{\'a}, Barbora},
url = {http://hdl.handle.net/11234/1-2853},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Creative Commons - Attribution-{NonCommercial}-{ShareAlike} 4.0 International ({CC} {BY}-{NC}-{SA} 4.0)},
year = {2014}
}
``` | fewshot-goes-multilingual/cs_czech-court-decisions-ner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:cs",
"license:cc-by-nc-sa-4.0",
"czech NER",
"court decisions",
"region:us"
]
| 2022-12-05T22:03:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["cs"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Czech Court Decisions NER", "tags": ["czech NER", "court decisions"]} | 2022-12-05T23:01:04+00:00 |
2de016a42923bd78c9ddac2e570af07745b3c936 | # Dataset Card for "xsum_tiny_ood"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SpeedOfMagic/xsum_tiny_ood | [
"region:us"
]
| 2022-12-05T22:21:21+00:00 | {"dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2343786.0, "num_examples": 1100}, {"name": "dev", "num_bytes": 398593.0, "num_examples": 200}, {"name": "test", "num_bytes": 468841.0, "num_examples": 200}], "download_size": 2101221, "dataset_size": 3211220.0}} | 2022-12-05T22:22:47+00:00 |
0d3d437768d386e97e513943daf34adb44e7c3b5 | # SUN397 (Test set only)
Original paper: [SUN Database: Exploring a Large Collection of Scene Categories](https://vision.princeton.edu/projects/2010/SUN/paperIJCV.pdf)
Homepage: https://vision.princeton.edu/projects/2010/SUN/
Bibtex:
```
@ARTICLE{Xiao2016-ix,
title = "{SUN} database: Exploring a large collection of scene categories",
author = "Xiao, Jianxiong and Ehinger, Krista A and Hays, James and
Torralba, Antonio and Oliva, Aude",
journal = "Int. J. Comput. Vis.",
publisher = "Springer Science and Business Media LLC",
volume = 119,
number = 1,
pages = "3--22",
month = aug,
year = 2016,
language = "en"
}
``` | djghosh/wds_sun397_test | [
"region:us"
]
| 2022-12-05T22:21:47+00:00 | {} | 2022-12-12T22:20:12+00:00 |
69a8ae244cfd6d825135ae01591a2582ed020c56 | # AutoTrain Dataset for project: acc_keys
## Dataset Description
This dataset has been automatically processed by AutoTrain for project acc_keys.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 2,
"text": " workon"
},
{
"target": 5,
"text": " contact"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=11, names=['A101', 'A102', 'A103', 'A104', 'A105', 'A106', 'A107', 'A108', 'A109', 'A110', 'A112'], id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 457 |
| valid | 120 |
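A brief loading sketch for the two splits listed above:
```python
from datasets import load_dataset

ds = load_dataset("alanila/autotrain-data-acc_keys")
print(ds["train"][0])                            # e.g. {"target": 2, "text": " workon"}
print(ds["train"].features["target"].names[:3])  # class-label verbalizers
```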
| alanila/autotrain-data-acc_keys | [
"task_categories:text-classification",
"region:us"
]
| 2022-12-05T22:25:46+00:00 | {"task_categories": ["text-classification"]} | 2022-12-05T22:26:22+00:00 |
8c4b45bd4e8683f0d8bdf2b64696a6420e989c4d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Palak/albert-base-v2_squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@18st13](https://huggingface.co/18st13) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-9c2592-2347273870 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T23:51:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Palak/albert-base-v2_squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-05T23:54:03+00:00 |
ee280a672a3428cd539f6a4df09dcd26488137a5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Palak/albert-base-v2_squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@18st13](https://huggingface.co/18st13) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-26d159-2347473871 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-05T23:51:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Palak/albert-base-v2_squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-05T23:54:05+00:00 |
1033fef4e004752cc109469e787afbcedfa9e75b | Nise/NisePhotos | [
"license:openrail",
"region:us"
]
| 2022-12-06T00:30:14+00:00 | {"license": "openrail"} | 2022-12-06T00:32:11+00:00 |
|
e3015bd8c20f083eef01c544aa2a690ea886fff2 | thomasjeon/dulls | [
"license:mit",
"region:us"
]
| 2022-12-06T00:35:36+00:00 | {"license": "mit"} | 2022-12-06T00:36:31+00:00 |
|
ac9f3639760fe11e707d945d179d82afbb447af0 | # Dataset Card for "stack-filtered-pii-1M-java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | loubnabnl/stack-filtered-pii-1M-java | [
"region:us"
]
| 2022-12-06T01:26:31+00:00 | {"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "size", "dtype": "int64"}, {"name": "ext", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_repo_head_hexsha", "dtype": "string"}, {"name": "max_stars_repo_licenses", "sequence": "string"}, {"name": "max_stars_count", "dtype": "float64"}, {"name": "max_stars_repo_stars_event_min_datetime", "dtype": "string"}, {"name": "max_stars_repo_stars_event_max_datetime", "dtype": "string"}, {"name": "max_issues_repo_path", "dtype": "string"}, {"name": "max_issues_repo_name", "dtype": "string"}, {"name": "max_issues_repo_head_hexsha", "dtype": "string"}, {"name": "max_issues_repo_licenses", "sequence": "string"}, {"name": "max_issues_count", "dtype": "float64"}, {"name": "max_issues_repo_issues_event_min_datetime", "dtype": "string"}, {"name": "max_issues_repo_issues_event_max_datetime", "dtype": "string"}, {"name": "max_forks_repo_path", "dtype": "string"}, {"name": "max_forks_repo_name", "dtype": "string"}, {"name": "max_forks_repo_head_hexsha", "dtype": "string"}, {"name": "max_forks_repo_licenses", "sequence": "string"}, {"name": "max_forks_count", "dtype": "float64"}, {"name": "max_forks_repo_forks_event_min_datetime", "dtype": "string"}, {"name": "max_forks_repo_forks_event_max_datetime", "dtype": "string"}, {"name": "avg_line_length", "dtype": "float64"}, {"name": "max_line_length", "dtype": "int64"}, {"name": "alphanum_fraction", "dtype": "float64"}, {"name": "index", "dtype": "int64"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5117781075, "num_examples": 1000000}], "download_size": 1880524833, "dataset_size": 5117781075}} | 2022-12-06T01:28:17+00:00 |
d16cd5478bd7423b8dfd8f206e2bc59f921d6ba6 | # python datasets
| kidd2012/kidd-github-issues | [
"region:us"
]
| 2022-12-06T03:37:20+00:00 | {} | 2022-12-06T05:28:52+00:00 |
a51db0a390d45703ce29b1422e116b93a94a3f1c | # Dataset Card for "nsc_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dlproject/nsc_test | [
"region:us"
]
| 2022-12-06T04:26:53+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "attention_mask", "sequence": {"sequence": "int32"}}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 233563168, "num_examples": 1769}], "download_size": 222492491, "dataset_size": 233563168}} | 2022-12-06T04:29:03+00:00 |
8ff0f0dc8eb0711a340f4715b7104b9e8603999c | This repository contains the intermediate checkpoints for the model https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K.
Each "epoch" corresponds to an additional 32B/256 samples seen.
The purpose of releasing these checkpoints and optimizer states is to enable analysis.
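A hedged sketch for pulling a single checkpoint for analysis; the filename below is hypothetical, so list the repository to find the real checkpoint names:
```python
import torch
from huggingface_hub import hf_hub_download

# "epoch_97.pt" is a hypothetical filename; browse the repo for actual names
path = hf_hub_download(
    repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K-all-checkpoints",
    filename="epoch_97.pt",
    repo_type="dataset",
)
state = torch.load(path, map_location="cpu")
print(sorted(state.keys())[:5])
```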
For the first 121 "epochs", training was done with float16 mixed precision before switching to bfloat16 after a loss blow-up. | laion/CLIP-ViT-H-14-laion2B-s32B-b79K-all-checkpoints | [
"license:mit",
"region:us"
]
| 2022-12-06T04:43:52+00:00 | {"license": "mit"} | 2022-12-09T03:23:17+00:00 |
03da623385d6ec3158e859d0b6096ccf8932c1be | Den4ikAI/ruWikiHow_instructions | [
"license:mit",
"region:us"
]
| 2022-12-06T04:54:58+00:00 | {"license": "mit"} | 2022-12-06T04:59:58+00:00 |
|
60d053949f6f112d60373b33c603523b8d473afb | # Dataset Card for "tokenized-recipe-nlg-gpt2-ingredients-to-recipe-end"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pratultandon/tokenized-recipe-nlg-gpt2-ingredients-to-recipe-end | [
"region:us"
]
| 2022-12-06T04:59:44+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2217334238, "num_examples": 2022671}, {"name": "test", "num_bytes": 116785866, "num_examples": 106202}], "download_size": 749380879, "dataset_size": 2334120104}} | 2022-12-06T05:35:29+00:00 |
11a2d28556692050723773cd8e02b02498b646a4 |
See [DistilGPT2 Stable Diffusion](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion) | FredZhang7/krea-ai-prompts | [
"license:mit",
"region:us"
]
| 2022-12-06T05:23:09+00:00 | {"license": "mit"} | 2022-12-06T05:37:07+00:00 |
c8bfafd883966295d3a443d40af81681814e0958 | # Dataset Card for "olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-exact-dedup-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-exact-dedup-only | [
"region:us"
]
| 2022-12-06T06:03:22+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 33889845422, "num_examples": 5782492}], "download_size": 20360314176, "dataset_size": 33889845422}} | 2022-12-06T16:58:13+00:00 |
e2f37e95cc5eb38359b6aefc2cbf98a50fd1b7e4 |
## Dataset Description
A small subset (~1,000 samples per subset) of the [pile-v2]() dataset, with 1,000 random samples drawn from each of the original dataset's subsets. The dataset has 255MB of text (code and English).
## Languages
The dataset contains technical text on programming languages as well as natural language, with the following subsets:
- Bible
- TED2020
- PileOfLaw
- StackExchange
- GithubIssues
- Opensubtitles
- USPTO
- S2ORC
- DevDocs
- CodePileReddit2022
- USENET
- GNOME
- ASFPublicMail
- PileV2Reddit2020
- CodePilePosts
- Discourse
- Tanzil
- arXiv
- UbuntuIRC
- PubMed
- CodePileReddit2020
- CodePileReddit2021
- GlobalVoices
- FreeLaw_Options
- PileV2Posts
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("CarperAI/pile-v2-small")
```
### How to use it
You can either load the whole dataset as above, or load a specific subset, such as arxiv, by specifying the data directory:
```python
load_dataset("CarperAI/pile-v2-small", data_dir="data/arxiv")
```
| CarperAI/pile-v2-small-filtered | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:en",
"language:code",
"region:us"
]
| 2022-12-06T06:08:44+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["en", "code"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]} | 2022-12-06T14:16:11+00:00 |
c0accde654b76bac81664947f4a27c983f1b8a41 | wuming156/yu | [
"license:unknown",
"region:us"
]
| 2022-12-06T06:22:03+00:00 | {"license": "unknown"} | 2023-09-03T14:04:07+00:00 |
|
025b58466d24eaa7462689bad4fd7c0aa2fdd631 |
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
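Following the indexing advice above, a short sketch that decodes a single example; the config and split names are taken from the metadata below, and decoding the FLAC audio requires an audio backend such as `soundfile`:
```python
from datasets import load_dataset

ds = load_dataset("nguyenvulebinh/libris_clean_100", "clean", split="validation")
# query the sample index first so only this one file is decoded and resampled
audio = ds[0]["audio"]
print(ds[0]["text"])
print(audio["sampling_rate"], audio["array"].shape)
```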
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | nguyenvulebinh/libris_clean_100 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-06T07:19:09+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["speaker-identification"], "paperswithcode_id": "librispeech-1", "pretty_name": "LibriSpeech", "dataset_info": [{"config_name": "clean", "features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train.100", "num_bytes": 6619683041, "num_examples": 28539}, {"name": "train.360", "num_bytes": 23898214592, "num_examples": 104014}, {"name": "validation", "num_bytes": 359572231, "num_examples": 2703}, {"name": "test", "num_bytes": 367705423, "num_examples": 2620}], "download_size": 30121377654, "dataset_size": 31245175287}, {"config_name": "other", "features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train.500", "num_bytes": 31810256902, "num_examples": 148688}, {"name": "validation", "num_bytes": 337283304, "num_examples": 2864}, {"name": "test", "num_bytes": 352396474, "num_examples": 2939}], "download_size": 31236565377, "dataset_size": 32499936680}, {"config_name": "all", "features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train.clean.100", "num_bytes": 6627791685, "num_examples": 28539}, {"name": "train.clean.360", "num_bytes": 23927767570, "num_examples": 104014}, {"name": "train.other.500", "num_bytes": 31852502880, "num_examples": 148688}, {"name": "validation.clean", "num_bytes": 359505691, "num_examples": 2703}, {"name": "validation.other", "num_bytes": 337213112, "num_examples": 2864}, {"name": "test.clean", "num_bytes": 368449831, "num_examples": 2620}, {"name": "test.other", "num_bytes": 353231518, "num_examples": 2939}], "download_size": 61357943031, "dataset_size": 63826462287}]} | 2022-12-06T07:28:15+00:00 |
816402b14d619b5c794d67b0e66da778ff36a95b | Vickie1998/test_image | [
"region:us"
]
| 2022-12-06T07:35:55+00:00 | {} | 2022-12-09T00:29:21+00:00 |
|
b93d8a9da47a40a289a5ac0914e16ceb3248bafd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: gigaword
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@geiright](https://huggingface.co/geiright) for evaluating this model. | autoevaluate/autoeval-eval-gigaword-default-2df74a-2350473902 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-06T08:07:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["gigaword"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["rouge"], "dataset_name": "gigaword", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-12-06T08:12:27+00:00 |
6843a3547a7fe029a9230bfc1ee99651b6033f5a | lmwang/MultiSports | [
"license:mit",
"region:us"
]
| 2022-12-06T08:30:18+00:00 | {"license": "mit"} | 2022-12-06T08:30:18+00:00 |
|
01600ce7eabbf42a5ee7c82b82f49a11597b3a5f |
# Dataset Card for MultiSports
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://deeperaction.github.io/datasets/multisports.html
- **Repository:** https://github.com/MCG-NJU/MultiSports
- **Paper:** https://arxiv.org/abs/2105.07404
- **Leaderboard:** https://paperswithcode.com/dataset/multisports
- **Point of Contact:** mailto: [email protected]
### Dataset Summary
Spatio-temporal action localization is an important and challenging problem in video understanding. Previous action detection benchmarks are limited in aspects of small numbers of instances in a trimmed video or low-level atomic actions. MultiSports is a multi-person dataset of spatio-temporal localized sports actions. Please refer to [this paper](https://arxiv.org/abs/2105.07404) for more details. Please refer to [this repository](https://github.com/MCG-NJU/MultiSports) for evaluation.
### Supported Tasks and Leaderboards
- `Spatial-temporal action localization`
Details about evaluation can be found in the [GitHub Repository](https://github.com/mcG-NJU/MultiSports). Previous challenge results can be found in [this page](https://deeperaction.github.io/results/index.html) and [this CodaLab challenge](https://codalab.lisn.upsaclay.fr/competitions/3736).
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
Demo is available on [dataset homepage](https://deeperaction.github.io/datasets/multisports.html).
The dataset contains ```rawframes.tar``` and ```multisports_GT.pkl```. The GT pkl file is a dictionary with the following structure:
```
{
'labels': ['label1', 'label2', ...],
'train_videos': [['train_vid_1', 'train_vid_2', ...]],
'test_videos': [['test_vid_1', 'test_vid_2', ...]],
'nframes': {
'vid_1': nframes_1,
'vid_2': nframes_2,
...
},
'resolution': {
'vid_1': resolution_1,
'vid_2': resolution_2,
...
},
'gttubes': {
'vid_1': {
'label_1': [tube_1, tube_2, ...],
'label_2': [tube_1, tube_2, ...],
...
}
...
}
}
```
Here a ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```.
### Data Fields
Raw frames are organized according to their sport category. The pickle file of GT contains the following fields.
- labels: list of labels
- train_videos: a list with one split element containing the list of training videos
- test_videos: a list with one split element containing the list of validation videos
- nframes: dictionary that gives the number of frames for each video
- resolution: dictionary that output a tuple ```(h,w)``` of the resolution for each video
- gttubes: dictionary that contains the gt tubes for each video. Gt tubes are dictionary that associates from each index of label, a list of tubes. A ```tube``` is a ```numpy.ndarray``` with ```nframes``` rows and 5 columns ```<frame number> <x1> <y1> <x2> <y2>```.
Please note that the label index starts from 0 and the frame index starts from 1. For the label index ```i```, the label name is ```labels[i]```.
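A minimal sketch that walks the structure above (the pickle path is an assumption; adjust it to wherever you extracted the download):
```python
import pickle

with open("multisports_GT.pkl", "rb") as f:  # path assumed
    gt = pickle.load(f)

labels = gt["labels"]
vid = gt["train_videos"][0][0]  # first video of the single train split
for label_idx, tubes in gt["gttubes"][vid].items():
    # each tube: ndarray with rows <frame number> <x1> <y1> <x2> <y2>
    print(labels[label_idx], "-", len(tubes), "tube(s), first shape", tubes[0].shape)
```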
<details>
<summary>
Click here to see the full list of MultiSports class labels mapping:
</summary>
|id|Class|
|--|-----|
| 0 | aerobic push up |
| 1 | aerobic explosive push up |
| 2 | aerobic explosive support |
| 3 | aerobic leg circle |
| 4 | aerobic helicopter |
| 5 | aerobic support |
| 6 | aerobic v support |
| 7 | aerobic horizontal support |
| 8 | aerobic straight jump |
| 9 | aerobic illusion |
| 10 | aerobic bent leg(s) jump |
| 11 | aerobic pike jump |
| 12 | aerobic straddle jump |
| 13 | aerobic split jump |
| 14 | aerobic scissors leap |
| 15 | aerobic kick jump |
| 16 | aerobic off axis jump |
| 17 | aerobic butterfly jump |
| 18 | aerobic split |
| 19 | aerobic turn |
| 20 | aerobic balance turn |
| 21 | volleyball serve |
| 22 | volleyball block |
| 23 | volleyball first pass |
| 24 | volleyball defend |
| 25 | volleyball protect |
| 26 | volleyball second pass |
| 27 | volleyball adjust |
| 28 | volleyball save |
| 29 | volleyball second attack |
| 30 | volleyball spike |
| 31 | volleyball dink |
| 32 | volleyball no offensive attack |
| 33 | football shoot |
| 34 | football long pass |
| 35 | football short pass |
| 36 | football through pass |
| 37 | football cross |
| 38 | football dribble |
| 39 | football trap |
| 40 | football throw |
| 41 | football diving |
| 42 | football tackle |
| 43 | football steal |
| 44 | football clearance |
| 45 | football block |
| 46 | football press |
| 47 | football aerial duels |
| 48 | basketball pass |
| 49 | basketball drive |
| 50 | basketball dribble |
| 51 | basketball 3-point shot |
| 52 | basketball 2-point shot |
| 53 | basketball free throw |
| 54 | basketball block |
| 55 | basketball offensive rebound |
| 56 | basketball defensive rebound |
| 57 | basketball pass steal |
| 58 | basketball dribble steal |
| 59 | basketball interfere shot |
| 60 | basketball pick-and-roll defensive |
| 61 | basketball sag |
| 62 | basketball screen |
| 63 | basketball pass-inbound |
| 64 | basketball save |
| 65 | basketball jump ball |
</details>
### Data Splits
| |train |validation| test |
|-------------|------:|---------:|------:|
|# of tubes |28514 |10116 | - |
*GT for test split is not provided. Please wait for the new competition to start. Information will be updated in [dataset homepage](https://deeperaction.github.io/datasets/multisports.html).*
## Dataset Creation
### Curation Rationale
Spatio-temporal action detection is an important and challenging problem in video understanding. Previous action detection benchmarks are limited in aspects of small numbers of instances in a trimmed video or low-level atomic actions.
### Source Data
#### Initial Data Collection and Normalization
> After choosing the four sports, we search for their competition videos by querying the name of sports like volleyball and the name of competition levels like Olympics and World Cup on YouTube, and then download videos from top search results. For each video, we only select high-resolution, e.g. 720P or 1080P, competition records and then manually cut them into clips of minutes, with less shot changes in each clip and to be more suitable for action detection.
#### Who are the source language producers?
The annotators of action categories and temporal boundaries are professional athletes of the corresponding sports. Please refer to [the paper](https://arxiv.org/abs/2105.07404) for more information.
### Annotations
#### Annotation process
1. (FIRST STAGE) A team of professional athletes generates records of the action label, the starting and ending frame, and the person box in the starting frame, which ensures the efficiency, accuracy and consistency of our annotation results.
2. At least one annotator with domain knowledge double-checks the annotations, corrects wrong or inaccurate ones, and adds missing annotations.
3. (SECOND STAGE) With the help of the FCOT tracking algorithm, a team of crowd-sourced annotators adjusts the bounding boxes of the tracking results at each frame for each record.
4. Each instance is double-checked by playing it at 5 fps, and inaccurate bounding boxes are manually corrected.
#### Who are the annotators?
For the first stage, annotators are professional athletes. For the second stage, annotators are common volunteers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Authors of [this paper](https://arxiv.org/abs/2105.07404)
- Yixuan Li
- Lei Chen
- Runyu He
- Zhenzhi Wang
- Gangshan Wu
- Limin Wang
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
### Citation Information
If you find this dataset useful, please cite as
```
@InProceedings{Li_2021_ICCV,
author = {Li, Yixuan and Chen, Lei and He, Runyu and Wang, Zhenzhi and Wu, Gangshan and Wang, Limin},
title = {MultiSports: A Multi-Person Video Dataset of Spatio-Temporally Localized Sports Actions},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {13536-13545}
}
```
### Contributions
Thanks to [@Judie1999](https://github.com/Judie1999) for adding this dataset. | MCG-NJU/MultiSports | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:other",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"video",
"action detection",
"spatial-temporal action localization",
"arxiv:2105.07404",
"region:us"
]
| 2022-12-06T08:32:53+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["image-classification", "object-detection", "other"], "task_ids": ["multi-class-image-classification"], "pretty_name": "MultiSports", "tags": ["video", "action detection", "spatial-temporal action localization"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License", "extra_gated_fields": {"I agree to use this dataset for non-commerical use ONLY": "checkbox"}} | 2022-12-13T07:47:16+00:00 |
d4dc2845e2a15fbb32b480c5881ec724b81a6705 | Over 20,000 512x512 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
```
x_res = 512
y_res = 512
sample_rate = 22050
n_fft = 2048
hop_length = 512
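# Hedged sketch (an assumption, not taken from the repo): these values slot into
# librosa's mel spectrogram call, with y_res as the number of mel bins:
#   import librosa
#   y, _ = librosa.load("track.wav", sr=sample_rate)
#   S = librosa.feature.melspectrogram(y=y, sr=sample_rate, n_fft=n_fft,
#                                      hop_length=hop_length, n_mels=y_res)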
``` | teticio/audio-diffusion-512 | [
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"audio",
"spectrograms",
"region:us"
]
| 2022-12-06T09:26:24+00:00 | {"size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of music", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 1903861364.293, "num_examples": 10663}], "download_size": 1903696036, "dataset_size": 1903861364.293}, "tags": ["audio", "spectrograms"]} | 2023-06-19T19:34:16+00:00 |
cd0134d435c080bb352b8b352a799ff35007dbb9 | # Dataset Card for "librispeech5k_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech5k_train | [
"region:us"
]
| 2022-12-06T10:51:00+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train.360", "num_bytes": 6796635145.0, "num_examples": 5000}], "download_size": 3988908181, "dataset_size": 6796635145.0}} | 2022-12-06T11:07:36+00:00 |
308e97469d4cc58caaef04f110ebbd65dce628fc | # Dataset Card for "uber-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Using Python's Beautiful Soup library and the Scrapy framework, the date, star rating, and comment were scraped from all reviews from 2013–2019.
### Languages
English
### Citation Information
https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
https://www.sitejabber.com/reviews/uber.com
https://www.consumeraffairs.com/travel/uber.html
https://www.kaggle.com/purvank/uber-rider-reviews-dataset
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
| argilla/uber-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
]
| 2022-12-06T11:47:18+00:00 | {"language": ["en"], "license": ["unknown"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2761597, "num_examples": 2347}], "download_size": 1691346, "dataset_size": 2761597}} | 2022-12-06T12:00:28+00:00 |
d42d44b7dea25b643c58fc02fc19f5b81ee1d372 |
This dataset contains NBA games from 2019 to 2022.
| KDAM1/BasketballGames | [
"task_categories:other",
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
]
| 2022-12-06T11:56:33+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "BasketballGames", "tags": []} | 2022-12-06T12:11:11+00:00 |
d374aa2b80b8f8aade881b5bf0dcb40284a841de | Magnarmonteh/XRC213 | [
"license:openrail",
"region:us"
]
| 2022-12-06T12:31:05+00:00 | {"license": "openrail"} | 2022-12-06T12:32:08+00:00 |
|
f57199dde555a1e4858be6fdb307dc0d5060761f | # Dataset Card for "tripadvisor-hotel-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews
- **Paper:** https://zenodo.org/record/1219899
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Hotels play a crucial role in traveling, and with increased access to information, new ways of selecting the best ones have emerged.
With this dataset, consisting of 20k reviews crawled from Tripadvisor, you can explore what makes a great hotel and maybe even use this model in your travels!
Ratings are on a scale from 1 to 5.
### Languages
English
### Citation Information
If you use this dataset in your research, please credit the authors.
Citation: Alam, M. H., Ryu, W.-J., Lee, S., 2016. Joint multi-grain topic sentiment: modeling semantic aspects for online reviews. Information Sciences 339, 206–223.
License: CC BY-NC 4.0
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/tripadvisor-hotel-reviews | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2022-12-06T13:04:42+00:00 | {"language": ["en"], "license": ["cc-by-nc-4.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 31840239, "num_examples": 20491}], "download_size": 19678149, "dataset_size": 31840239}} | 2022-12-07T07:10:56+00:00 |
f88caaaae5963fd57542ed0a0b80eff469cbf9f5 | # Dataset Card for "twitter-genderbias"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/kevinmorgado/gender-bias-spanish
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
This dataset contains more than 1,900 Spanish tweets labeled as either biased or non-biased. It was made for a hackathon aimed at reducing gender bias on the internet.
- contents: Text
- label:
  - biased
  - non-biased
### Languages
Spanish
### Citation Information
https://www.kaggle.com/datasets/kevinmorgado/gender-bias-spanish
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/twitter-genderbias | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-analysis",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
]
| 2022-12-06T13:17:03+00:00 | {"language": ["es"], "license": ["unknown"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "sentiment-analysis"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 573508, "num_examples": 1914}], "download_size": 373847, "dataset_size": 573508}} | 2022-12-06T16:21:21+00:00 |
4d56aff3af34b027cace01c0dcc3b1f9445872f4 | # Dataset Card for "twitter-coronavirus"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Perform text classification on the data. The tweets were pulled from Twitter and then manually tagged.
The names and usernames have been replaced with codes to avoid any privacy concerns.
Columns:
1) Location
2) Tweet At
3) Original Tweet
4) Label
- Extremely Negative
- Negative
- Neutral
- Positive
- Extremely Positive
### Languages
English
### Citation Information
https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/twitter-coronavirus | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-analysis",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
]
| 2022-12-06T13:54:07+00:00 | {"language": ["en"], "license": ["unknown"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "sentiment-analysis"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "location", "dtype": "string"}, {"name": "screen_name", "dtype": "int64"}, {"name": "split", "dtype": "string"}, {"name": "user_name", "dtype": "int64"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 25394534, "num_examples": 44955}], "download_size": 15712627, "dataset_size": 25394534}} | 2022-12-06T16:20:31+00:00 |
5f7b5b3276e087b395a0157e5aa8f37ff8679f62 | # Dataset Card for "scalableMLDL1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marvmk/scalableMLDL1 | [
"region:us"
]
| 2022-12-06T14:03:14+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5726523552, "num_examples": 5962}, {"name": "test", "num_bytes": 2546311152, "num_examples": 2651}], "download_size": 1397383253, "dataset_size": 8272834704}} | 2022-12-06T14:05:51+00:00 |
9c42976fd43cd0dd562b3627e7f2b43419181455 | # Dataset Card for "librispeech_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech_validation | [
"region:us"
]
| 2022-12-06T14:19:25+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "validation", "num_bytes": 3218271771.125, "num_examples": 2703}], "download_size": 1286700444, "dataset_size": 3218271771.125}} | 2022-12-06T14:28:22+00:00 |
08ea56d47b74bc85ef202198f1f896d4a18561f1 | # Dataset Card for "speech2emotion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wofeishenling/speech2emotion | [
"region:us"
]
| 2022-12-06T14:31:10+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neu", "1": "hap", "2": "ang", "3": "sad"}}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "session1", "num_bytes": 164498926.375, "num_examples": 1085}, {"name": "session2", "num_bytes": 153414523.125, "num_examples": 1023}, {"name": "session3", "num_bytes": 163876335.125, "num_examples": 1151}, {"name": "session4", "num_bytes": 146259809.125, "num_examples": 1031}, {"name": "session5", "num_bytes": 178359204.875, "num_examples": 1241}], "download_size": 788677878, "dataset_size": 806408798.625}} | 2023-02-15T07:42:52+00:00 |
390eadd23da82efb8905eb877dda85a73c8a6d0d | # Dataset Card for "leicester_loaded_annotations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| davanstrien/leicester_loaded_annotations | [
"region:us"
]
| 2022-12-06T14:55:00+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "choice", "dtype": "string"}, {"name": "annotator", "dtype": "int64"}, {"name": "annotation_id", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "lead_time", "dtype": "float64"}, {"name": "image_url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "loaded_images", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "local_desc", "1": "county_desc", "2": "major_residences_index", "3": "advert", "4": "county_trades", "5": "county_residence_alpha", "6": "index_general_or_place", "7": "title_page", "8": "adverts_index_alpha", "9": "adverts_index_business_cat", "10": "prefatory_text"}}}}], "splits": [{"name": "train", "num_bytes": 1096673288.0, "num_examples": 525}], "download_size": 1064406432, "dataset_size": 1096673288.0}} | 2022-12-06T20:17:19+00:00 |
ef876f5339eb2a4a0343bd875bb14c2d3b3e6d73 | bazzhangz/sumdataset | [
"license:apache-2.0",
"region:us"
]
| 2022-12-06T15:07:20+00:00 | {"license": "apache-2.0"} | 2022-12-06T15:08:43+00:00 |
|
04a26f61e5df35b37f763582d1157b4c763470ea | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: knkarthick/MEETING_SUMMARY
* Dataset: bazzhangz/sumdataset
* Config: bazzhangz--sumdataset
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bazzhangz](https://huggingface.co/bazzhangz) for evaluating this model. | autoevaluate/autoeval-eval-bazzhangz__sumdataset-bazzhangz__sumdataset-18687b-2355774138 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-06T15:26:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["bazzhangz/sumdataset"], "eval_info": {"task": "summarization", "model": "knkarthick/MEETING_SUMMARY", "metrics": [], "dataset_name": "bazzhangz/sumdataset", "dataset_config": "bazzhangz--sumdataset", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-06T15:57:43+00:00 |
4111ff2c9a6086993cbadf8e57cb0c4178494282 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-cnn_dailymail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AkankshaK](https://huggingface.co/AkankshaK) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-5c4aa4-2355874139 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-06T15:26:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "google/pegasus-cnn_dailymail", "metrics": ["meteor"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-12-06T17:12:47+00:00 |
dd64a3ea4fda4d360487cf28888ffeb65b91fd32 | # Dataset Card for "leicester_loaded_annotations_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/leicester_loaded_annotations_binary | [
"region:us"
]
| 2022-12-06T15:57:06+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "other", "1": "county_trades"}}}}], "splits": [{"name": "train", "num_bytes": 1090143420.0, "num_examples": 525}], "download_size": 0, "dataset_size": 1090143420.0}} | 2022-12-07T13:59:43+00:00 |
6329d8a3cdf2e9c8b1bb410184b72282dc1cf414 | lukablaskovic/student-enquiries-cro | [
"license:mit",
"region:us"
]
| 2022-12-06T17:06:53+00:00 | {"license": "mit"} | 2022-12-06T23:30:09+00:00 |
|
1aa02586aa87aa4123fc625a03b6a8ecfb45aac8 | # Dataset Card for "librispeech_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech_test | [
"region:us"
]
| 2022-12-06T17:15:36+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 3140310938.5, "num_examples": 2620}], "download_size": 1297324022, "dataset_size": 3140310938.5}} | 2022-12-06T17:22:31+00:00 |
831f5bc08aeba7dce79bcf35af78eb8afc933f3d | Hayoung/ko_en_dataset | [
"region:us"
]
| 2022-12-06T18:08:07+00:00 | {} | 2022-12-06T18:08:46+00:00 |
|
f2dd0c74d63d03f3932d0f9b85bc827ec2779d99 | # ImageNet-1k (Test set only)
Original paper: [ImageNet Large Scale Visual Recognition Challenge](https://arxiv.org/abs/1409.0575)
Homepage: https://www.image-net.org/
Bibtex:
```
@article{ILSVRC15,
Author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
Title = {{ImageNet Large Scale Visual Recognition Challenge}},
Year = {2015},
journal = {International Journal of Computer Vision (IJCV)},
doi = {10.1007/s11263-015-0816-y},
volume={115},
number={3},
pages={211-252}
}
``` | djghosh/wds_imagenet1k_test | [
"arxiv:1409.0575",
"region:us"
]
| 2022-12-06T18:43:25+00:00 | {} | 2022-12-12T21:01:44+00:00 |
6e8a709e042f37a179671e87f72bc2b140488e72 | sorenlarson/test | [
"license:openrail",
"region:us"
]
| 2022-12-06T18:57:21+00:00 | {"license": "openrail"} | 2022-12-06T18:57:21+00:00 |
|
5e2ef70db24ea5ac74fd863ba2a416a2dc698379 |
# Araina Text Corpus
Text corpus in [Aranese variety of Gascon dialect of Occitan](https://en.wikipedia.org/wiki/Aranese_dialect).
## Corpora
- `_nogues`: Literary texts translated by Antòni Nogués. Sourced from [institutestudisaranesi.cat](http://www.institutestudisaranesi.cat/colleccion-antoni-nogues/#1541013646532-338ed5f5-a3aa)
- `_suils`: Language educational material by Jordi Suïls Subirà
- `_conselh`: Administrative proceedings from Conselh Generau d'Aran
## Project Araina
This corpus was prepared as part of [Project Araina](https://www.projecte-araina.org) with support from Culture Department of the Catalan autonomous government.
Aquest corpus s'ha elaborat en el marc del [Projecte Araina](https://www.projecte-araina.org) amb el suport del Departament de Cultura de la Generalitat de Catalunya.
<img src="https://github.com/collectivat/cmusphinx-models/raw/master/img/logo_generalitat.png" width="400"/>
| collectivat/araina-text-corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:oc",
"license:cc0-1.0",
"region:us"
]
| 2022-12-06T18:59:41+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["oc"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]} | 2022-12-30T15:42:30+00:00 |
070a7c8bdf18d2e9c2f2552f593a18673548f66c | Custom Marist QA dataset to train Kevin - version 12/01/22 | Bonorinoa/kevin_train_12_6 | [
"region:us"
]
| 2022-12-06T19:03:46+00:00 | {} | 2022-12-06T19:05:08+00:00 |
eab24cf63fc482f610795ecc93c6e5bc40317a68 | # Dataset Card for "librispeech_augm_validation-tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech_augm_validation-tiny | [
"region:us"
]
| 2022-12-06T19:11:18+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "validation", "num_bytes": 3218271771.125, "num_examples": 2703}], "download_size": 1320733851, "dataset_size": 3218271771.125}} | 2022-12-06T19:23:38+00:00 |
790e193cda0dcabac473690072dea7fcfb487553 | ksang/steamreviews | [
"region:us"
]
| 2022-12-06T19:23:00+00:00 | {} | 2022-12-06T19:28:41+00:00 |
|
ee99e225418ab025716b664a25f535cee4bee363 | # Dataset Card for "starter2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Generated by a fine-tuned GPT-3 Curie model between 2020 and 2022 on 10,000 conversation starters.
Generation pipeline (sketched below):
- generate 3 candidate conversation starters
- classify each candidate from 0 to 1 with a fine-tuned Curie model
- take the top-ranked candidate
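A minimal sketch of this pipeline, assuming the legacy `openai` Python `Completion` API; the fine-tuned model names and prompt formats below are hypothetical placeholders, not the actual Langame models:

```python
import openai  # assumes the legacy openai-python Completion API
# openai.api_key = "..."

GEN_MODEL = "curie:ft-generator"    # hypothetical fine-tuned generator name
CLF_MODEL = "curie:ft-classifier"   # hypothetical fine-tuned classifier name

def generate_starter(topics):
    # 1) generate 3 candidate conversation starters
    completion = openai.Completion.create(
        model=GEN_MODEL,
        prompt=",".join(topics) + " ###",  # hypothetical prompt format
        n=3,
        max_tokens=60,
        stop=["\n"],
    )
    candidates = [c.text.strip() for c in completion.choices]

    # 2) classify each candidate (0 = bad, 1 = good)
    def score(text):
        result = openai.Completion.create(
            model=CLF_MODEL,
            prompt=text + " ->",           # hypothetical prompt format
            max_tokens=1,
            temperature=0,
        )
        return 1 if result.choices[0].text.strip() == "1" else 0

    # 3) take the top-scoring candidate
    return max(candidates, key=score)
```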
Humans reviewed these conversation starters afterwards, fixing or deleting them as needed. | Langame/starter2 | [
"region:us"
]
| 2022-12-06T19:26:51+00:00 | {"viewer": true, "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "state", "dtype": "string"}, {"name": "apiCompletionModel", "dtype": "string"}, {"name": "createdAt", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "completionType", "dtype": "float64"}, {"name": "apiClassificationModel", "dtype": "string"}, {"name": "fixGrammar", "dtype": "bool"}, {"name": "shard", "dtype": "float64"}, {"name": "parallelCompletions", "dtype": "float64"}, {"name": "disabled", "dtype": "bool"}, {"name": "brokenGrammar", "dtype": "string"}, {"name": "profanityThreshold", "dtype": "float64"}, {"name": "tweet", "dtype": "bool"}, {"name": "conversationStarters", "list": [{"name": "aiTopics", "sequence": "string"}, {"name": "broken_grammar", "dtype": "string"}, {"name": "classification", "dtype": "string"}, {"name": "conversation_starter", "dtype": "string"}]}, {"name": "topics", "sequence": "string"}, {"name": "embedding", "sequence": "float64"}, {"name": "error", "dtype": "string"}, {"name": "developer_message", "dtype": "string"}, {"name": "aiTopics", "sequence": "string"}, {"name": "tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 8891417, "num_examples": 3072}], "download_size": 6983130, "dataset_size": 8891417}} | 2023-02-23T08:40:29+00:00 |
ede3b50f99f5a4bdf343ef3a0bbea3482198d790 |
# Pastel Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/pastel_style/resolve/main/pastel_style.jpg"/>
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"pastel_style"```
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(pastel_style:0.8)"```.
I trained the embedding for two epochs, up to 6000 steps.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/pastel_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-06T19:33:11+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/pastel_style/resolve/main/pastel_style.jpg", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-06T19:38:55+00:00 |
0ab561922c756a130ecbf817c8442cec136884b0 | alx-ai/nogglesonly | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-06T19:33:31+00:00 | {"license": "cc0-1.0"} | 2022-12-06T19:34:03+00:00 |
|
d00c469f1dc9f5b8f968b95321eedeebe6bd35ea | # Dataset Card for "gal_yair_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_166000_1664x832_fixed | [
"region:us"
]
| 2022-12-06T19:49:12+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": {"class_label": {"names": {"0": "ad", "1": "ae", "2": "al", "3": "aq", "4": "ar", "5": "au", "6": "bd", "7": "be", "8": "bg", "9": "bm", "10": "bo", "11": "br", "12": "bt", "13": "bw", "14": "ca", "15": "ch", "16": "cl", "17": "co", "18": "cz", "19": "de", "20": "dk", "21": "ec", "22": "ee", "23": "es", "24": "fi", "25": "fr", "26": "gb", "27": "gh", "28": "gl", "29": "gr", "30": "gt", "31": "hk", "32": "hr", "33": "hu", "34": "id", "35": "ie", "36": "il", "37": "is", "38": "it", "39": "ix", "40": "jp", "41": "kg", "42": "kh", "43": "kr", "44": "la", "45": "lk", "46": "ls", "47": "lt", "48": "lu", "49": "lv", "50": "me", "51": "mg", "52": "mk", "53": "mn", "54": "mo", "55": "mt", "56": "mx", "57": "my", "58": "nl", "59": "no", "60": "nz", "61": "pe", "62": "ph", "63": "pl", "64": "pt", "65": "ro", "66": "rs", "67": "ru", "68": "se", "69": "sg", "70": "si", "71": "sk", "72": "sn", "73": "sz", "74": "th", "75": "tn", "76": "tr", "77": "tw", "78": "ua", "79": "ug", "80": "us", "81": "uy", "82": "za"}}}}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 8079449515.0, "num_examples": 166000}], "download_size": 22205924633, "dataset_size": 8079449515.0}} | 2022-12-06T20:36:38+00:00 |
63b1d0965e3a941703e33272b61ba508411376d0 | # Dataset Card for "scalableMLDL2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marvmk/scalableMLDL2 | [
"region:us"
]
| 2022-12-06T20:53:05+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5726523552, "num_examples": 5962}, {"name": "test", "num_bytes": 2546311152, "num_examples": 2651}], "download_size": 1397392104, "dataset_size": 8272834704}} | 2022-12-06T22:08:42+00:00 |
6e48c15a2ea7559cac10e5c5a453a2ef5913577e |
# Splash Art Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/splash_art/resolve/main/splashart.jpg"/>
## Usage
I uploaded two different versions. Both embeddings create splash-art images: splash_art2 is more consistent, while splash_art generates more generic images. Enjoy!
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"splash_art"``` or ```"splash_art2"```, depending on which version you use.
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(splash_art:0.8)"``` or ```"(splash_art2:0.8)"```.
I trained the embedding for two epochs, up to 6800 steps.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/splash_art | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
]
| 2022-12-06T20:55:26+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/splash_art/resolve/main/splashart.jpg", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-12-06T21:01:28+00:00 |
4e7b9d3eb67b6ce54ef37ea23f276ebe21f63aa2 | HuggingFaceM4/TextCaps | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-06T20:56:12+00:00 | {"license": "cc-by-4.0"} | 2022-12-09T01:38:32+00:00 |
|
36ab7c5086b4abfb1f13576b048abac0ceb4fea5 | # Dataset Card for "cantonese_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tilos/cantonese_processed | [
"region:us"
]
| 2022-12-06T21:53:29+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 18830225280, "num_examples": 19605}], "download_size": 1276665418, "dataset_size": 18830225280}} | 2022-12-06T22:26:24+00:00 |
6dca4f1b0b8758a58bc8a4eec54ff243843ffd95 | lukablaskovic/student-enquiries-cro_train | [
"license:mit",
"region:us"
]
| 2022-12-06T22:11:52+00:00 | {"license": "mit"} | 2022-12-06T22:12:36+00:00 |
|
e560d7c1989174b135a17e69e68889dbf38e0628 | # Dataset Card for "image50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Steven0633/image50 | [
"region:us"
]
| 2022-12-06T22:19:21+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "arrange chairs", "1": "arrange flowers", "2": "bake potato", "3": "beat eggs", "4": "bend knee", "5": "bend tree", "6": "bind hair", "7": "bite apple", "8": "block door", "9": "block window", "10": "boil egg", "11": "boil potato", "12": "break bowl", "13": "break cup", "14": "break door", "15": "break egg", "16": "break glass", "17": "break window", "18": "burn book", "19": "burn paper", "20": "burn tree", "21": "burn wood", "22": "burst balloon", "23": "burst door", "24": "carry bag", "25": "carry book", "26": "carry umbrella", "27": "chop carrot", "28": "chop meat", "29": "chop onion", "30": "chop tree", "31": "chop wood", "32": "close book", "33": "close cabinet", "34": "close door", "35": "close drawer", "36": "close window", "37": "coil rope", "38": "cook egg", "39": "cook meat", "40": "cook onion", "41": "cook potato", "42": "crack bottle", "43": "crack egg", "44": "crack glass", "45": "crack window", "46": "crash car", "47": "crop hair", "48": "cut apple", "49": "cut meat", "50": "cut onion", "51": "cut potato", "52": "cut tree", "53": "cut wood", "54": "fasten door", "55": "fasten window", "56": "fold paper", "57": "fry egg", "58": "fry meat", "59": "fry potato", "60": "grate carrot", "61": "grate potato", "62": "grind meat", "63": "hang bag", "64": "hang shirt", "65": "ignite paper", "66": "ignite wood", "67": "insert key", "68": "kick door", "69": "kick football", "70": "knot rope", "71": "label bottle", "72": "label box", "73": "lock cabinet", "74": "lock door", "75": "lock drawer", "76": "lock window", "77": "mash potato", "78": "mix eggs", "79": "open bottle", "80": "open box", "81": "open cabinet", "82": "open door", "83": "open drawer", "84": "open umbrella", "85": "open window", "86": "park car", "87": "peel apple", "88": "peel banana", "89": "peel carrot", "90": "peel orange", "91": "peel potato", "92": "pile books", "93": "pile boxes", "94": "pile wood", "95": "pitch baseball", "96": "ride bicycle", "97": "rip paper", "98": "roll paper", "99": "roll umbrella", "100": "saw tree", "101": "saw wood", "102": "scratch car", "103": "scratch knee", "104": "shave hair", "105": "shut door", "106": "shut window", "107": "skin knee", "108": "slice apple", "109": "slice meat", "110": "slice onion", "111": "slice potato", "112": "smash door", "113": "smash window", "114": "soak hair", "115": "soak shirt", "116": "spill coffee", "117": "split tree", "118": "split wood", "119": "squeeze bottle", "120": "squeeze orange", "121": "stain paper", "122": "stain shirt", "123": "stir coffee", "124": "stir soup", "125": "strip tree", "126": "tear book", "127": "tear paper", "128": "tear shirt", "129": "throw apple", "130": "throw baseball", "131": "throw football", "132": "throw frisbee", "133": "tie shoe", "134": "trim hair", "135": "trim tree", "136": "twist hair", "137": "twist rope", "138": "wrap book", "139": "wrap box"}}}}], "splits": [{"name": "train", "num_bytes": 191648684.53815603, "num_examples": 6126}, {"name": "test", "num_bytes": 20857643.465843983, "num_examples": 681}], "download_size": 213918792, "dataset_size": 212506328.004}} | 2022-12-06T22:38:39+00:00 |
d1b52887a64651e934e908daba767954f8299ac0 | # Dataset Card for "595Gao"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | epigone707/595Gao | [
"region:us"
]
| 2022-12-06T22:22:59+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "arrange+chairs", "1": "arrange+flowers", "2": "bake+potato", "3": "beat+eggs", "4": "bend+knee", "5": "bend+tree", "6": "bind+hair", "7": "bite+apple", "8": "block+door", "9": "block+window", "10": "boil+egg", "11": "boil+potato", "12": "break+bowl", "13": "break+cup", "14": "break+door", "15": "break+egg", "16": "break+glass", "17": "break+window", "18": "burn+book", "19": "burn+paper", "20": "burn+tree", "21": "burn+wood", "22": "burst+balloon", "23": "burst+door", "24": "carry+bag", "25": "carry+book", "26": "carry+umbrella", "27": "chop+carrot", "28": "chop+meat", "29": "chop+onion", "30": "chop+tree", "31": "chop+wood", "32": "close+book", "33": "close+cabinet", "34": "close+door", "35": "close+drawer", "36": "close+window", "37": "coil+rope", "38": "cook+egg", "39": "cook+meat", "40": "cook+onion", "41": "cook+potato", "42": "crack+bottle", "43": "crack+egg", "44": "crack+glass", "45": "crack+window", "46": "crash+car", "47": "crop+hair", "48": "cut+apple", "49": "cut+meat", "50": "cut+onion", "51": "cut+potato", "52": "cut+tree", "53": "cut+wood", "54": "fasten+door", "55": "fasten+window", "56": "fold+paper", "57": "fry+egg", "58": "fry+meat", "59": "fry+potato", "60": "grate+carrot", "61": "grate+potato", "62": "grind+meat", "63": "hang+bag", "64": "hang+shirt", "65": "ignite+paper", "66": "ignite+wood", "67": "insert+key", "68": "kick+door", "69": "kick+football", "70": "knot+rope", "71": "label+bottle", "72": "label+box", "73": "lock+cabinet", "74": "lock+door", "75": "lock+drawer", "76": "lock+window", "77": "mash+potato", "78": "mix+eggs", "79": "open+bottle", "80": "open+box", "81": "open+cabinet", "82": "open+door", "83": "open+drawer", "84": "open+umbrella", "85": "open+window", "86": "park+car", "87": "peel+apple", "88": "peel+banana", "89": "peel+carrot", "90": "peel+orange", "91": "peel+potato", "92": "pile+books", "93": "pile+boxes", "94": "pile+wood", "95": "pitch+baseball", "96": "ride+bicycle", "97": "rip+paper", "98": "roll+paper", "99": "roll+umbrella", "100": "saw+tree", "101": "saw+wood", "102": "scratch+car", "103": "scratch+knee", "104": "shave+hair", "105": "shut+door", "106": "shut+window", "107": "skin+knee", "108": "slice+apple", "109": "slice+meat", "110": "slice+onion", "111": "slice+potato", "112": "smash+door", "113": "smash+window", "114": "soak+hair", "115": "soak+shirt", "116": "spill+coffee", "117": "split+tree", "118": "split+wood", "119": "squeeze+bottle", "120": "squeeze+orange", "121": "stain+paper", "122": "stain+shirt", "123": "stir+coffee", "124": "stir+soup", "125": "strip+tree", "126": "tear+book", "127": "tear+paper", "128": "tear+shirt", "129": "throw+apple", "130": "throw+baseball", "131": "throw+football", "132": "throw+frisbee", "133": "tie+shoe", "134": "trim+hair", "135": "trim+tree", "136": "twist+hair", "137": "twist+rope", "138": "wrap+book", "139": "wrap+box"}}}}], "splits": [{"name": "train", "num_bytes": 165337731.7298711, "num_examples": 1843}, {"name": "test", "num_bytes": 20775526.807128906, "num_examples": 205}], "download_size": 187898542, "dataset_size": 186113258.537}} | 2022-12-07T00:40:07+00:00 |
26ed60040aa10c06960b90f17322a439829efa91 | kalisia/TongaASR_Space_Examples | [
"license:apache-2.0",
"region:us"
]
| 2022-12-06T22:46:07+00:00 | {"license": "apache-2.0"} | 2022-12-07T16:31:02+00:00 |
|
9767377ac7818d12838decc851ac906e09b7fac0 | iceicelandic/plowkeep | [
"license:apache-2.0",
"region:us"
]
| 2022-12-06T22:51:57+00:00 | {"license": "apache-2.0"} | 2022-12-06T22:51:57+00:00 |
|
d590bb4a2b1c16c734401837fced22716efdcdaf | # Dataset Card for "tagesschau"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tillschwoerer/tagesschau | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:de",
"newspapers",
"germany",
"2022",
"region:us"
]
| 2022-12-06T23:08:19+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["de"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "tagesschau", "tags": ["newspapers", "germany", "2022"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "amerika", "1": "asien", "2": "finanzen", "3": "innenpolitik", "4": "sportschau", "5": "unternehmen", "6": "verbraucher"}}}}], "splits": [{"name": "train", "num_bytes": 4400114, "num_examples": 1200}, {"name": "validation", "num_bytes": 555716, "num_examples": 150}, {"name": "test", "num_bytes": 555716, "num_examples": 150}], "download_size": 3412287, "dataset_size": 5511546}} | 2022-12-06T23:21:09+00:00 |
3e48a07fe11096665a7b81e44148097081a424e6 | VeroSpacial/video_tennis | [
"license:openrail",
"region:us"
]
| 2022-12-06T23:13:30+00:00 | {"license": "openrail"} | 2022-12-06T23:15:13+00:00 |
|
4aa60b5519c07adf32aec7f2417b73de0238c351 | # Dataset Card for "sheet_music_ede2110"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EmileEsmaili/sheet_music_ede2110 | [
"region:us"
]
| 2022-12-07T00:03:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2229356112.491, "num_examples": 9219}], "download_size": 1211789844, "dataset_size": 2229356112.491}} | 2022-12-09T06:41:57+00:00 |
49ec483e772970fbf6f46919372c4e9b6a60bcee | # Dataset Card for "lyoko-ultimate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Madiator2011/lyoko-ultimate | [
"region:us"
]
| 2022-12-07T00:14:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24808769.89, "num_examples": 1435}], "download_size": 24242906, "dataset_size": 24808769.89}} | 2022-12-07T00:20:36+00:00 |
4299deb1ab6aa43c277fb79d8da1c3e2e52a144b | # Dataset Card for "news_corpus_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hieule/news_corpus_v2 | [
"region:us"
]
| 2022-12-07T04:59:58+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "sapo", "dtype": "string"}, {"name": "cates", "sequence": "string"}, {"name": "publish", "dtype": "timestamp[us]"}, {"name": "text_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3228940922, "num_examples": 1000001}], "download_size": 1616424455, "dataset_size": 3228940922}} | 2022-12-07T07:27:40+00:00 |
96123958828d02684be0db371e5876c0bbe0f2de | # Dataset Card for "news-summary"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset?select=True.csv
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Officially it was supposed to be used for classification, but can you use this dataset to summarize news articles?
### Languages
English
### Citation Information
Acknowledgements
Ahmed H, Traore I, Saad S. “Detecting opinion spams and fake news using text classification”, Journal of Security and Privacy, Volume 1, Issue 1, Wiley, January/February 2018.
Ahmed H, Traore I, Saad S. (2017) “Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques”. In: Traore I., Woungang I., Awad A. (eds) Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments. ISDDC 2017. Lecture Notes in Computer Science, vol 10618. Springer, Cham (pp. 127-138).
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/news-summary | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2022-12-07T05:39:38+00:00 | {"language": ["en"], "license": ["cc-by-nc-4.0"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "prediction", "list": [{"name": "score", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 2563132.0446374374, "num_examples": 1000}, {"name": "test", "num_bytes": 52331466.955362566, "num_examples": 20417}], "download_size": 33207109, "dataset_size": 54894599.0}} | 2023-03-16T09:36:12+00:00 |
9eadedec22e78a74591874b01d1472d6a7f4a02a | # AutoTrain Dataset for project: enzydg
## Dataset Description
This dataset has been automatically processed by AutoTrain for project enzydg.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"id": 155,
"feat_compound_iso_smiles": "CC1(C2CC(=O)C1(C(=O)C2)C)C",
"feat_target_sequence": "MKQLATPFQEYSQKYENIRLERDGGVLLVTVHTEGKSLVWTSTAHDELAYCFHDIACDRENKVVILTGTGPSFCNEIDFTSFNLGTPHDWDEIIFEGQRLLNNLLSIEVPVIAAVNGPVTNAPEIPVMSDIVLAAESATFQDGPHFPSGIVPGDGAHVVWPHVLGSNRGRYFLLTGQELDARTALDYGAVNEVLSEQELLPRAWELARGIAEKPLLARRYARKVLTRQLRRVMEADLSLGLAHEALAAIDLGMESEQ",
"target": 13.621903419494629
},
{
"id": 180,
"feat_compound_iso_smiles": "C1=CC(=CC=C1C2=COC3=C(C2=O)C=CC(=C3)O[C@H]4[C@@H]([C@H]([C@@H]([C@H](O4)CO)O)O)O)O",
"feat_target_sequence": "MAFPAGFGWAAATAAYQVEGGWDADGKGPCVWDTFTHQGGERVFKNQTGDVACGSYTLWEEDLKCIKQLGLTHYRFSLSWSRLLPDGTTGFINQKGIDYYNKIIDDLLKNGVTPIVTLYHFDLPQTLEDQGGWLSEAIIESFDKYAQFCFSTFGDRVKQWITINEANVLSVMSYDLGMFPPGIPHFGTGGYQAAHNLIKAHARSWHSYDSLFRKKQKGMVSLSLFAVWLEPADPNSVSDQEAAKRAITFHLDLFAKPIFIDGDYPEVVKSQIASMSQKQGYPSSRLPEFTEEEKKMIKGTADFFAVQYYTTRLIKYQENKKGELGILQDAEIEFFPDPSWKNVDAIYVVPWGVCKLLKYIKDTYNNPVIYITENGFPQSDPAPLDDTQRWEYFRQTFQELFKAIQLDKVNLQVYCAWSLLDNFEWNQGYSSRFGLFHVDFEDPARPRVPYTSAKEYAKIIRNNGLEAHL",
"target": 17.67270851135254
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"id": "Value(dtype='int64', id=None)",
"feat_compound_iso_smiles": "Value(dtype='string', id=None)",
"feat_target_sequence": "Value(dtype='string', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 160 |
| valid | 44 |
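A minimal loading sketch with the `datasets` library, assuming this repo can be loaded directly by its Hub id and exposes the splits listed above:

```python
from datasets import load_dataset

# repo id inferred from this card's path
ds = load_dataset("Shone/autotrain-data-enzydg")

sample = ds["train"][0]
print(sample["feat_compound_iso_smiles"])   # compound SMILES string
print(sample["feat_target_sequence"][:60])  # protein sequence (truncated)
print(sample["target"])                     # float32 regression target
```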
| Shone/autotrain-data-enzydg | [
"region:us"
]
| 2022-12-07T05:53:14+00:00 | {} | 2022-12-07T05:54:50+00:00 |
2d2d471b4fcbb11b3486bbc9ffef249db55d6b15 | # Dataset Card for "news-fakenews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset?select=True.csv
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Can you use this dataset to build an algorithm that can determine whether an article is fake news or not?
### Languages
English
### Citation Information
Acknowledgements
Ahmed H, Traore I, Saad S. “Detecting opinion spams and fake news using text classification”, Journal of Security and Privacy, Volume 1, Issue 1, Wiley, January/February 2018.
Ahmed H, Traore I, Saad S. (2017) “Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques”. In: Traore I., Woungang I., Awad A. (eds) Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments. ISDDC 2017. Lecture Notes in Computer Science, vol 10618. Springer, Cham (pp. 127-138).
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/news-fakenews | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"region:us"
]
| 2022-12-07T06:37:24+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 227222498, "num_examples": 44898}], "download_size": 138350597, "dataset_size": 227222498}} | 2022-12-07T07:09:34+00:00 |
7e9e5c457c4a57965c113925d2d94cc885861821 | # Dataset Card for "olm-october-2022-tokenized-1024-exact-dedup-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized-1024-exact-dedup-only | [
"region:us"
]
| 2022-12-07T07:01:16+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 81122930784.0, "num_examples": 13177864}], "download_size": 21799520270, "dataset_size": 81122930784.0}} | 2022-12-07T07:49:28+00:00 |
cb7a9afd33d10e62b172dae13c115bd3af536eea | Monsterkeks/GTA | [
"license:other",
"region:us"
]
| 2022-12-07T07:07:17+00:00 | {"license": "other"} | 2022-12-07T07:09:00+00:00 |
|
2e8f1098ddeaca92b7300156a5a9395662992eda | # Dataset Card for "python-code-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dipesh/python-code-ds-mini | [
"region:us"
]
| 2022-12-07T07:07:26+00:00 | {"dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "code_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1455104.6452533905, "num_examples": 2521}, {"name": "validation", "num_bytes": 162191.35474660958, "num_examples": 281}], "download_size": 742200, "dataset_size": 1617296.0}} | 2022-12-09T23:33:30+00:00 |
58103080018cf8568802b39651be1e008765d6d6 | # Dataset Card for "olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-no-bigscience-filters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-no-bigscience-filters | [
"region:us"
]
| 2022-12-07T07:11:06+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 33670789930.82474, "num_examples": 16442332}], "download_size": 21113002013, "dataset_size": 33670789930.82474}} | 2022-12-07T07:36:23+00:00 |
15d1d222788d13a1db7f17992ad4bef5aff06dad | # Dataset Card for "Process_tested"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/Process_tested | [
"region:us"
]
| 2022-12-07T08:18:22+00:00 | {"dataset_info": {"features": [{"name": "sentence_arb_Arab", "dtype": "string"}, {"name": "sentence_eng_Latn", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 333842, "num_examples": 997}, {"name": "devtest", "num_bytes": 351455, "num_examples": 1012}], "download_size": 411360, "dataset_size": 685297}} | 2022-12-07T08:20:07+00:00 |
5fbf1af3b9112e7ec74ca4fd94bf92fd9b8abaf0 | # Dataset Card for "Process_tested_02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/Process_tested_02 | [
"region:us"
]
| 2022-12-07T08:20:24+00:00 | {"dataset_info": {"features": [{"name": "translation", "struct": [{"name": "ar", "dtype": "string"}, {"name": "en", "dtype": "string"}]}, {"name": "id", "sequence": "int64"}], "splits": [{"name": "dev", "num_bytes": 361758, "num_examples": 997}], "download_size": 199462, "dataset_size": 361758}} | 2022-12-07T08:20:28+00:00 |
0f3e7aab79bf370979764275e3d12d74d8235bc6 | # Dataset Card for "medical-domain"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Medical transcription data scraped from mtsamples.com
Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples.
This dataset contains sample medical transcriptions for various medical specialties.
### Languages
English
### Citation Information
Acknowledgements
Medical transcription data scraped from mtsamples.com
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/medical-domain | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"region:us"
]
| 2022-12-07T08:47:29+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 30903523, "num_examples": 4966}], "download_size": 14846569, "dataset_size": 30903523}} | 2022-12-07T11:57:58+00:00 |
1f2a465dcf1201ead498edc5051e35225b0c479c | # Dataset Card for "banking_sentiment_setfit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/banking_sentiment_setfit | [
"region:us"
]
| 2022-12-07T09:03:18+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 7433.25, "num_examples": 108}, {"name": "test", "num_bytes": 2477.75, "num_examples": 36}], "download_size": 8087, "dataset_size": 9911.0}} | 2022-12-07T09:08:25+00:00 |
7bb3b21e604df9388b7525812d3f723ef9e677b3 | # AutoTrain Dataset for project: boolq
## Dataset Description
This dataset has been automatically processed by AutoTrain for project boolq.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "is an abstract the same as a summary",
"question": "Abstract (summary) -- An abstract is a brief summary of a research article, thesis, review, conference proceeding, or any in-depth analysis of a particular subject and is often used to help the reader quickly ascertain the paper's purpose. When used, an abstract always appears at the beginning of a manuscript or typescript, acting as the point-of-entry for any given academic paper or patent application. Abstracting and indexing services for various academic disciplines are aimed at compiling a body of literature for that particular subject.",
"answers.text": [
"757"
],
"answers.answer_start": [
-1
],
"feat_id": null,
"feat_title": null
},
{
"context": "was the opening of jumeirah beach park in 2009",
"question": "Jumeirah Beach Hotel -- Jumeirah Beach Hotel is a hotel in Dubai, United Arab Emirates. The hotel, which opened in 1997, is operated by the Dubai-based hotelier Jumeirah. The hotel contains 598 rooms and suites, 19 beachfront villas, and 20 restaurants and bars. This wave-shaped hotel complements the sail-shaped Burj Al Arab, which is adjacent to the Jumeirah Beach Hotel.",
"answers.text": [
"2817"
],
"answers.answer_start": [
-1
],
"feat_id": null,
"feat_title": null
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
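Expressed with the `datasets` library, the schema above corresponds roughly to the following `Features` definition. This is a sketch: the flattened `answers.*` column names are copied verbatim from the card, not verified against the hosted repository.

```python
from datasets import Features, Sequence, Value

# Sketch of the schema listed above; column names (including the flattened
# "answers.*" ones) are taken from the card and are assumptions about the repo.
features = Features({
    "context": Value("string"),
    "question": Value("string"),
    "answers.text": Sequence(Value("string")),
    "answers.answer_start": Sequence(Value("int32")),
    "feat_id": Sequence(Value("string")),
    "feat_title": Sequence(Value("string")),
})
```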
### Dataset Splits
This dataset is split into train and validation sets. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 10786 |
| valid | 135411 |
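A minimal sketch of loading and checking these splits with the `datasets` library is shown below; the repository id and split names are assumptions taken from this card, so the hosted version may differ.

```python
from datasets import load_dataset

# Minimal sketch: repo id and split names are assumptions from this card.
dataset = load_dataset("kn0w1dge/BoolQTrueFalse")

print(dataset["train"].num_rows)  # 10786 according to the table above
print(dataset["valid"].num_rows)  # 135411 according to the table above

# Note: in the samples shown earlier, "context" holds the yes/no question
# while "question" holds the supporting passage.
sample = dataset["train"][0]
print(sample["context"], "->", sample["answers.text"])
```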
| kn0w1dge/BoolQTrueFalse | [
"language:en",
"doi:10.57967/hf/0175",
"region:us"
]
| 2022-12-07T09:29:37+00:00 | {"language": ["en"]} | 2022-12-07T09:34:15+00:00 |
d5578010787ddb0996fba9bc1b136ad011c590da | # Dataset Card for "Process_tested-facebook-flores"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/Process_tested-facebook-flores | [
"region:us"
]
| 2022-12-07T09:43:46+00:00 | {"dataset_info": {"features": [{"name": "translation", "struct": [{"name": "ar", "dtype": "string"}, {"name": "en", "dtype": "string"}]}, {"name": "id", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 361758, "num_examples": 997}, {"name": "test", "num_bytes": 379791, "num_examples": 1012}], "download_size": 412821, "dataset_size": 741549}} | 2022-12-07T10:09:20+00:00 |
24f3995831700f0ee85092eb40f0a8d0be57d50c | # Dataset Card for "librispeech15k_augm_train-tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | CristianaLazar/librispeech15k_augm_train-tiny | [
"region:us"
]
| 2022-12-07T09:51:40+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "chapter_id", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train.360", "num_bytes": 20473737704.0, "num_examples": 15000}], "download_size": 12376533972, "dataset_size": 20473737704.0}} | 2022-12-07T11:13:04+00:00 |
e4f7061f979a849657c669d6be9c5dec3575037c | manashxml/cp_and_none_10kdataset | [
"license:unknown",
"region:us"
]
| 2022-12-07T10:16:40+00:00 | {"license": "unknown"} | 2022-12-07T10:17:18+00:00 |
f25de9b018404f9cc708b6835a326d1c38b923a3 | # Dataset Card for "Process_tested-Shularp-Process_tested-facebook-floresarb_Arab_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/Process_tested-Shularp-Process_tested-facebook-floresarb_Arab_to_eng_Latn | [
"region:us"
]
| 2022-12-07T10:18:20+00:00 | {"dataset_info": {"features": [{"name": "translation", "struct": [{"name": "ar", "dtype": "string"}, {"name": "en", "dtype": "string"}]}, {"name": "id", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 361758, "num_examples": 997}, {"name": "test", "num_bytes": 379791, "num_examples": 1012}], "download_size": 412821, "dataset_size": 741549}} | 2022-12-07T10:18:42+00:00 |
12141ed192361f8a2721aaae8c391b435a397365 | # Dataset Card for "news_commentary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/news_commentary | [
"region:us"
]
| 2022-12-07T10:29:05+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["ar", "en"]}}}], "splits": [{"name": "train", "num_bytes": 72589357.6306394, "num_examples": 74868}, {"name": "test", "num_bytes": 8065807.369360597, "num_examples": 8319}], "download_size": 45743247, "dataset_size": 80655165.0}} | 2022-12-07T10:29:34+00:00 |