sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
54a392875563c471178438637212a270361715b3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
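If you want to inspect the stored predictions programmatically, a minimal sketch with the `datasets` library could look like this (the repository id is the one this card belongs to; the exact file layout and column names of the prediction files are not documented here, so the snippet only prints whatever is found):
```
from datasets import load_dataset

# Repository id taken from this card; column names are whatever the
# prediction files expose, so we only inspect them here.
repo_id = "autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866701"
predictions = load_dataset(repo_id)

print(predictions)  # available splits and columns
```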
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866701 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:46:24+00:00 |
0d4d16e9ffdb156fdc3ece80942469517125c43a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866704 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:25:57+00:00 |
fed472106d3a2aa869b81140bec2dedebebaeb64 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866705 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:21:51+00:00 |
9cc646a9fac38deb1980f415a056a9cdc7992cdb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866706 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:21:20+00:00 |
5ad77af9bbbb64d8e091b6d2a1bb0d5be78e3ec6 |
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt, add
<em style="font-weight:600">art by slime_style</em>
Add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by slime_style-6500</em></li>
<li>10,000 steps <em>Usage: art by slime_style</em> </li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/UU8lUKN.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/mrU4Ldw.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/TQEAKEa.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/gzRxFFd.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/hHah7Dt.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/hHah7Dt.jpg"></a>
| zZWipeoutZz/slime_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-07T17:20:43+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-07T17:33:39+00:00 |
6a799ab10990312cf80f0d1eeb3eafbbc18eee6b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866703 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:36:14+00:00 |
428e91185514f77f81566ac2d1e269edbd5554fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866702 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T18:18:12+00:00 |
189ad8662fdb96cd19ce86ada7d8eabde2d69247 |
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/xxx
- **Point of Contact:** [Ilias Chalkidis](mailto:[email protected])
### Dataset Summary
**Disclaimer: This is a pre-processed version of the LexFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles), where documents are pre-split into chunks of 512 tokens.**
The LeXFiles is a new diverse English multinational legal corpus that we created, comprising 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA); hence that corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
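As a quick orientation, a minimal loading sketch with the `datasets` library is shown below; the configuration names are the ones listed in this repository's metadata (e.g. `eu_legislation`), while the split name and the streaming flag are assumptions made for illustration:
```
from datasets import load_dataset

# Config names follow this repo's metadata, e.g. "eu_legislation",
# "uk_court_cases", "us_contracts", ...; split name "train" is assumed.
eu_legislation = load_dataset(
    "lexlms/lex_files_preprocessed",
    "eu_legislation",
    split="train",
    streaming=True,  # the sub-corpora are large, so stream instead of downloading
)

# Each record is a chunk of roughly 512 tokens of legal text.
for i, example in enumerate(eu_legislation):
    print(example)
    if i == 2:
        break
```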
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lexfiles` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold to exclude cases relying on severely outdated and in many cases harmful legal standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
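For readers unfamiliar with that scheme, the sampling columns above can be reproduced (up to rounding) from the `Pct.` column with the exponentially smoothed distribution of Lample et al. (2019), i.e. q_i = p_i^a / sum_j p_j^a; a small illustrative sketch:
```
# Exponential sampling (Lample et al., 2019): given corpus proportions p_i,
# corpus i is sampled with probability q_i = p_i**a / sum_j(p_j**a).
# The proportions below are the "Pct." column of the table above.
pct = {
    "eu-legislation": 1.2, "eu-court-cases": 0.9, "ecthr-cases": 0.4,
    "uk-legislation": 0.7, "uk-court-cases": 1.9, "indian-court-cases": 0.6,
    "canadian-legislation": 0.2, "canadian-court-cases": 0.2,
    "court-listener": 59.2, "us-legislation": 7.4, "us-contracts": 27.3,
}

def exponential_sampling(proportions, a):
    smoothed = {name: p ** a for name, p in proportions.items()}
    total = sum(smoothed.values())
    return {name: s / total for name, s in smoothed.items()}

for a in (0.5, 0.2):
    ratios = exponential_sampling(pct, a)
    print(f"a={a}:", {name: round(100 * r, 1) for name, r in ratios.items()})
```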
Additional corpora that were not considered for pre-training, since they do not represent factual legal knowledge:
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/xxx/)
```
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/xxx",
}
``` | lexlms/lex_files_preprocessed | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-07T17:27:54+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "LexFiles", "configs": ["eu_legislation", "eu_court_cases", "uk_legislation", "uk_court_cases", "us_legislation", "us_court_cases", "us_contracts", "canadian_legislation", "canadian_court_cases", "indian_court_cases"]} | 2023-05-10T15:01:44+00:00 |
97c8c45d205a5f24baddf626f6ed04ecc306b5d3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866707 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:32:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:35:45+00:00 |
d789d0498d9f7dee52e9ff4e1e2f18d2adeaf408 | RTM/LuckyData | [
"license:cc",
"region:us"
] | 2022-11-07T18:02:03+00:00 | {"license": "cc"} | 2022-11-07T18:02:03+00:00 |
|
516a6484ceb9cb23fead0f0cf5de86fd8ff963d7 | # Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/petitions-ds-v2 | [
"region:us"
] | 2022-11-07T18:13:34+00:00 | {"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29426840.1, "num_examples": 2475}, {"name": "validation", "num_bytes": 3269648.9, "num_examples": 275}], "download_size": 14382239, "dataset_size": 32696489.0}} | 2022-11-07T18:13:42+00:00 |
cfab6adcb824f395dbd46ffc3001ffd38128460d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@viralshanker](https://huggingface.co/viralshanker) for evaluating this model. | autoevaluate/autoeval-eval-squadshifts-amazon-74b272-2017966728 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:22:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squadshifts"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": [], "dataset_name": "squadshifts", "dataset_config": "amazon", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-07T19:25:09+00:00 |
b3b3a3a62ed04b6266acae69125216bef32bd040 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@viralshanker](https://huggingface.co/viralshanker) for evaluating this model. | autoevaluate/autoeval-eval-squadshifts-amazon-74b272-2017966729 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:22:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squadshifts"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": [], "dataset_name": "squadshifts", "dataset_config": "amazon", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-07T19:25:00+00:00 |
164c4c6b01f6ff2ac4b09b235de473bfdddfda9f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366741 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:42:58+00:00 |
9efbd406fbdcbaf43407452647359dc896d07380 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366738 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:57:18+00:00 |
07e6422317e8f235ab7f946475ab17fa72af8e70 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366742 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:45:46+00:00 |
dcc31dc3fb1f09771fec8b7bdade475b26fd584b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366736 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T20:07:18+00:00 |
0cf63e77119d0f0f992ebe49e450133ef24cace4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366735 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T21:41:07+00:00 |
b048e92848d7f9125b7c70cbafa2ec4c50b0864e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366739 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T20:37:13+00:00 |
480460c2c7aee0e610f719a6018cf6d78fbb0701 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366740 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:47:10+00:00 |
fadefe3f12997cab6f12c63824d313a0a76c889d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366737 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:44:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:45:39+00:00 |
026e6d42bde2c22ccd1d5bb55c47fd57e5bb5b13 | At0x/AIUniverse | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-07T22:38:55+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-09T22:39:52+00:00 |
|
8861727d8a9fcc7e1b9b997b3f160d40bba36e57 | AlliumPlayzDeluxo/wikiplussearch | [
"license:apache-2.0",
"region:us"
] | 2022-11-07T23:18:18+00:00 | {"license": "apache-2.0"} | 2022-11-07T23:20:34+00:00 |
|
df9ef59e5a8b02a5f4ec2c2b7bce07c0bafa921a | Duno9/text_inversion_toril | [
"license:openrail",
"region:us"
] | 2022-11-07T23:40:48+00:00 | {"license": "openrail"} | 2022-11-08T00:41:10+00:00 |
|
da826ecc7f0db408c41cb45766be606c44b3aed1 | mac326/test | [
"license:openrail",
"region:us"
] | 2022-11-08T00:43:45+00:00 | {"license": "openrail"} | 2022-11-08T00:49:45+00:00 |
|
ef0b6be47597c2ac7d3c116b1dffb405fbbda591 | jianguo/jianguo-1234 | [
"license:openrail",
"region:us"
] | 2022-11-08T03:12:13+00:00 | {"license": "openrail"} | 2022-11-08T03:12:13+00:00 |
|
25e7626c126613c2898bd29f8cb101e410fee989 | # Dataset Card for "olm-october-2022-tokenized-olm-bert-base-uncased"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized | [
"region:us"
] | 2022-11-08T04:52:36+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 84051313200.0, "num_examples": 23347587}], "download_size": 21176572924, "dataset_size": 84051313200.0}} | 2022-11-08T07:58:59+00:00 |
a3e6a10b65441edae7f8f1de9f20eec218082d20 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966768 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T07:38:50+00:00 |
42fda3c0d1ef504e2c100f16288a4da9e7a082b8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966769 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T05:54:50+00:00 |
98684aeb6f743727a96594d3fe2d5f5c0a3fc0c1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966770 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T05:39:34+00:00 |
3c9caa2f2f6960711e7f4d2e800581def2b6c183 |
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA)
- **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295)
- **Point of Contact:** [email protected]
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
```
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{"QuestionID": "q10",
"original cue": "rarely",
"PassageEditID": 0,
"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
"SampleID": 5294,
"label": "YES",
"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
"PassageID": 444,
"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```
### Data Fields
* `QuestionID`: unique ID for this question (might be asked for multiple passages)
* `original cue`: Negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: answer
* `original sentence`: Sentence that contains the negated statement
* `sentence2`: question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: passage
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("lasha-nlp/CONDAQA", split="train")
dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")
test_set = load_dataset("lasha-nlp/CONDAQA", split="test")
```
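Because the dataset is organized into contrastive clusters, a common preprocessing step is to group question-answer pairs that share a question across the original and edited passages. A hedged sketch based on the fields described above (and on the `dev_set` loaded in the snippet above):
```
from collections import defaultdict

# Group examples that ask the same question (QuestionID) about the same
# original passage (PassageID) across the passage-edit types (PassageEditID 0-3).
clusters = defaultdict(list)
for example in dev_set:
    clusters[(example["PassageID"], example["QuestionID"])].append(example)

# Each cluster holds up to four passage variants with their answers.
first_key = next(iter(clusters))
for example in clusters[first_key]:
    print(example["PassageEditID"], example["label"])
```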
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
### Citation Information
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
``` | lasha-nlp/CONDAQA | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"negation",
"reading comprehension",
"arxiv:2211.00295",
"region:us"
] | 2022-11-08T05:41:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found", "crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "condaqa", "tags": ["negation", "reading comprehension"]} | 2022-11-08T07:04:12+00:00 |
7249f98b8f4c45f81cd81b7bb91b1aac8161d693 | hgnghnhfgh | fuxijun/ccc | [
"region:us"
] | 2022-11-08T06:52:23+00:00 | {} | 2022-11-17T07:06:55+00:00 |
a25191b4a0575327e61f541374b9afe45387f772 | iKonaN/ley | [
"license:afl-3.0",
"region:us"
] | 2022-11-08T08:16:12+00:00 | {"license": "afl-3.0"} | 2022-11-08T08:16:12+00:00 |
|
92b053991b1742eaa198212617eed2abd572e0f3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-30c46b-2023566786 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T08:17:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T12:21:56+00:00 |
b9cd95a557cc71a144179dfbc97b9603382e1cfa | # Dataset Card for "laion2B-fa-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | amir7d0/laion2B-fa-images | [
"region:us"
] | 2022-11-08T08:49:53+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "IMAGE_PATH", "dtype": "string"}, {"name": "IMAGE", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 21488547.0, "num_examples": 1000}], "download_size": 21283656, "dataset_size": 21488547.0}} | 2022-11-09T16:36:43+00:00 |
3ac5d43d148f74d080320b6b27d841a712f87cbc |
This is a dataset which contains the docs from all the PRs updating one of the docs from https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. | hf-doc-build/doc-build-dev | [
"license:mit",
"documentation",
"region:us"
] | 2022-11-08T09:03:37+00:00 | {"license": "mit", "pretty_name": "HF Documentation (PRs)", "tags": ["documentation"]} | 2024-02-17T17:44:01+00:00 |
45fb5843a8fc3fde3028a623d7afb8d3e8f42007 | # Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/petitions-ds | [
"region:us"
] | 2022-11-08T09:15:48+00:00 | {"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29426840.1, "num_examples": 2475}, {"name": "validation", "num_bytes": 3269648.9, "num_examples": 275}], "download_size": 14382239, "dataset_size": 32696489.0}} | 2022-11-08T09:28:57+00:00 |
546126dd7206964952182cc541052f1649e78525 | # Dataset Card for "test_push3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push3 | [
"region:us"
] | 2022-11-08T09:20:41+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 46, "num_examples": 3}, {"name": "train", "num_bytes": 116, "num_examples": 8}], "download_size": 1698, "dataset_size": 162}} | 2022-11-08T09:21:09+00:00 |
2bebc3c89a3f327680c2f6ae9d62b1e86fb6b6b6 | # Dataset Card for "resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset | [
"region:us"
] | 2022-11-08T09:24:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 355695532, "num_examples": 161071}, {"name": "train", "num_bytes": 1421896716, "num_examples": 644282}], "download_size": 896434509, "dataset_size": 1777592248}} | 2022-11-08T09:25:22+00:00 |
a0aedcc2333fb5e70217bf070e0ae193c2254897 | # Dataset Card for "tmp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | amir7d0/tmp | [
"region:us"
] | 2022-11-08T09:25:03+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "IMAGE_PATH", "dtype": "string"}, {"name": "IMAGE", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 599579428.0, "num_examples": 100000}], "download_size": 2124724355, "dataset_size": 599579428.0}} | 2022-11-09T13:28:01+00:00 |
c3b175a8dfdcaaf7ad64a1f0ba2939f4266948bb | # Dataset Card for "test_push4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push4 | [
"region:us"
] | 2022-11-08T09:30:18+00:00 | {"dataset_info": [{"config_name": "v1", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}]}, {"config_name": "v2", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}]}]} | 2022-11-08T09:47:55+00:00 |
c99d6d2f4a02dacd94f6ffd3055db5472613750e | # Dataset Card for "test_push_no_conf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_no_conf | [
"region:us"
] | 2022-11-08T09:53:55+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1712, "dataset_size": 166}} | 2022-11-08T09:54:13+00:00 |
f0471f90290414cceb9e69cc3c16ffff338c4e9d | # Dataset Card for "tokenize_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/tokenize_resume_dataset | [
"region:us"
] | 2022-11-08T09:55:43+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "test", "num_bytes": 275640050, "num_examples": 161071}, {"name": "train", "num_bytes": 1102620205, "num_examples": 644282}], "download_size": 521528169, "dataset_size": 1378260255}} | 2022-11-08T09:56:21+00:00 |
c6abcf44778df8dbf38ba6599b19ed196ea6e5ae | # Dataset Card for "lm_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/lm_resume_dataset | [
"region:us"
] | 2022-11-08T10:22:14+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 714031412, "num_examples": 107083}, {"name": "train", "num_bytes": 2856345596, "num_examples": 428365}], "download_size": 1035174948, "dataset_size": 3570377008}} | 2022-11-08T10:23:33+00:00 |
8616749880709e4f10ab40bcad2fc62e33caed34 | All images taken from https://github.com/InputBlackBoxOutput/logo-images-dataset | superchthonic/logos-dataset | [
"region:us"
] | 2022-11-08T10:41:41+00:00 | {} | 2022-11-08T10:42:10+00:00 |
1fa6a3831dae1addb2e2f712bbf13edcd94b274a | # Dataset Card for "test_push_two_confs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_two_confs | [
"region:us"
] | 2022-11-08T11:39:59+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1712, "dataset_size": 166}} | 2022-11-08T11:40:48+00:00 |
1f8d799c0974a1eec9499eb68a6a4c1092d4477d | # Dataset Card for "vira-intents-live"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ibm/vira-intents-live | [
"region:us"
] | 2022-11-08T12:34:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 536982, "num_examples": 7434}, {"name": "validation", "num_bytes": 227106, "num_examples": 3140}], "download_size": 348220, "dataset_size": 764088}} | 2022-11-22T15:12:25+00:00 |
667f41421b215542d57fb403481f6dab10c0759f | # Dataset Card for "AJ_sentence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ayush2609/AJ_sentence | [
"region:us"
] | 2022-11-08T13:42:24+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 249843.62830074583, "num_examples": 4464}, {"name": "validation", "num_bytes": 27816.37169925418, "num_examples": 497}], "download_size": 179173, "dataset_size": 277660.0}} | 2022-11-08T14:58:24+00:00 |
f41edc00905904578c4be9dd48c81da5b159ea05 | # Dataset Card for "artificial-unbalanced-500Kb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PGT/artificial-unbalanced-500K | [
"region:us"
] | 2022-11-08T14:11:11+00:00 | {"dataset_info": {"features": [{"name": "edge_index", "sequence": {"sequence": "int64"}}, {"name": "y", "sequence": "int64"}, {"name": "num_nodes", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2712963616, "num_examples": 499986}], "download_size": 398809184, "dataset_size": 2712963616}} | 2022-11-08T14:16:21+00:00 |
bb2672ee1cfd0d5b8ec99ccce7f08a77c0d119b7 | Andris2067/Ainava | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-08T15:29:08+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-08T16:14:01+00:00 |
|
6586dd8a9de762b7b8c7ed19b5e1b9feca2df218 | poppingtonic/book-dataset | [
"license:afl-3.0",
"region:us"
] | 2022-11-08T21:08:47+00:00 | {"license": "afl-3.0"} | 2022-11-08T21:08:47+00:00 |
|
0810deca4374fdadc5c433acebf0d1f8b16c7312 | zahragolpa/Caltech101 | [
"license:cc",
"region:us"
] | 2022-11-08T21:34:53+00:00 | {"license": "cc"} | 2022-11-08T21:34:53+00:00 |
|
f0425b614beebe3234f5f4256600d56b0d369947 | bahidalgo/Me | [
"license:afl-3.0",
"region:us"
] | 2022-11-08T22:29:38+00:00 | {"license": "afl-3.0"} | 2022-11-08T22:47:56+00:00 |
|
869802e52b4dfa074d8a8e255ce85580711cdc25 |
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is prioritizing the question. The classification scale usually consists of four values, P0, P1, P2, and P3, whose exact meaning differs across the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of each question. This dataset aims to provide an accurate prioritization for programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions, together with a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
title: string,
body: string,
label: int
### Data Splits
The split is 40/40/20, and the classes have been balanced to be around the same size.
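Below is a minimal loading sketch (added here, not part of the original card). It assumes the Hugging Face `datasets` library; the `train` split name is an assumption, so check the printed splits for the actual names:
```python
from datasets import load_dataset

# Load the dataset by its Hub repository id
dataset = load_dataset("pacovaldez/stackoverflow-questions")

print(dataset)  # shows the available splits and their sizes

example = dataset["train"][0]          # "train" is assumed here
print(example["title"], example["label"])
```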
## Dataset Creation
The dataset was extracted and labeled with the following query in BigQuery:
```
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
### Source Data
The data was extracted from the BigQuery public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:
| label | count |
|------------:|---------:|
| 0 | 977424 |
| 1 | 2401534 |
| 2 | 3418179 |
| 3 | 16222990 |
| Grand Total | 23020127 |
The data was then sampled from each class so that every class has around the same number of records.
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
| pacovaldez/stackoverflow-questions | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technical questions",
"region:us"
] | 2022-11-09T01:16:19+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "stackoverflow_post_questions", "tags": ["stackoverflow", "technical questions"]} | 2022-11-10T00:14:37+00:00 |
f42882dca80f8604ea1ee720b24e45079d610a47 | # Dataset Card for "dataset_from_synthea_for_NER_with_train_val_test_splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jage/dataset_from_synthea_for_NER_with_train_val_test_splits | [
"region:us"
] | 2022-11-09T02:20:42+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-DATE", "2": "I-DATE", "3": "B-NAME", "4": "I-NAME", "5": "B-AGE", "6": "I-AGE"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 6614328, "num_examples": 19176}, {"name": "train", "num_bytes": 32139432.0, "num_examples": 92300}, {"name": "val", "num_bytes": 13463574.0, "num_examples": 38138}], "download_size": 4703482, "dataset_size": 52217334.0}} | 2022-11-09T02:21:11+00:00 |
4bf5b5ed178e0e8052b3ec7ea5f7d745ad63cb3b | # AutoTrain Dataset for project: led-samsum-dialogsum
## Dataset Description
This dataset has been automatically processed by AutoTrain for project led-samsum-dialogsum.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 0,
"feat_id": 0,
"text": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you tomorrow :-)",
"target": "Amanda baked cookies and will bring Jerry some tomorrow."
},
{
"feat_Unnamed: 0": 1,
"feat_id": 1,
"text": "Olivia: Who are you voting for in this election? \nOliver: Liberals as always.\nOlivia: Me too!!\nOliver: Great",
"target": "Olivia and Olivier are voting for liberals in this election. "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"feat_id": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 27191 |
| valid | 1318 |
| skashyap96/autotrain-data-led-samsum-dialogsum | [
"region:us"
] | 2022-11-09T04:39:14+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-11-09T08:45:51+00:00 |
ef21714574a046223d5e3d0dae6ec3c9d6f7d9c4 | nlhuong/panda_and_koala | [
"license:artistic-2.0",
"region:us"
] | 2022-11-09T06:15:19+00:00 | {"license": "artistic-2.0"} | 2022-11-12T10:18:12+00:00 |
|
cf2e3a51cc29efa42c3c1e4903282e800a865ce5 | camenduru/plushies | [
"region:us"
] | 2022-11-09T06:54:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42942055.0, "num_examples": 730}], "download_size": 42653871, "dataset_size": 42942055.0}, "models": ["camenduru/plushies"]} | 2022-11-18T03:16:34+00:00 |
|
a7d7dedccabae5165972e24bcbd4ef50723db0d7 | # Dataset Card for "resume_dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset_train | [
"region:us"
] | 2022-11-09T07:20:00+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2856338396, "num_examples": 428365}], "download_size": 828086360, "dataset_size": 2856338396}} | 2022-11-09T07:20:47+00:00 |
2d9cb87dc7d013ac635c85ce578fcb53d526a9b5 | # Dataset Card for "resume_dataset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset_test | [
"region:us"
] | 2022-11-09T07:20:48+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 714029588, "num_examples": 107083}], "download_size": 207066918, "dataset_size": 714029588}} | 2022-11-09T07:21:01+00:00 |
3fbbcbdb0f6ead4b2933547ceea3729e2dc463c2 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Sotaro0124/Ainu-Japan_translation_model | [
"region:us"
] | 2022-11-09T08:03:09+00:00 | {} | 2022-11-09T08:11:39+00:00 |
47d0385d3210b59938b3a7cca665abab29eccff4 | Over 20,000 1024x1024 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion, along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
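As a rough illustration (added here, not from the original card), the mel spectrogram of one clip could be computed with `librosa` along the following lines; the clip path is a placeholder, and mapping the result onto the exact x_res × y_res image is assumed to be handled by the conversion code in the linked repository:
```python
import librosa
import numpy as np

sample_rate = 44100
n_fft = 2048
hop_length = 512
n_mels = 1024  # assumed to match y_res below

# Load a 5-second clip (placeholder path)
audio, sr = librosa.load("clip.wav", sr=sample_rate, duration=5.0)

# Mel spectrogram converted to decibels
S = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=n_fft,
                                   hop_length=hop_length, n_mels=n_mels)
log_S = librosa.power_to_db(S, ref=np.max)
print(log_S.shape)  # (n_mels, frames)
```
The exact parameters used for this dataset are: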
```
x_res = 1024
y_res = 1024
sample_rate = 44100
n_fft = 2048
hop_length = 512
``` | teticio/audio-diffusion-1024 | [
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"audio",
"spectrograms",
"region:us"
] | 2022-11-09T09:22:02+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of music", "tags": ["audio", "spectrograms"]} | 2022-11-09T10:49:29+00:00 |
5c4e8f1aec1d0567864e8d7fd0c13f47084aaa09 | # Dataset Card for "zhou_ebola_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/zhou_ebola_human | [
"region:us"
] | 2022-11-09T09:22:23+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 275414, "num_examples": 300}, {"name": "train", "num_bytes": 29425605, "num_examples": 22682}], "download_size": 6430757, "dataset_size": 29701019}} | 2022-11-09T09:22:57+00:00 |
225c714c5b77688cad4b649c7c3fcccafcb4ecf7 | # Dataset Card for "zhou_h1n1_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/zhou_h1n1_human | [
"region:us"
] | 2022-11-09T09:36:31+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 723379, "num_examples": 762}, {"name": "train", "num_bytes": 28170698, "num_examples": 21716}], "download_size": 12309236, "dataset_size": 28894077}} | 2022-11-09T09:37:18+00:00 |
73bb31ac9151c2afe2dbcf1165d916927f78b0c8 | # Dataset Card for "williams_mtb_hpidb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/williams_mtb_hpidb | [
"region:us"
] | 2022-11-09T09:49:32+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5138954, "num_examples": 4192}, {"name": "train", "num_bytes": 19964860, "num_examples": 16768}], "download_size": 16427398, "dataset_size": 25103814}} | 2022-11-09T09:50:16+00:00 |
367e0114c039c5259108e5cf72048e0d46bf861e |
# Dataset Card for "bill_summary_us"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BillML](https://github.com/dreamproit/BillML)
- **Repository:** [BillML](https://github.com/dreamproit/BillML)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summary_us).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
### Data Fields
- id: id of the bill in the format (congress number + bill type + bill number + bill version).
- congress: number of the congress.
- bill_type: type of the bill.
- bill_number: number of the bill.
- bill_version: version of the bill.
- sections: list of bill sections with section_id, text and header.
- sections_length: length of the sections list.
- text: bill text.
- text_length: number of characters in the text.
- summary: summary of the bill.
- summary_length: number of characters in the summary.
- title: official title of the bill.
### Data Splits
train
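A minimal loading sketch (added here, not part of the original card; it assumes the Hugging Face `datasets` library and the `train` split listed above):
```python
from datasets import load_dataset

bills = load_dataset("dreamproit/bill_summary_us", split="train")

example = bills[0]
print(example["title"])
print(example["text_length"], example["summary_length"])  # recorded character counts
print(example["summary"][:200])                            # CRS summary, truncated for display
```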
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process, helping readers understand a bill's meaning and potential legislative impact.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
[govinfo.gov](https://www.govinfo.gov/)
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the [govinfo.gov](https://www.govinfo.gov/) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[dreamproit.com](https://dreamproit.com/)
### Licensing Information
Bill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@aih](https://github.com/aih) [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. | dreamproit/bill_summary_us | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"bills",
"legal",
"region:us"
] | 2022-11-09T10:13:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "bill_summary_us", "tags": ["bills", "legal"], "configs": [{"config_name": "default"}]} | 2023-10-17T03:16:57+00:00 |
34b62ff3c2487b0e4a7cf74b19d636fe73b26e0c |
# Dataset Card for "saf_legal_domain_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of the German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english) for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt.",
"verification_feedback": "Correct",
"error_class": "Keine",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `error_class`: a `string` feature representing the type of error identified in the case of a not completely correct answer.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came from).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1596| 400| 221| 275|
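A minimal loading sketch (added here, not part of the original card; it assumes the Hugging Face `datasets` library and the split names listed above):
```python
from datasets import load_dataset

saf = load_dataset("Short-Answer-Feedback/saf_legal_domain_german")

print(saf)  # train / validation / test_unseen_answers / test_unseen_questions
sample = saf["train"][0]
print(sample["question"])
print(sample["provided_answer"])
print(sample["verification_feedback"], sample["score"])
```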
## Additional Information
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | Short-Answer-Feedback/saf_legal_domain_german | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"short answer feedback",
"legal domain",
"region:us"
] | 2022-11-09T10:35:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "pretty_name": "SAF - Legal Domain - German", "tags": ["short answer feedback", "legal domain"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "provided_answer", "dtype": "string"}, {"name": "answer_feedback", "dtype": "string"}, {"name": "verification_feedback", "dtype": "string"}, {"name": "error_class", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2142112, "num_examples": 1596}, {"name": "validation", "num_bytes": 550206, "num_examples": 400}, {"name": "test_unseen_answers", "num_bytes": 301087, "num_examples": 221}, {"name": "test_unseen_questions", "num_bytes": 360616, "num_examples": 275}], "download_size": 484808, "dataset_size": 3354021}} | 2023-03-31T10:47:38+00:00 |
27a82a06d52de0e83e16b989032ce51650145440 | Zicara/Hands_11k | [
"license:unknown",
"region:us"
] | 2022-11-09T10:43:17+00:00 | {"license": "unknown"} | 2022-11-17T17:23:43+00:00 |
|
98f2b57b8be4e53c21ae981fd42495055004294b | This dataset is based on the "cumulative" configuration of the MultiWoz 2.2 dataset, which is also available on the [HuggingFace Hub](https://huggingface.co/datasets/multi_woz_v22).
Therefore, the system and user utterances, the active intents, and the services are exactly the same.
In addition to the data present in version 2.2, this dataset contains, for each dialogue turn, the annotations from versions 2.1, 2.3, and 2.4.
NOTE:
- Each dialogue turn is composed of a system utterance and a user utterance, in this exact order
- The initial system utterance is filled in with the `none` string
- In the last dialogue turn, it is always the system that greets the user; this last turn is kept and the user utterance is filled in with the `none` string (this dialogue turn is usually not considered during evaluation)
- To be able to save the data as an Arrow file, the states need to be "padded" so that they all have the same keys; this is done by introducing `None` values. When loading the data back, it is therefore convenient to have a way to remove the "padding". A function like the following can help
```python
from typing import Dict, List, Union

def remove_empty_slots(state: Union[Dict[str, Union[List[str], None]], None]) -> Union[Dict[str, List[str]], None]:
if state is None:
return None
return {k: v for k, v in state.items() if v is not None}
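
# Illustrative usage (an added sketch, not from the original card); the slot
# names follow the schema listed below:
# remove_empty_slots({"restaurant-food": ["italian"], "restaurant-area": None})
# -> {"restaurant-food": ["italian"]}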
```
- The schema has been updated to make all the versions compatible. Basically, the "book" string has been removed from slots in v2.2. The updated schema is the following
```yaml
attraction-area
attraction-name
attraction-type
hotel-area
hotel-day
hotel-internet
hotel-name
hotel-parking
hotel-people
hotel-pricerange
hotel-stars
hotel-stay
hotel-type
restaurant-area
restaurant-day
restaurant-food
restaurant-name
restaurant-people
restaurant-pricerange
restaurant-time
taxi-arriveby
taxi-departure
taxi-destination
taxi-leaveat
train-arriveby
train-day
train-departure
train-destination
train-leaveat
train-people
``` | pietrolesci/multiwoz_all_versions | [
"region:us"
] | 2022-11-09T10:51:56+00:00 | {} | 2022-11-10T11:50:53+00:00 |
c9d83173de7024e112c2d0c815fb0c2b1301dc1e | # Dataset Card for "multi-label-classification-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | andreotte/multi-label-classification-test | [
"region:us"
] | 2022-11-09T12:42:43+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "Door", "1": "Eaves", "2": "Gutter", "3": "Vegetation", "4": "Vent", "5": "Window"}}}}, {"name": "pixel_values", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 9476052.0, "num_examples": 151}, {"name": "train", "num_bytes": 82422534.7, "num_examples": 1315}], "download_size": 91894615, "dataset_size": 91898586.7}} | 2022-11-09T12:42:54+00:00 |
6bd93f58710308b5e09fd788a8c9585fe20fe4c6 | Rahaneg/opdQA | [
"region:us"
] | 2022-11-09T15:32:32+00:00 | {} | 2022-11-10T03:16:48+00:00 |
|
75b569b006880d60ccd260a7f9492309f2bd7e5e | # Dataset Card for "dummy_data_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | loubnabnl/dummy_data_clean | [
"region:us"
] | 2022-11-09T17:05:20+00:00 | {"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "annotation_id", "dtype": "string"}, {"name": "pii", "dtype": "string"}, {"name": "pii_modified", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3808098.717948718, "num_examples": 400}], "download_size": 1311649, "dataset_size": 3808098.717948718}} | 2022-11-09T17:05:43+00:00 |
b5742c509417def7094c043d94a9c311b1d63b8e | My photos to train AI | rafaelmotac/rafaelcorreia | [
"region:us"
] | 2022-11-09T17:53:00+00:00 | {} | 2022-11-09T22:39:48+00:00 |
3c62f26bafdc4c4e1c16401ad4b32f0a94b46612 | # Dataset Card for "swerec-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ScandEval/swerec-mini | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sv",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-11-09T18:15:56+00:00 | {"language": ["sv"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 713970, "num_examples": 2048}, {"name": "train", "num_bytes": 355633, "num_examples": 1024}, {"name": "val", "num_bytes": 82442, "num_examples": 256}], "download_size": 684710, "dataset_size": 1152045}} | 2023-07-05T08:46:49+00:00 |
0172a82241343327a319f1afa42957039e6ab9b4 | # Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | muhammadbilal5110/indian_food_images | [
"region:us"
] | 2022-11-09T18:19:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chai", "3": "chapati", "4": "chole_bhature", "5": "dal_makhani", "6": "dhokla", "7": "fried_rice", "8": "idli", "9": "jalebi", "10": "kaathi_rolls", "11": "kadai_paneer", "12": "kulfi", "13": "masala_dosa", "14": "momos", "15": "paani_puri", "16": "pakode", "17": "pav_bhaji", "18": "pizza", "19": "samosa"}}}}], "splits": [{"name": "test", "num_bytes": -50510587.406603925, "num_examples": 941}, {"name": "train", "num_bytes": -283960930.24139607, "num_examples": 5328}], "download_size": 1600880763, "dataset_size": -334471517.648}} | 2022-11-09T18:20:32+00:00 |
bac3f20df77a27858495b76880121c1e9531d9c7 |
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia_pseudo"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_harvesting_from_wikipedia`](https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia), the 1 million paragraph and answer pairs collected in [Du and Cardie, 2018](https://aclanthology.org/P18-1177/). It is made for the question answering-based evaluation (QAE) of question generation models proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The `train` split is the synthetic data and the `validation` split is the original validation set of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), on which the model should be evaluated.
This contains synthetic QA datasets created with the following QG models:
- [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
See more detail about the QAE at [https://github.com/asahi417/lm-question-generation/tree/master/misc/qa_based_evaluation](https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
### Data Splits
|train |validation|
|--------:|---------:|
|1,092,142| 10,570 |
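A minimal loading sketch (added here, not part of the original card). It assumes the Hugging Face `datasets` library; whether a configuration name matching one of the QG models listed above must be passed is an assumption, so check the repository for the exact configuration names:
```python
from datasets import load_dataset

# The configuration name used here is an assumption based on the model list above
qa = load_dataset("lmqg/qa_harvesting_from_wikipedia_pseudo", "t5-base-squad")

print(qa)  # train (synthetic) / validation (SQuAD dev)
example = qa["train"][0]
print(example["question"])
print(example["answers"])
```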
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qa_harvesting_from_wikipedia_pseudo | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2022-11-09T19:05:38+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Synthetic QA dataset."} | 2022-11-10T11:30:06+00:00 |
e431cd6f537d0c97e854ed2137f4f996d49af5c5 | More information comming soon. | dreamproit/bill_summary | [
"region:us"
] | 2022-11-09T20:03:45+00:00 | {} | 2022-11-10T08:18:27+00:00 |
5eb17d96da67cef7250294e82b6a55ea81dcd5d6 | More information comming soon. | dreamproit/bill_summary_ua | [
"region:us"
] | 2022-11-09T20:04:02+00:00 | {} | 2022-11-10T08:18:05+00:00 |
c9f2148409945b463a4ec616f74e3d193bde1c64 | NosaOmer/arnosa | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-09T20:14:33+00:00 | {"license": "cc-by-sa-4.0"} | 2022-11-09T20:14:33+00:00 |
|
88bec913d85b5e2b31dae8730a980a246098c45f |
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing model-specific flags such as "--ar" for Midjourney and so on
- data was de-duplicated at a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
## updates
- Oct 2023: the `default` config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.
- original version is still available under `config_name="original"`
## contents
default:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 1677221
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 292876
})
})
```
For `original` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
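A minimal loading sketch (added here, not part of the original card; it assumes the Hugging Face `datasets` library and the config names described above):
```python
from datasets import load_dataset

deduped = load_dataset("pszemraj/text2image-multi-prompt")               # default (deduplicated) config
original = load_dataset("pszemraj/text2image-multi-prompt", "original")  # original (larger) config

sample = deduped["train"][0]
print(sample["text"])         # the prompt
print(sample["src_dataset"])  # which source dataset it came from
```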
_NOTE: as the other two datasets did not have a `validation` split, the validation split of `succinctly/midjourney-prompts` was merged into `train`._ | pszemraj/text2image-multi-prompt | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"multilinguality:monolingual",
"source_datasets:bartman081523/stable-diffusion-discord-prompts",
"source_datasets:succinctly/midjourney-prompts",
"source_datasets:Gustavosta/Stable-Diffusion-Prompts",
"language:en",
"license:apache-2.0",
"text generation",
"region:us"
] | 2022-11-09T22:47:39+00:00 | {"language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "source_datasets": ["bartman081523/stable-diffusion-discord-prompts", "succinctly/midjourney-prompts", "Gustavosta/Stable-Diffusion-Prompts"], "task_categories": ["text-generation", "feature-extraction"], "pretty_name": "multi text2image prompts a dataset collection", "tags": ["text generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "original", "data_files": [{"split": "train", "path": "original/train-*"}, {"split": "test", "path": "original/test-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}, {"name": "src_dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 262736830, "num_examples": 1677221}, {"name": "test", "num_bytes": 56294291, "num_examples": 292876}], "download_size": 151054782, "dataset_size": 319031121}, {"config_name": "original", "features": [{"name": "text", "dtype": "string"}, {"name": "src_dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 741427383, "num_examples": 3551734}, {"name": "test", "num_bytes": 83615440, "num_examples": 399393}], "download_size": 402186258, "dataset_size": 825042823}]} | 2023-11-21T13:19:29+00:00 |
3afe16b210dec396ba32a4c4669a951a13c8d1c0 | # Dataset Card for "quick-captioning-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nateraw/quick-captioning-dataset-test | [
"region:us"
] | 2022-11-09T23:16:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 345244.0, "num_examples": 4}], "download_size": 0, "dataset_size": 345244.0}} | 2022-11-09T23:20:40+00:00 |
379266b9d42eae2923d3bb4e2fa5e9e4cdc608fe | # Dataset Card for "test_pinkeyrepo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | treksis/test_pinkeyrepo | [
"region:us"
] | 2022-11-10T00:01:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 906786.0, "num_examples": 5}], "download_size": 908031, "dataset_size": 906786.0}} | 2022-11-10T00:01:25+00:00 |
ae03d5b8fc12f95b1b965ef6f3fabf29b6eaf2a8 |
## Description
The Spam SMS dataset is a collection of SMS messages that have been tagged for SMS spam research. It contains a single set of 5,574 SMS messages in English, each tagged as ham (legitimate) or spam.
Source: [uciml/sms-spam-collection-dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset) | Ngadou/Spam_SMS | [
"license:cc",
"doi:10.57967/hf/0749",
"region:us"
] | 2022-11-10T00:24:36+00:00 | {"license": "cc"} | 2022-11-10T09:06:25+00:00 |
eddcf0f010fb54164d0ff44402da8be69ac3684b | The dataset contains queries for a Prolog database of facts about USA geography. Taken from [this source](https://www.cs.utexas.edu/users/ml/nldata/geoquery.html) | dvitel/geo | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"task_ids:explanation-generation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:other-en-prolog",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gpl-2.0",
"geo",
"prolog",
"semantic-parsing",
"code-generation",
"region:us"
] | 2022-11-10T00:30:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["gpl-2.0"], "multilinguality": ["other-en-prolog"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "text2text-generation"], "task_ids": ["language-modeling", "explanation-generation"], "pretty_name": "GEO - semantic parsing to Geography Prolog queries", "tags": ["geo", "prolog", "semantic-parsing", "code-generation"]} | 2022-11-10T00:50:17+00:00 |
fe7cf7c231bfd0366e56ed6242d1421d23483e1d | Dataset for the HEARTHSTONE card game, taken from [this source](https://github.com/deepmind/card2code/tree/master/third_party/hearthstone)
| dvitel/hearthstone | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:other-en-python",
"size_categories:n<1K",
"language:en",
"license:mit",
"code-synthesis",
"semantic-parsing",
"python",
"hearthstone",
"region:us"
] | 2022-11-10T01:13:57+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["mit"], "multilinguality": ["other-en-python"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "HEARTHSTONE - synthesis of python code for card game descriptions", "tags": ["code-synthesis", "semantic-parsing", "python", "hearthstone"]} | 2022-11-10T01:24:14+00:00 |
904ada614d1d3dd374dd4752730b0db9017334df | # Stable Diffusion Prompts 2M
Because the DiffusionDB dataset is too big, I extracted the prompts for prompt study.
The files:
- sd_promts_2m.txt : the main dataset.
- sd_top5000.keywords.tsv: the top 5,000 most frequent keywords or phrases.
| andyyang/stable_diffusion_prompts_2m | [
"license:cc0-1.0",
"region:us"
] | 2022-11-10T04:42:33+00:00 | {"license": "cc0-1.0"} | 2022-11-10T06:38:10+00:00 |
8d62a7d805261fc2ffd233a4f31e33049d87eec4 | # Dataset Card for COYO-Labeled-300M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email]([email protected])
### Dataset Summary
**COYO-Labeled-300M** is a dataset of 300M **machine-labeled** image–multi-label pairs. We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K, following the same evaluation pipeline as in EfficientNetV2. The labels are the top 50 most likely labels out of the 21,841 classes of ImageNet-21K. The label probabilities are provided rather than hard labels, so that users can either pick a threshold of their choice for multi-label classification or take the top-1 class for single-class classification.
In other words, **COYO-Labeled-300M** is an ImageNet-like dataset. Instead of 1.25 million human-labeled samples, it contains 300 million machine-labeled samples. This dataset is similar to JFT-300M, which has not been released to the public.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, [ViT](https://arxiv.org/abs/2010.11929).
We found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.
We also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.
### Languages
The labels in the COYO-Labeled-300M dataset consist of English.
## Dataset Structure
### Data Instances
Each instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes.
And we also provide label information, **imagenet21k_tree.pickle**.
```
{
'id': 315,
'url': 'https://a.1stdibscdn.com/pair-of-blue-and-white-table-lamps-for-sale/1121189/f_121556431538206028457/12155643_master.jpg?width=240',
'imagehash': 'daf5a50aae4aa54a',
'labels': [8087, 11054, 8086, 6614, 6966, 8193, 10576, 9710, 4334, 9909, 8090, 10104, 10105, 9602, 5278, 9547, 6978, 12011, 7272, 5273, 6279, 4279, 10903, 8656, 9601, 8795, 9326, 4606, 9907, 9106, 7574, 10006, 7257, 6959, 9758, 9039, 10682, 7164, 5888, 11654, 8201, 4546, 9238, 8197, 10882, 17380, 4470, 5275, 10537, 11548],
'label_probs': [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875, 0.03240966796875, 0.0157928466796875, 0.01406097412109375, 0.01129150390625, 0.00978851318359375, 0.00841522216796875, 0.007720947265625, 0.00634002685546875, 0.0041656494140625, 0.004070281982421875, 0.002910614013671875, 0.0028018951416015625, 0.002262115478515625, 0.0020503997802734375, 0.0017080307006835938, 0.0016880035400390625, 0.0016679763793945312, 0.0016613006591796875, 0.0014324188232421875, 0.0012445449829101562, 0.0011739730834960938, 0.0010318756103515625, 0.0008969306945800781, 0.0008792877197265625, 0.0008726119995117188, 0.0008263587951660156, 0.0007123947143554688, 0.0006799697875976562, 0.0006561279296875, 0.0006542205810546875, 0.0006093978881835938, 0.0006046295166015625, 0.0005769729614257812, 0.00057220458984375, 0.0005636215209960938, 0.00055694580078125, 0.0005092620849609375, 0.000507354736328125, 0.000507354736328125, 0.000499725341796875, 0.000484466552734375, 0.0004456043243408203, 0.0004439353942871094, 0.0004355907440185547, 0.00043392181396484375, 0.00041866302490234375],
'width': 240,
'height': 240
}
```
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) which is the same value that is mapped with the existing COYO-700M. |
| url | string | The image URL extracted from the `src` attribute of the `<img>` |
| imagehash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| labels | sequence[integer] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes) |
| label_probs | sequence[float] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 probabilites) |
| width | integer | The width of the image |
| height | integer | The height of the image |
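As an added illustration (not part of the original card), the `labels` / `label_probs` fields can be turned into multi-label or single-label targets as sketched below; the 0.3 threshold is an arbitrary choice:
```python
def to_targets(labels, label_probs, threshold=0.3):
    """Convert the top-50 class indices and probabilities into usable targets."""
    # Multi-label: keep every class whose probability clears the threshold
    multi_label = [c for c, p in zip(labels, label_probs) if p >= threshold]
    # Single-label: the top-1 class (the lists are ordered by descending probability)
    top1 = labels[0]
    return multi_label, top1

# Using (a truncated version of) the example instance shown above
labels = [8087, 11054, 8086, 6614]
label_probs = [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875]
print(to_targets(labels, label_probs))  # ([8087, 11054], 8087)
```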
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K. Data was sampled to a size similar to JFT-300M, filtered by a specific threshold on the top-1 label probability.
### Source Data
[COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m)
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
The basic instruction, licenses and contributors are the same as for the [coyo-700m](https://huggingface.co/datasets/kakaobrain/coyo-700m).
| kakaobrain/coyo-labeled-300m | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"image-labeled pairs",
"arxiv:2010.11929",
"region:us"
] | 2022-11-10T06:30:56+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "COYO-Labeled-300M", "tags": ["image-labeled pairs"]} | 2022-11-11T01:11:22+00:00 |
0d7f9fd522ab3d00f91cfff921cadfefdb25f0aa | lcolok/Asian_Regularization_images | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-10T06:32:17+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-10T07:07:10+00:00 |
|
10f0d626a402d8a2ef4a98e5d0e41201bdd8a61f | # 1. Overview
This dataset is a collection of 5,000+ clothing & apparel images that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region, offering fully-managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.
# 2. Use case
The e-commerce apparel dataset could be used for various AI & computer vision models: Product Visual Search, Similar Product Recommendation, Product Catalog, and more. Each dataset goes through both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform, providing data, content, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology to manage, curate and process over 100M visual materials and to serve global leading brands with their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email [email protected]." | pixta-ai/e-commerce-apparel-dataset-for-ai-ml | [
"license:other",
"region:us"
] | 2022-11-10T08:03:47+00:00 | {"license": "other"} | 2023-02-22T14:21:46+00:00 |
cb94668398d1077685f48d607207c315c34ebc7c | KETI-AIR/aihub_living_env_vqa | [
"license:apache-2.0",
"region:us"
] | 2022-11-10T09:56:59+00:00 | {"license": "apache-2.0"} | 2022-11-11T01:37:49+00:00 |
|
61f035be1be19394fd41ca836fb5cfd7b183a424 | KETI-AIR/aihub_visual_info_vqa | [
"license:apache-2.0",
"region:us"
] | 2022-11-10T09:57:44+00:00 | {"license": "apache-2.0"} | 2022-11-10T09:58:04+00:00 |
|
853470d118146bd1efd05a12e41e09838c74c7b7 | KETI-AIR/kvqa | [
"license:apache-2.0",
"region:us"
] | 2022-11-10T09:58:26+00:00 | {"license": "apache-2.0"} | 2022-11-10T09:58:40+00:00 |
|
4085d8bad777532784546b4043dfd175537a6085 | KETI-AIR/vqa | [
"license:apache-2.0",
"region:us"
] | 2022-11-10T09:58:58+00:00 | {"license": "apache-2.0"} | 2022-11-10T09:59:21+00:00 |
|
52c2eb978a809403513e188df36f895cc9067eaf | # Dataset Card for "mnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lucadiliello/mnli | [
"region:us"
] | 2022-11-10T10:07:25+00:00 | {"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 1869989, "num_examples": 9815}, {"name": "dev_mismatched", "num_bytes": 1985345, "num_examples": 9832}, {"name": "test_matched", "num_bytes": 1884664, "num_examples": 9796}, {"name": "test_mismatched", "num_bytes": 1986695, "num_examples": 9847}, {"name": "train", "num_bytes": 76786075, "num_examples": 392702}], "download_size": 54416761, "dataset_size": 84512768}} | 2022-11-10T10:08:49+00:00 |
57f637d30f7a4c5ff44ecd64a63763179bd824e5 | # Dataset Card for "dalio-handwritten-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-handwritten-io | [
"region:us"
] | 2022-11-10T11:38:04+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 14786, "num_examples": 10}, {"name": "train", "num_bytes": 186546, "num_examples": 156}, {"name": "validation", "num_bytes": 31729, "num_examples": 29}], "download_size": 114870, "dataset_size": 233061}} | 2022-11-10T11:41:00+00:00 |
b407d59e558e452bf6bc72f3365d4a622c7fe4f7 | # Dataset Card for "dalio-handwritten-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-handwritten-complete | [
"region:us"
] | 2022-11-10T11:38:28+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 11957, "num_examples": 10}, {"name": "train", "num_bytes": 80837, "num_examples": 55}, {"name": "validation", "num_bytes": 13340, "num_examples": 10}], "download_size": 79024, "dataset_size": 106134}} | 2022-11-10T11:41:36+00:00 |
248a2ed0252e2ff647f27fe49276a697a9c583ab | # Dataset Card for "dalio-synthetic-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-synthetic-io | [
"region:us"
] | 2022-11-10T11:43:41+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 34283, "num_examples": 19}, {"name": "train", "num_bytes": 483245, "num_examples": 303}, {"name": "validation", "num_bytes": 84125, "num_examples": 57}], "download_size": 299043, "dataset_size": 601653}} | 2022-11-10T11:44:04+00:00 |
0ee966aee92c0ceb06da61cb67cb0b8a5261785d | # Dataset Card for "dalio-synthetic-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-synthetic-complete | [
"region:us"
] | 2022-11-10T11:44:06+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 24972, "num_examples": 19}, {"name": "train", "num_bytes": 209033, "num_examples": 118}, {"name": "validation", "num_bytes": 48527, "num_examples": 22}], "download_size": 165396, "dataset_size": 282532}} | 2022-11-10T11:44:30+00:00 |
a6415c44a59cc8dcfbf1aa722cc45c8a87e2819c | # Dataset Card for "dalio-all-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-all-io | [
"region:us"
] | 2022-11-10T11:44:43+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 40070, "num_examples": 29}, {"name": "train", "num_bytes": 676060, "num_examples": 459}, {"name": "validation", "num_bytes": 118584, "num_examples": 86}], "download_size": 399681, "dataset_size": 834714}} | 2022-11-10T11:45:09+00:00 |
b6c482ef27596ffcd34956b45eedf37b1ccfc5cb | # Dataset Card for "dalio-all-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-all-complete | [
"region:us"
] | 2022-11-10T11:45:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 28784, "num_examples": 29}, {"name": "train", "num_bytes": 302691, "num_examples": 173}, {"name": "validation", "num_bytes": 54939, "num_examples": 33}], "download_size": 210354, "dataset_size": 386414}} | 2022-11-10T11:45:33+00:00 |