sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
ee4b72801a94f7bd65c7038592985c747374f513
|
# Dataset Card for "massive_takeaway"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_takeaway
|
[
"region:us"
] |
2023-02-08T11:15:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14816, "num_examples": 257}, {"name": "validation", "num_bytes": 2450, "num_examples": 44}, {"name": "test", "num_bytes": 3176, "num_examples": 57}], "download_size": 14963, "dataset_size": 20442}}
|
2023-02-08T12:25:09+00:00
|
2246e56eacbaa04bae156064ea64af5f4211f7b6
|
# Dataset Card for "massive_music"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_music
|
[
"region:us"
] |
2023-02-08T11:16:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16865, "num_examples": 332}, {"name": "validation", "num_bytes": 2899, "num_examples": 56}, {"name": "test", "num_bytes": 4123, "num_examples": 81}], "download_size": 16262, "dataset_size": 23887}}
|
2023-02-08T12:25:29+00:00
|
20ff832d5a0a6d084e7a4c89145b58bce67cbdef
|
# Dataset Card for "massive_alarm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_alarm
|
[
"region:us"
] |
2023-02-08T11:16:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20844, "num_examples": 390}, {"name": "validation", "num_bytes": 3251, "num_examples": 64}, {"name": "test", "num_bytes": 4818, "num_examples": 96}], "download_size": 17873, "dataset_size": 28913}}
|
2023-02-08T12:25:50+00:00
|
d45013e84342c3e504103232da6c425b7db141f2
|
# Dataset Card for "massive_weather"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_weather
|
[
"region:us"
] |
2023-02-08T11:16:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30514, "num_examples": 573}, {"name": "validation", "num_bytes": 6972, "num_examples": 126}, {"name": "test", "num_bytes": 8504, "num_examples": 156}], "download_size": 25707, "dataset_size": 45990}}
|
2023-02-08T12:26:11+00:00
|
1ad50a2f9e6c7c84b6f01efeaf4ecf4af769c5c8
|
# Dataset Card for "massive_social-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_social-de
|
[
"region:us"
] |
2023-02-08T11:23:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28548, "num_examples": 391}, {"name": "validation", "num_bytes": 4886, "num_examples": 68}, {"name": "test", "num_bytes": 7331, "num_examples": 106}], "download_size": 25046, "dataset_size": 40765}}
|
2023-02-08T12:28:40+00:00
|
cb5e422bf8b0e8cb05f0c8d675be566066dc0de9
|
# Dataset Card for "massive_transport-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_transport-de
|
[
"region:us"
] |
2023-02-08T11:23:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36885, "num_examples": 571}, {"name": "validation", "num_bytes": 7175, "num_examples": 110}, {"name": "test", "num_bytes": 7787, "num_examples": 124}], "download_size": 28802, "dataset_size": 51847}}
|
2023-02-08T12:29:02+00:00
|
4bc695d3f616b8696e65f8700209ba79bf58474a
|
# Dataset Card for "massive_calendar-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_calendar-de
|
[
"region:us"
] |
2023-02-08T11:24:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113565, "num_examples": 1688}, {"name": "validation", "num_bytes": 18678, "num_examples": 280}, {"name": "test", "num_bytes": 27631, "num_examples": 402}], "download_size": 79139, "dataset_size": 159874}}
|
2023-02-08T12:29:23+00:00
|
604f094c08dfd9a455acd3f1f7bd2158187cdd30
|
# Dataset Card for "massive_play-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_play-de
|
[
"region:us"
] |
2023-02-08T11:24:21+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 76088, "num_examples": 1377}, {"name": "validation", "num_bytes": 14011, "num_examples": 260}, {"name": "test", "num_bytes": 21427, "num_examples": 387}], "download_size": 60317, "dataset_size": 111526}}
|
2023-02-08T12:29:43+00:00
|
b906d0fa137c8018568bdd3e14ef932c361a7bb9
|
# Dataset Card for "massive_news-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_news-de
|
[
"region:us"
] |
2023-02-08T11:24:39+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30815, "num_examples": 503}, {"name": "validation", "num_bytes": 5434, "num_examples": 82}, {"name": "test", "num_bytes": 7882, "num_examples": 124}], "download_size": 25144, "dataset_size": 44131}}
|
2023-02-08T12:30:05+00:00
|
c72a2665a3fd8b5c40684fa526da54ecc373d471
|
# Dataset Card for "massive_datetime-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_datetime-de
|
[
"region:us"
] |
2023-02-08T11:24:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23001, "num_examples": 402}, {"name": "validation", "num_bytes": 4253, "num_examples": 73}, {"name": "test", "num_bytes": 5966, "num_examples": 103}], "download_size": 19657, "dataset_size": 33220}}
|
2023-02-08T12:30:27+00:00
|
11aed97fab1a7b8e281e504a7b3c330cba96c947
|
# Dataset Card for "massive_recommendation-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_recommendation-de
|
[
"region:us"
] |
2023-02-08T11:25:13+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28186, "num_examples": 433}, {"name": "validation", "num_bytes": 4608, "num_examples": 69}, {"name": "test", "num_bytes": 6729, "num_examples": 94}], "download_size": 23393, "dataset_size": 39523}}
|
2023-02-08T12:30:48+00:00
|
1d07f0e56d69dcce4044d56a86098a5cc4f1ebb0
|
# Dataset Card for "massive_email-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_email-de
|
[
"region:us"
] |
2023-02-08T11:25:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 61477, "num_examples": 953}, {"name": "validation", "num_bytes": 10136, "num_examples": 157}, {"name": "test", "num_bytes": 17478, "num_examples": 271}], "download_size": 46681, "dataset_size": 89091}}
|
2023-02-08T12:31:09+00:00
|
922cd87c8e22a18b632b93de8952782468cc26b4
|
# Dataset Card for "massive_iot-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_iot-de
|
[
"region:us"
] |
2023-02-08T11:25:46+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41922, "num_examples": 769}, {"name": "validation", "num_bytes": 6206, "num_examples": 118}, {"name": "test", "num_bytes": 11808, "num_examples": 220}], "download_size": 31758, "dataset_size": 59936}}
|
2023-02-08T12:31:31+00:00
|
448bcd3087873ad011f7edb1dee8b8db89a4d9b4
|
# Dataset Card for "massive_general-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_general-de
|
[
"region:us"
] |
2023-02-08T11:26:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38325, "num_examples": 652}, {"name": "validation", "num_bytes": 6823, "num_examples": 122}, {"name": "test", "num_bytes": 10941, "num_examples": 189}], "download_size": 36124, "dataset_size": 56089}}
|
2023-02-08T12:31:52+00:00
|
cf5039d6410af0063c6b900d496c0e3099028c9b
|
# Dataset Card for "massive_audio-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_audio-de
|
[
"region:us"
] |
2023-02-08T11:26:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14612, "num_examples": 290}, {"name": "validation", "num_bytes": 1763, "num_examples": 35}, {"name": "test", "num_bytes": 2978, "num_examples": 62}], "download_size": 13084, "dataset_size": 19353}}
|
2023-02-08T12:32:13+00:00
|
c3933c7b2c417a8d47aa3e7eed31636d1b361198
|
# Dataset Card for "massive_lists-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_lists-de
|
[
"region:us"
] |
2023-02-08T11:26:35+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31261, "num_examples": 539}, {"name": "validation", "num_bytes": 6519, "num_examples": 112}, {"name": "test", "num_bytes": 8119, "num_examples": 142}], "download_size": 25338, "dataset_size": 45899}}
|
2023-02-08T12:32:34+00:00
|
106b0f62240c21560a00d7207696bb08a642cf8b
|
# Dataset Card for "massive_qa-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_qa-de
|
[
"region:us"
] |
2023-02-08T11:26:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 67774, "num_examples": 1183}, {"name": "validation", "num_bytes": 12266, "num_examples": 214}, {"name": "test", "num_bytes": 16558, "num_examples": 288}], "download_size": 57078, "dataset_size": 96598}}
|
2023-02-08T12:32:56+00:00
|
b8f19abc3131b6b7348969c1fb7cd9cb3cbf052d
|
# Dataset Card for "massive_cooking-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_cooking-de
|
[
"region:us"
] |
2023-02-08T11:27:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12678, "num_examples": 211}, {"name": "validation", "num_bytes": 2458, "num_examples": 43}, {"name": "test", "num_bytes": 4315, "num_examples": 72}], "download_size": 14356, "dataset_size": 19451}}
|
2023-02-08T12:33:16+00:00
|
864e74ea05a3a8c611d7a1dd6b0215d836224759
|
# Dataset Card for "massive_takeaway-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_takeaway-de
|
[
"region:us"
] |
2023-02-08T11:27:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16746, "num_examples": 257}, {"name": "validation", "num_bytes": 2767, "num_examples": 44}, {"name": "test", "num_bytes": 3656, "num_examples": 57}], "download_size": 16262, "dataset_size": 23169}}
|
2023-02-08T12:33:37+00:00
|
176d7341ffaabb6dc02742f315d6e5968b59c7ee
|
# Dataset Card for "massive_music-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_music-de
|
[
"region:us"
] |
2023-02-08T11:27:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18149, "num_examples": 332}, {"name": "validation", "num_bytes": 3198, "num_examples": 56}, {"name": "test", "num_bytes": 4440, "num_examples": 81}], "download_size": 17641, "dataset_size": 25787}}
|
2023-02-08T12:33:58+00:00
|
cd1f0486e097fa29eacc2cc281c21d5cdb42da92
|
# Dataset Card for "massive_alarm-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_alarm-de
|
[
"region:us"
] |
2023-02-08T11:27:59+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24135, "num_examples": 390}, {"name": "validation", "num_bytes": 3700, "num_examples": 64}, {"name": "test", "num_bytes": 5727, "num_examples": 96}], "download_size": 19133, "dataset_size": 33562}}
|
2023-02-08T12:34:19+00:00
|
650164d7429c3260b92077703a90f84b2f85b756
|
# Dataset Card for "massive_weather-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fathyshalab/massive_weather-de
|
[
"region:us"
] |
2023-02-08T11:28:15+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31902, "num_examples": 573}, {"name": "validation", "num_bytes": 7264, "num_examples": 126}, {"name": "test", "num_bytes": 8886, "num_examples": 156}], "download_size": 25436, "dataset_size": 48052}}
|
2023-02-08T12:34:40+00:00
|
5990420a3ef72ae59e1afdf49ccae125fb51db11
|
# Dataset Card for "sq-anli_a1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-anli_a1
|
[
"language:sq",
"region:us"
] |
2023-02-08T11:48:40+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "train", "num_bytes": 5975530, "num_examples": 16946}, {"name": "validation", "num_bytes": 50063, "num_examples": 144}, {"name": "test", "num_bytes": 51311, "num_examples": 144}], "download_size": 2167104, "dataset_size": 6076904}}
|
2023-02-18T19:58:41+00:00
|
94248bf8d553b31edcc8e6ede37eebe07ae69b6f
|
# Dataset Card for "sq-anli_a2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-anli_a2
|
[
"language:sq",
"region:us"
] |
2023-02-08T11:48:55+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "train", "num_bytes": 10416951, "num_examples": 30000}, {"name": "validation", "num_bytes": 49978, "num_examples": 144}, {"name": "test", "num_bytes": 51667, "num_examples": 144}], "download_size": 5905662, "dataset_size": 10518596}}
|
2023-02-18T19:58:51+00:00
|
7a8916050a085c8b110b97b54191dffd10616542
|
# Dataset Card for "squad-v1.1-t5-question-generation"
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Paper:** [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### Dataset Summary
This is a modified version of the Stanford Question Answering Dataset (SQuAD), adapted for question generation with All Questions in One Line (AQOL) as in [Transformer-based End-to-End Question Generation](https://arxiv.org/pdf/2005.01107v1.pdf),
specifically for the T5 family of models. The prefix is `generate questions: ` so that the task can be made unique to a trained model.
Check out the generation notebook [here](https://nbviewer.org/urls/huggingface.co/datasets/derek-thomas/squad-v1.1-t5-question-generation/resolve/main/Squad_V1_Question_Generation.ipynb).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
## Dataset Structure
### Data Instances
#### plain_text
An example of 'train' looks as follows.
```
{
"context": "generate questions: This is a test context.",
"question": "Is this a test? {sep_token} Is this another Test {sep_token}"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `context`: a `string` feature.
- `question`: a `string` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|18896| 2067|
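A minimal usage sketch: load the dataset with `datasets` and split the AQOL-formatted target string back into individual questions. The column name `questions` follows the dataset metadata; the concrete separator string is an assumption, since the example above only shows a `{sep_token}` placeholder whose value depends on the tokenizer used at training time.
```python
from datasets import load_dataset

# Load the train split and recover individual questions from the AQOL target string.
ds = load_dataset("derek-thomas/squad-v1.1-t5-question-generation", split="train")

example = ds[0]
print(example["context"])  # begins with the "generate questions: " prefix

sep_token = "<sep>"  # hypothetical; substitute the separator actually used at training time
questions = [q.strip() for q in example["questions"].split(sep_token) if q.strip()]
print(questions)
```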
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) and [Thomas Simonini](https://huggingface.co/ThomasSimonini) for adding this to the Hub.
Check out: [How to contribute more](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Visitors
[](https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fderek-thomas%2Fsquad-v1.1-t5-question-generation)
|
derek-thomas/squad-v1.1-t5-question-generation
|
[
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|squad",
"language:en",
"license:cc-by-4.0",
"questiongeneration",
"question-generation",
"text2text-generation",
"arxiv:1606.05250",
"arxiv:2005.01107",
"region:us"
] |
2023-02-08T12:10:34+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|squad"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "Question Generation for T5 based on Squad V1.1", "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "questions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20293805, "num_examples": 18896}, {"name": "validation", "num_bytes": 2376313, "num_examples": 2067}], "download_size": 12600387, "dataset_size": 22670118}, "tags": ["questiongeneration", "question-generation", "text2text-generation"]}
|
2023-03-09T13:50:46+00:00
|
2ccd94cb1d2ff2915c4a258ee612abb4e8bf8392
|
# Funny Answers Dataset
This dataset of funny answers is assembled from ideas contributed by other people (who message me if they don't have a Hugging Face account) and by me. Collection began on February 8, 2023.
## Data
The JSON file contains a `data` list where each entry has `message` (the message), `response` (the response), and `type` (the type).
Here is an example of the data:
```json
{
"data":
[
{"message": "Дано: Архимед упал в говно.", "response": "Найти: Выталкивающую силу.", "type": "w"},
{"message": "Как дела?", "response": "Всё было нормально, пока Вася выёживаться не стал)", "type": "n"},
{"message": "Что ты можешь сказать о сне?", "response": "Я так долго тренировался спать что могу делать это с закрытыми глазами.", "type": "n"},
...
]
}
...
```
## List of message-response types:
- "n" Neutral, no insults
- "a" Aggressive/toxic response
- "w" Contains words that are not always acceptable or are offensive.
- "s" Contains profanity (either the response or the request contains at least one swear word).
- "p" Pessimistic responses, with low self-esteem or suicidal thoughts (e.g. "It doesn't work" -> "Unemployed, just like me").
- "u" Unsafe responses, jokingly suggesting something prohibited (for example, alcohol)
|
MoyAI/Funniest-answers
|
[
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:text-classification",
"size_categories:n<1K",
"language:ru",
"region:us"
] |
2023-02-08T12:15:42+00:00
|
{"language": ["ru"], "size_categories": ["n<1K"], "task_categories": ["conversational", "text-generation", "text2text-generation", "text-classification"], "pretty_name": "Funny-responses"}
|
2023-12-25T14:56:56+00:00
|
e783a3a1d438ba4f3509fdab98a8cae5a9cbcc00
|
Images used to train my [phantom diffusion s3 the last 8](https://huggingface.co/Phantom-Artist/phantom-diffusion-s3-the-last-8) series.
Since they are all AI-generated images that are public domain under US law, I claim it is legal to redistribute them as public domain.
However, they might be under copyright in your or their original country.
Still, many countries, including Japan, allow such images to be used for training an AI under their copyright law, and because all the artists here are from Japan, I assume it should be allowed to reuse them for training globally.
|
Phantom-Artist/phantom-diffusion-s3-the-last-8-dataset
|
[
"license:cc0-1.0",
"region:us"
] |
2023-02-08T12:21:23+00:00
|
{"license": "cc0-1.0"}
|
2023-02-08T12:27:04+00:00
|
9707dde47e3c7d4c695de87947c2ce82dc931d9d
|
# Dataset Card for "spectro_caption_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gwkim22/spectro_caption_dataset
|
[
"region:us"
] |
2023-02-08T12:27:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3156701860.889, "num_examples": 37561}], "download_size": 3296229917, "dataset_size": 3156701860.889}}
|
2023-02-08T12:33:57+00:00
|
b3afdc02ba50189413a6c516c57f37ecc0e31452
|
# Dataset Card for "dummy_image_class_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
patrickvonplaten/dummy_image_class_data
|
[
"region:us"
] |
2023-02-08T12:27:32+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "resize"}}}}], "splits": [{"name": "train", "num_bytes": 1947037.0, "num_examples": 20}], "download_size": 1947666, "dataset_size": 1947037.0}}
|
2023-02-08T12:27:39+00:00
|
7ae5524d1e3fcd852081b54049a96df186a7da7e
|
# Dataset Card for "dummy_image_class_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hf-internal-testing/dummy_image_class_data
|
[
"region:us"
] |
2023-02-08T12:28:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "resize"}}}}], "splits": [{"name": "train", "num_bytes": 555953.0, "num_examples": 6}], "download_size": 556964, "dataset_size": 555953.0}}
|
2023-02-08T12:28:38+00:00
|
e884bd5de7af351da6bf7b4f63667745d0cc4c4e
|
gaussian01/ddpm-butterflies-128
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-08T12:31:35+00:00
|
{"license": "apache-2.0"}
|
2023-02-08T12:31:35+00:00
|
|
362fa6fccabdacaeb20fee2b13efedea61141b66
|
(Almost) all aerial videos of Zelenograd up to 2023. I don't hold the rights to these videos; all of them were downloaded from YouTube. If you are the owner of any of these videos and don't want them to be here, please contact me at [email protected]
|
4eJIoBek/Zelenograd-aerial-videos
|
[
"license:wtfpl",
"region:us"
] |
2023-02-08T12:34:30+00:00
|
{"license": "wtfpl"}
|
2023-11-24T21:17:58+00:00
|
24428867e1ca937b2513fbb4105c8fbee4ec76d9
|
# Dataset Card for "fnite_tf_combo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
olly4/fnite_tf_combo
|
[
"region:us"
] |
2023-02-08T12:36:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 397302067.52, "num_examples": 3104}], "download_size": 486452911, "dataset_size": 397302067.52}}
|
2023-02-08T16:10:05+00:00
|
15127aa41c6f7d4462fcf085f142537a6a97d30d
|
# Dataset Card for "small-coco-wm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RIW/small-coco-wm
|
[
"region:us"
] |
2023-02-08T13:26:46+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "key", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "error_message", "dtype": "null"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "exif", "dtype": "string"}, {"name": "sha256", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 881124899.597, "num_examples": 8879}, {"name": "test", "num_bytes": 1728419997.344, "num_examples": 19769}, {"name": "validation", "num_bytes": 854191310.724, "num_examples": 8836}], "download_size": 1933564702, "dataset_size": 3463736207.665}}
|
2023-02-11T15:59:56+00:00
|
1222e093b5f32dc9f62fafd54af16d2077893442
|
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ell-hol/mT5-OrangeSum
* Dataset: orange_sum
* Config: abstract
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ell-hol](https://huggingface.co/ell-hol) for evaluating this model.
|
autoevaluate/autoeval-eval-orange_sum-abstract-68b9ca-3347592353
|
[
"autotrain",
"evaluation",
"region:us"
] |
2023-02-08T14:10:05+00:00
|
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["orange_sum"], "eval_info": {"task": "summarization", "model": "ell-hol/mT5-OrangeSum", "metrics": ["bertscore", "sacrebleu"], "dataset_name": "orange_sum", "dataset_config": "abstract", "dataset_split": "validation", "col_mapping": {"text": "text", "target": "summary"}}}
|
2023-02-08T14:17:47+00:00
|
3c21edb36e8b6aeb53cfe2b20979213af11a98e3
|
bbbh/a
|
[
"license:other",
"region:us"
] |
2023-02-08T14:45:11+00:00
|
{"license": "other"}
|
2023-02-11T17:21:11+00:00
|
|
8fe9fc8d2e5ce77f7d859e2323c2d104b9eea724
|
# Dataset Card for Model Card Dataset Mentions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
librarian-bots/model_card_dataset_mentions
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"model cards",
"metadata",
"region:us"
] |
2023-02-08T14:45:16+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "Model Card Dataset Mentions", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dataset_mention", "1": "no_dataset_mention"}}}}], "splits": [{"name": "train", "num_bytes": 58112, "num_examples": 297}], "download_size": 19321, "dataset_size": 58112}, "tags": ["model cards", "metadata"]}
|
2023-06-30T14:09:18+00:00
|
924631d5913eefc28ce772866b43884ad161d061
|
# Dataset Card for "instruction-pilot-outputs-sampling"
This dataset contains model outputs generated from the human demonstrations provided in [`HuggingFaceH4/instruction-pilot-prompts`](https://huggingface.co/datasets/HuggingFaceH4/instruction-pilot-prompts).
To convert each language model into a dialogue agent, we shortened [Anthropic's HHH prompt](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11/#file-hhh_prompt-txt) and prepended this to each sample provided to the models:
```
Below is a friendly conversation between a human and an AI assistant. \
The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. \
The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. \
It also tries to avoid giving false or misleading information, and it caveats when it isn’t entirely sure about the right answer. \
That said, the assistant is practical and really does its best, and doesn’t let caution get too much in the way of being useful.
Human: {input}
AI:
```
The reason for shortening the HHH prompt is that it is over 6,000 tokens long, which far exceeds the maximum context size of most open-source language models. For example, Flan-T5 only has a context window of 512 tokens. It is likely that better outputs could be produced for language models with larger context windows, where some dialogue examples could be included in the prompt.
To generate diverse outputs from each model, we used nucleus sampling with `temperature=0` and `top_p=0.9` and set `max_new_tokens=100` (which is about the mean length of the Self-Instruct outputs). For each example, 8 generations were produced per model.
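Below is a rough sketch of this generation setup (not the exact code used to build the dataset), assuming a `transformers` text-generation pipeline; the model name is a placeholder and the dialogue prompt is abbreviated (see the full prompt above).
```python
from transformers import pipeline

# Abbreviated dialogue prompt; the full shortened HHH prompt is shown above.
PROMPT = (
    "Below is a friendly conversation between a human and an AI assistant. ...\n\n"
    "Human: {input}\nAI:"
)

generator = pipeline("text-generation", model="gpt2")  # placeholder model

outputs = generator(
    PROMPT.format(input="Write a short poem about the sea."),
    do_sample=True,          # nucleus sampling with top_p=0.9, as described above
    top_p=0.9,
    max_new_tokens=100,
    num_return_sequences=8,  # 8 generations per example
)
for out in outputs:
    print(out["generated_text"])
```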
|
HuggingFaceH4/instruction-pilot-outputs-sampling
|
[
"region:us"
] |
2023-02-08T15:02:22+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "outputs", "list": [{"name": "model", "dtype": "string"}, {"name": "outputs", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 1347447, "num_examples": 375}], "download_size": 430865, "dataset_size": 1347447}}
|
2023-02-09T21:04:38+00:00
|
d9d27ef9f38357dee815d7f649a3bebcaf4ceea7
|
# Dataset Card for "davinci-pairwise-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/davinci-pairwise-tokenized
|
[
"region:us"
] |
2023-02-08T15:04:52+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2530035200, "num_examples": 64759}, {"name": "test", "num_bytes": 36178476, "num_examples": 7195}], "download_size": 0, "dataset_size": 2566213676}}
|
2023-02-08T15:25:42+00:00
|
e36cf31a987ea72d9ddb93a03876070770254809
|
# Dataset Card for "davinci-pairwise-medium"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/davinci-pairwise-medium
|
[
"region:us"
] |
2023-02-08T15:28:28+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2530035200, "num_examples": 64759}, {"name": "test", "num_bytes": 36178476, "num_examples": 7195}], "download_size": 848422865, "dataset_size": 2566213676}}
|
2023-02-08T15:29:49+00:00
|
1ed2d6af25bf41efcbb308d6ac8af7f5a6668d00
|
# Dataset Card for "IEMOCAP_Text"
This dataset was obtained from the IEMOCAP dataset. For more information, see the [IEMOCAP](https://sail.usc.edu/iemocap/) webpage.
It contains the 5 most common classes: angry, happy, excitement, neutral, and sad. Following articles in this field, we merged the excitement and happy classes.
The dataset contains 5531 utterances and is split by session.
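A minimal sketch of a session-based split, assuming the session split names listed in the dataset metadata; holding out one session and training on the rest is a common leave-one-session-out protocol for IEMOCAP.
```python
from datasets import load_dataset, concatenate_datasets

# Hold out one session for evaluation and train on the remaining four.
iemocap = load_dataset("Zahra99/IEMOCAP_Text")

test_session = "session5"
train_ds = concatenate_datasets([iemocap[s] for s in iemocap if s != test_session])
test_ds = iemocap[test_session]

print(len(train_ds), len(test_ds))
```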
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zahra99/IEMOCAP_Text
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"Emotion recognition",
"Text classification",
"region:us"
] |
2023-02-08T15:48:21+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ang", "1": "hap", "2": "neu", "3": "sad"}}}}], "splits": [{"name": "session1", "num_bytes": 71932, "num_examples": 1085}, {"name": "session2", "num_bytes": 79012, "num_examples": 1023}, {"name": "session3", "num_bytes": 74980, "num_examples": 1151}, {"name": "session4", "num_bytes": 72622, "num_examples": 1031}, {"name": "session5", "num_bytes": 89524, "num_examples": 1241}], "download_size": 215486, "dataset_size": 388070}, "tags": ["Emotion recognition", "Text classification"]}
|
2023-02-12T13:02:36+00:00
|
4f8539a397ecc0d7185bf941bc1bb7238abc3648
|
# Dataset Card for "IEMOCAP_Audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zahra99/IEMOCAP_Audio
|
[
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] |
2023-02-08T15:58:57+00:00
|
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ang", "1": "hap", "2": "neu", "3": "sad"}}}}], "splits": [{"name": "session1", "num_bytes": 163119572.96, "num_examples": 1085}, {"name": "session2", "num_bytes": 152984371.952, "num_examples": 1023}, {"name": "session3", "num_bytes": 166040715.472, "num_examples": 1151}, {"name": "session4", "num_bytes": 144715722.648, "num_examples": 1031}, {"name": "session5", "num_bytes": 183486561.006, "num_examples": 1241}], "download_size": 174395249, "dataset_size": 810346944.0380001}}
|
2024-02-05T03:16:33+00:00
|
fac035351a76dc75ec5ae7a64984571fcd6179db
|
# Dataset Card for "Hatefulmemes_test_text_davinci_002_Hatefulmemes_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Hatefulmemes_test_text_davinci_002_Hatefulmemes_ns_100
|
[
"region:us"
] |
2023-02-08T16:16:00+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_10", "num_bytes": 29959416.0, "num_examples": 100}], "download_size": 29638886, "dataset_size": 29959416.0}}
|
2023-02-08T16:16:04+00:00
|
5128261682251c5eed78cab2076e0193c6df43e8
|
# Dataset Card for "Hatefulmemes_test_text_davinci_002_Hatefulmemes_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Hatefulmemes_test_text_davinci_002_Hatefulmemes_ns_1000
|
[
"region:us"
] |
2023-02-08T16:23:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_10", "num_bytes": 366795867.0, "num_examples": 1000}, {"name": "fewshot_15", "num_bytes": 369012572.0, "num_examples": 1000}], "download_size": 727994919, "dataset_size": 735808439.0}}
|
2023-02-08T16:30:05+00:00
|
1bd6a8e8f75924d1f5c3cbfb08513798ea13b0f9
|
# Dataset Card for "synthetic-instruct-gptj-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/synthetic-instruct-gptj-pairwise
|
[
"region:us"
] |
2023-02-08T16:39:22+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30204558, "num_examples": 31486}, {"name": "test", "num_bytes": 1604422, "num_examples": 1657}], "download_size": 18522018, "dataset_size": 31808980}}
|
2023-02-08T21:26:09+00:00
|
9fa2d9f500efd41ec609363e4bffb9b5f1e9a4ba
|
# Dataset Card for "hh-rlhf-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/hh-rlhf-pairwise
|
[
"region:us"
] |
2023-02-08T16:49:45+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88512105, "num_examples": 76256}, {"name": "test", "num_bytes": 5957760, "num_examples": 5103}], "download_size": 56768654, "dataset_size": 94469865}}
|
2023-02-08T21:30:13+00:00
|
604d257f57c0ca33a213054c14caf740ed1da9b8
|
SummerSigh/PolicyData
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-08T17:25:48+00:00
|
{"license": "apache-2.0"}
|
2023-02-08T17:43:58+00:00
|
|
ad1c63ae80d95060b5cf4334c6f7b0e90982ff79
|
gustproof/shiny-cards-produce
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-02-08T17:39:25+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-02-08T17:48:18+00:00
|
|
2805a44143714c0461b0d8d7ece4c630d1a308f8
|
# Dataset Card for DDisco
## Dataset Description
The DDisco dataset can be used to train models to classify levels of coherence in _Danish_ discourse. Each entry in the dataset is annotated with a discourse coherence label (a rating from 1 to 3):
1. low coherence (difficult to understand, unorganized, contains unnecessary details and cannot be summarized briefly and easily)
2. medium coherence
3. high coherence (easy to understand, well organized, contains only details that support the main point and can be summarized briefly and easily)
Grammatical and typing errors are ignored (i.e. they do not affect the coherence score), and the coherence of a text is considered within its own domain.
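A minimal loading sketch; column names (`text`, `domain`, `rating`) follow the dataset metadata.
```python
from datasets import load_dataset

# Load DDisco and inspect one annotated example.
ddisco = load_dataset("alexandrainst/ddisco")

sample = ddisco["train"][0]
print(sample["domain"], sample["rating"])  # rating: 1 (low) to 3 (high)
print(sample["text"][:200])
```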
### Additional Information
[DDisCo: A Discourse Coherence Dataset for Danish](https://aclanthology.org/2022.lrec-1.260.pdf)
### Contributions
[@ajders](https://github.com/ajders)
|
alexandrainst/ddisco
|
[
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:da",
"license:afl-3.0",
"discourse",
"coherence",
"region:us"
] |
2023-02-08T18:05:24+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["da"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "DDisco", "tags": ["discourse", "coherence"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "rating", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 815571, "num_examples": 801}, {"name": "test", "num_bytes": 209297, "num_examples": 201}], "download_size": 672202, "dataset_size": 1024868}}
|
2023-02-08T18:12:26+00:00
|
2537b681735c256a7ab829666b118e03c3eb9e0a
|
# IMDB Movie Reviews

This is a dataset for binary sentiment classification. It contains 50,000 highly polar movie reviews for training text classification models.
The dataset is downloaded from
https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
The data is processed and split into training and test sets (a 20% test split): the training set contains 40,000 reviews and the test set contains 10,000 reviews.
The labels are equally distributed in both sets: the training set has 20,000 records for each of the positive and negative classes, and the test set has 5,000 records for each label.
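A minimal loading sketch; the card does not state the split or column names, so the schema is printed rather than assumed.
```python
from datasets import load_dataset

# Load the dataset and check the split sizes described above.
ds = load_dataset("ajaykarthick/imdb-movie-reviews")

print(ds)  # shows the available splits and their columns
for split in ds:
    print(split, len(ds[split]))
```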
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
|
ajaykarthick/imdb-movie-reviews
|
[
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"region:us"
] |
2023-02-08T18:30:11+00:00
|
{"size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "token-classification", "feature-extraction"], "pretty_name": "Movie-Reviews"}
|
2023-02-08T21:08:35+00:00
|
b987397400508e890b7209226da81c693dbb9306
|
# Dataset Card for "big-animal-dataset"
Hi! I combined the Animals-10 dataset, the Oxford Pets dataset, the Stanford Dogs dataset, and the Cats vs. Dogs dataset into one large animal dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Isamu136/big-animal-dataset
|
[
"region:us"
] |
2023-02-08T18:32:58+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1198940745.5549998, "num_examples": 62149}], "download_size": 0, "dataset_size": 1198940745.5549998}}
|
2023-02-08T21:02:10+00:00
|
4db3187ae82f0faeed9f07b5dc4628bca09c7915
|
RYOIKITENKAI02/app
|
[
"license:openrail",
"region:us"
] |
2023-02-08T18:38:25+00:00
|
{"license": "openrail"}
|
2023-02-08T18:38:25+00:00
|
|
9a8c6e7338ecc5f9a6b51e664194456dbb0eb8b2
|
# Dataset Card for "wikipedia-20220301.en-block-size-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
carlosejimenez/wikipedia-20220301.en-block-size-1024
|
[
"region:us"
] |
2023-02-08T19:44:24+00:00
|
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 301864191, "num_examples": 21817}, {"name": "train", "num_bytes": 60558566627, "num_examples": 4368542}], "download_size": 20321590769, "dataset_size": 60860430818}}
|
2023-02-13T19:43:51+00:00
|
052f26ea41986a0bcc6fac0f8ca05c3f5c5c7068
|
# Dataset for project: food-category-classification-v2.0
## Dataset Description
This dataset for the food-category-classification-v2.0 project was scraped with the help of a bulk Google image downloader.
## Dataset Structure
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Bread', 'Dairy', 'Dessert', 'Egg', 'Fried Food', 'Fruit', 'Meat', 'Noodles', 'Rice', 'Seafood', 'Soup', 'Vegetable'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
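A minimal loading sketch, assuming the split names from the table above and the fields listed earlier.
```python
from datasets import load_dataset

# Load the dataset and map a label id back to its class name.
food = load_dataset("Kaludi/food-category-classification-v2.0")

target = food["train"].features["target"]  # ClassLabel with the 12 categories above
sample = food["train"][0]
print(target.int2str(sample["target"]), sample["image"].size)
```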
|
Kaludi/food-category-classification-v2.0
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-08T19:46:45+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-09T19:38:17+00:00
|
364bf172c41a7ecefcd951d8fd9d9853882a5f23
|
# Dataset Card for "knots_AF_no_fragments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/knots_AF_no_fragments
|
[
"region:us"
] |
2023-02-08T19:52:29+00:00
|
{"dataset_info": {"features": [{"name": "ID", "dtype": "string"}, {"name": "latestVersion", "dtype": "int64"}, {"name": "globalMetricValue", "dtype": "float64"}, {"name": "uniprotStart", "dtype": "int64"}, {"name": "uniprotEnd", "dtype": "int64"}, {"name": "uniprotSequence", "dtype": "string"}, {"name": "Length", "dtype": "float64"}, {"name": "Domain_architecture", "dtype": "string"}, {"name": "InterPro", "dtype": "string"}, {"name": "Max_Topology", "dtype": "string"}, {"name": "Max Freq", "dtype": "float64"}, {"name": "Knot Core", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "FamilyName", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79153504.0, "num_examples": 136032}, {"name": "test", "num_bytes": 19788376.0, "num_examples": 34008}], "download_size": 79176508, "dataset_size": 98941880.0}}
|
2023-02-08T19:52:53+00:00
|
156e8fd397b70d35a96fe05721a463d265e69f4e
|
# Lila: A Unified Benchmark for Mathematical Reasoning
## Dataset Description
- **Homepage:** https://lila.apps.allenai.org/
- **Repository:** [allenai/lila](https://github.com/allenai/lila)
- **Paper:** [LILA: A Unified Benchmark for Mathematical Reasoning](https://aclanthology.org/2022.emnlp-main.392.pdf)
- **Point of Contact:** [Matthew Finlayson](https://mattf1n.github.io/), [Sean Welleck](https://wellecks.com/)
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
Cite this dataset and the source datasets (see [sources.bib](https://github.com/allenai/Lila/blob/main/sources.bib)).
```bib
@INPROCEEDINGS{Mishra2022Lila,
author = {
Swaroop Mishra
and Matthew Finlayson
and Pan Lu
and Leonard Tang
and Sean Welleck
and Chitta Baral
and Tanmay Rajpurohit
and Oyvind Tafjord
and Ashish Sabharwal
and Peter Clark
and Ashwin Kalyan},
title = {Lila: A Unified Benchmark for Mathematical Reasoning},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022}
}
```
|
allenai/lila
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-08T21:39:35+00:00
|
{"license": "cc-by-4.0"}
|
2023-03-15T18:36:28+00:00
|
ac21ada92f4cdfc0bd14a7d8206760766dc8c1d2
|
vaclavkosar/GLAMI-1M-test-only
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-08T21:49:13+00:00
|
{"license": "apache-2.0"}
|
2023-02-09T09:56:31+00:00
|
|
bace4f31335e6fd578adce5c71aa3f3fcabd5fa8
|
# Dataset Card for "davinci-pairwise-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/davinci-pairwise-filtered
|
[
"region:us"
] |
2023-02-08T21:53:36+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1517383530, "num_examples": 93540}, {"name": "test", "num_bytes": 123825205, "num_examples": 14391}], "download_size": 316920124, "dataset_size": 1641208735}}
|
2023-02-08T21:53:56+00:00
|
1cd13f88f02f1197783cdfbb6d81da9c3ca56b3a
|
HuggingFaceH4/instruction-pilot-outputs-filtered
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-08T22:00:39+00:00
|
{"license": "apache-2.0"}
|
2023-02-10T04:32:26+00:00
|
|
0fafedf499065581e10240291bab20dba8b2d33d
|
# Dataset Card for "combined-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/combined-pairwise
|
[
"region:us"
] |
2023-02-08T22:12:32+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1636100193, "num_examples": 201282}, {"name": "test", "num_bytes": 131387387, "num_examples": 21151}], "download_size": 453248009, "dataset_size": 1767487580}}
|
2023-02-08T22:12:56+00:00
|
9635c1dc32c9a45e70e4373cf82e0c2e8fbe12c3
|
# Dataset Card for "Europarl-ST"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mllp.upv.es/europarl-st/
- **Paper:** https://ieeexplore.ieee.org/document/9054626
- **Point of Contact:** https://www.mllp.upv.es/
### Dataset Summary
Europarl-ST is a Multilingual Speech Translation Corpus that contains paired audio-text samples for Speech Translation, constructed using the debates carried out in the European Parliament between 2008 and 2012.
### Languages
Spanish, German, English, French, Dutch, Polish, Portuguese, Romanian, Italian
## Dataset Structure
### Data Fields
- **original_speech:** The original speech that is heard in the recording.
- **original_language:** The language of the audio
- **audio_path:** Path to the audio file
- **segment_start:** Second in which the speech begins
- **segment_end:** Second in which the speech ends
- **transcriptions:** Dictionary containing transcriptions into different languages
### Data Splits
- **train split:** 116138 samples
- **valid split:** 17538 samples
- **test split:** 18901 samples
Train set (hours):
| src/tgt | en | fr | de | it | es | pt | pl | ro | nl |
|---------|----|----|----|----|----|----|----|----|----|
| en | - | 81 | 83 | 80 | 81 | 81 | 79 | 72 | 80 |
| fr | 32 | - | 21 | 20 | 21 | 22 | 20 | 18 | 22 |
| de | 30 | 18 | - | 17 | 18 | 18 | 17 | 17 | 18 |
| it | 37 | 21 | 21 | - | 21 | 21 | 21 | 19 | 20 |
| es | 22 | 14 | 14 | 14 | - | 14 | 13 | 12 | 13 |
| pt | 15 | 10 | 10 | 10 | 10 | - | 9 | 9 | 9 |
| pl | 28 | 18 | 18 | 17 | 18 | 18 | - | 16 | 18 |
| ro | 24 | 12 | 12 | 12 | 12 | 12 | 12 | - | 12 |
| nl | 7 | 5 | 5 | 4 | 5 | 4 | 4 | 4 | - |
Valid/Test sets are all between 3 and 6 hours.
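A minimal loading sketch; field names follow the dataset metadata.
```python
from datasets import load_dataset

# Load Europarl-ST and read the English transcription of a segment
# originally spoken in another language.
europarl = load_dataset("tj-solergibert/Europarl-ST", split="train")

sample = europarl[0]
print(sample["original_language"], sample["segment_start"], sample["segment_end"])
print(sample["transcriptions"]["en"])
```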
## Additional Information
### Licensing Information
* The work carried out for constructing the Europarl-ST corpus is released under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0)
* All rights of the data belong to the European Union and respective copyright holders.
### Citation Information
If you use the corpus in your research please cite the following reference:
@INPROCEEDINGS{jairsan2020a,
author={J. {Iranzo-Sánchez} and J. A. {Silvestre-Cerdà} and J. {Jorge} and N. {Roselló} and A. {Giménez} and A. {Sanchis} and J. {Civera} and A. {Juan}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Europarl-ST: A Multilingual Corpus for Speech Translation of Parliamentary Debates},
year={2020},
pages={8229-8233},}
|
tj-solergibert/Europarl-ST
|
[
"task_categories:translation",
"task_categories:text-to-speech",
"size_categories:100K<n<1M",
"language:es",
"language:de",
"language:en",
"language:fr",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:it",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-02-08T22:47:18+00:00
|
{"language": ["es", "de", "en", "fr", "nl", "pl", "pt", "ro", "it"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation", "text-to-speech"], "dataset_info": {"features": [{"name": "original_speech", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "audio_path", "dtype": "string"}, {"name": "segment_start", "dtype": "float32"}, {"name": "segment_end", "dtype": "float32"}, {"name": "transcriptions", "struct": [{"name": "de", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "es", "dtype": "string"}, {"name": "fr", "dtype": "string"}, {"name": "it", "dtype": "string"}, {"name": "nl", "dtype": "string"}, {"name": "pl", "dtype": "string"}, {"name": "pt", "dtype": "string"}, {"name": "ro", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 147857910, "num_examples": 116138}, {"name": "valid", "num_bytes": 21318985, "num_examples": 17538}, {"name": "test", "num_bytes": 22580968, "num_examples": 18901}], "download_size": 109205144, "dataset_size": 191757863}}
|
2023-02-09T10:22:06+00:00
|
45734a75ad88dd4fc40109e9d366a7fec6e51951
|
nateraw/fuego-20230208-175458-ff1b28
|
[
"fuego",
"region:us"
] |
2023-02-08T22:54:59+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230208-175458-ff1b28", "status": "preparing", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230208-175458-ff1b28", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-08T22:55:02+00:00
|
|
6fa65c950053abef18cb800b795da47808f68402
|
nateraw/fuego-20230208-175631-15666f
|
[
"fuego",
"region:us"
] |
2023-02-08T22:56:32+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230208-175631-15666f", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230208-175631-15666f", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-08T23:02:40+00:00
|
|
fc9d462f940ecf85b534562e6767ffaf527241e8
|
# Karen Fukuhara textual inversion
This is an embedding of Karen Fukuhara. She plays several different amazing roles from acting to voice acting: (Kimiko Miyashiro, Glimmah, Kipo).

## Embedding Usage
Use the token ```kfukvf-1990```

---
## 🎶 Prompt Examples
🧾 ```Perfectly-centered portrait-photograph of kfukvf-1990, dressed as a queen with glimmering jewelry, lifelike, subsurface scattering, photorealism, 8k resolution, beautiful, dynamic lighting```
⛔ Negative prompt: ```(bad_prompt_version2:0.8), lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((duplicate)), ((morbid)), ((mutilated)), out of frame, (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (((long neck)))```
_Steps: 20, Sampler: DDIM, CFG scale: 7, Seed: 2154893269, Size: 512x768, Model hash: d8691b4d16_
---
🧾 ```Perfectly-centered portrait-photograph of kfukvf-1990, sitting near a table having a drink, lifelike, subsurface scattering, photorealism, 8k resolution, beautiful```
⛔ Negative prompt: ```(bad_prompt_version2:0.8), lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((duplicate)), ((morbid)), ((mutilated)), out of frame, (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (((long neck)))```
_Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2899587651, Size: 512x768, Model hash: e3cda540bf_
---
🧾 ```Perfectly-centered portrait-photograph of kfukvf-1990, wearing a dirty jacket near a busy city, lifelike, subsurface scattering, photorealism, 8k resolution, beautiful```
⛔ Negative prompt: ```(bad_prompt_version2:0.8), lowres, text, error, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((duplicate)), ((morbid)), ((mutilated)), out of frame, (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (((long neck)))```
_Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3576359901, Size: 512x768, Model hash: 67abd65708_


---
## 🎴 text2img Sampler and Checkpoint grids:
It's always great to get a visual of what's going on with sampler using different models with this embedding. See the examples below and tune them to your liking.
[Sampling Grid](https://huggingface.co/datasets/zuleo/karen-fukuhara/resolve/main/images/sampler_ckpt_grid.png)
---
☕ If you enjoy this model, buy me a coffee [Buy a coffee](https://ko-fi.com/3eegames)
---
|
zuleo/karen-fukuhara
|
[
"license:creativeml-openrail-m",
"stable-diffusion",
"embedding",
"textual-inversion",
"text-to-image",
"image-to-image",
"art",
"artistic",
"region:us"
] |
2023-02-08T23:03:32+00:00
|
{"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "embedding", "textual-inversion", "text-to-image", "image-to-image", "art", "artistic"]}
|
2023-03-06T19:17:14+00:00
|
32b7b3846fe2b68e3e32e556371f85f4f394e2e9
|
nateraw/fuego-20230208-180352-b0cb47
|
[
"fuego",
"region:us"
] |
2023-02-08T23:03:53+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230208-180352-b0cb47", "status": "preparing", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230208-180352-b0cb47", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-08T23:03:56+00:00
|
|
7a70d71b370e5d9093c5596c6cb3bc4cbba9a0f1
|
nateraw/fuego-20230208-181323-773cbc
|
[
"fuego",
"region:us"
] |
2023-02-08T23:13:24+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230208-181323-773cbc", "status": "done", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230208-181323-773cbc", "space_hardware": "t4-medium", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "c35bb6de547f8839434c3d5772777c699e9595de"}}
|
2023-02-08T23:20:51+00:00
|
|
1a809c2e4a9e5da5fcbf85a2f7b9812697892968
|
nateraw/fuego-20230208-181955-0992ab
|
[
"fuego",
"region:us"
] |
2023-02-08T23:19:56+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230208-181955-0992ab", "status": "done", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230208-181955-0992ab", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-08T23:24:43+00:00
|
|
474d3f03bfd55cbee3d469fa10a6ef79e4cb018c
|
nateraw/fuego-20230209-002433-b6f785
|
[
"fuego",
"region:us"
] |
2023-02-08T23:24:34+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230209-002433-b6f785", "status": "running", "script": "main.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230209-002433-b6f785", "space_hardware": "cpu-basic", "github_repo_id": "pytorch/examples", "github_repo_branch": "main", "github_repo_sha": "d8456a36d1bbb22f72b003f59406a19a0a0547c3"}}
|
2023-02-08T23:28:56+00:00
|
|
8803d65622f7904ff04abfe2a5d3e5cd246a9045
|
nateraw/fuego-20230209-002943-00c7a9
|
[
"fuego",
"region:us"
] |
2023-02-08T23:29:44+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230209-002943-00c7a9", "status": "preparing", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230209-002943-00c7a9", "space_hardware": "cpu-basic", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "c35bb6de547f8839434c3d5772777c699e9595de"}}
|
2023-02-08T23:29:46+00:00
|
|
a2dbd1fe2c898850aeaf2b7703a494ec2174bf9b
|
nateraw/fuego-20230209-003125-8f59a7
|
[
"fuego",
"region:us"
] |
2023-02-08T23:31:26+00:00
|
{"tags": ["fuego"], "fuego": {"id": "20230209-003125-8f59a7", "status": "done", "script": "run_glue.py", "requirements_file": "requirements.txt", "space_id": "nateraw/fuego-20230209-003125-8f59a7", "space_hardware": "cpu-basic", "github_repo_id": "huggingface/transformers", "github_repo_branch": "main", "github_repo_sha": "c35bb6de547f8839434c3d5772777c699e9595de"}}
|
2023-02-09T05:48:01+00:00
|
|
ff54431d755bd04cbf336a97d18417dbe3c93f71
|
# Dataset Card for "Sound_Spectrogram_Description"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jha2ee/Sound_Spectrogram_Description
|
[
"region:us"
] |
2023-02-08T23:58:26+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16182594.0, "num_examples": 218}], "download_size": 16178537, "dataset_size": 16182594.0}}
|
2023-02-09T01:25:04+00:00
|
206eae0b49785320d08d3dce87ecb2e2294cf780
|
# Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Who are the source language producers?
The language producers are the employee of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).
### Annotations
The dataset contains official annotations from the game, including ingame speaker name and transcripts.
## Additional Information
### Dataset Curators
The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).
### Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved.
|
hanamizuki-ai/genshin-voice-v3.4-mandarin
|
[
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"region:us"
] |
2023-02-09T01:50:09+00:00
|
{"language": ["zh"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text-to-speech", "automatic-speech-recognition"], "pretty_name": "Genshin Voice", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "language", "dtype": "string"}, {"name": "npcName", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20516788863.251, "num_examples": 78337}], "download_size": 34041643248, "dataset_size": 20516788863.251}}
|
2023-04-13T01:28:53+00:00
|
806da0c59c7c6a1176362a4cfe6add9b59cb66ed
|
# Dataset Card for "Hatefulmemes_test_facebook_opt_6.7b_Hatefulmemes_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Hatefulmemes_test_facebook_opt_6.7b_Hatefulmemes_ns_1000
|
[
"region:us"
] |
2023-02-09T02:41:25+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 362719567.0, "num_examples": 1000}, {"name": "fewshot_3_bs_16", "num_bytes": 363587206.0, "num_examples": 1000}, {"name": "fewshot_5_bs_16", "num_bytes": 364454992.0, "num_examples": 1000}, {"name": "fewshot_8_bs_16", "num_bytes": 365760377.0, "num_examples": 1000}, {"name": "fewshot_10_bs_16", "num_bytes": 366632224.0, "num_examples": 1000}], "download_size": 1814428039, "dataset_size": 1823154366.0}}
|
2023-02-09T03:01:01+00:00
|
f29bdc6a22d6ad3453a71fd85e6d04bf6b4018e6
|
lsy641/reddit_mhp
|
[
"license:mit",
"region:us"
] |
2023-02-09T03:04:45+00:00
|
{"license": "mit"}
|
2023-06-15T04:34:52+00:00
|
|
b72308c25e89ec8484d1d71011b47ab0c60ebbfa
|
# Dataset Card for "Hatefulmemes_test_facebook_opt_13b_Hatefulmemes_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Hatefulmemes_test_facebook_opt_13b_Hatefulmemes_ns_1000
|
[
"region:us"
] |
2023-02-09T03:18:48+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_1_bs_16", "num_bytes": 362719683.0, "num_examples": 1000}, {"name": "fewshot_3_bs_16", "num_bytes": 363588098.0, "num_examples": 1000}, {"name": "fewshot_5_bs_16", "num_bytes": 364455860.0, "num_examples": 1000}, {"name": "fewshot_8_bs_16", "num_bytes": 365761089.0, "num_examples": 1000}, {"name": "fewshot_10_bs_16", "num_bytes": 366632848.0, "num_examples": 1000}], "download_size": 1814428412, "dataset_size": 1823157578.0}}
|
2023-02-09T03:41:39+00:00
|
f8b80ef811934c64f783726712fb57b3a58850a1
|
## Source
Combined text-only dataset from
- poloclub/diffusiondb
- Gustavosta/Stable-Diffusion-Prompts
- bartman081523/stable-diffusion-discord-prompts
- FredZhang7/krea-ai-prompts
For preprocessing methods, please see [Fast GPT2 PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2).
## Python
Download and save the dataset to `all_prompts.txt` locally.
```bash
pip install datasets
```
```python
import datasets
dataset = datasets.load_dataset("FredZhang7/stable-diffusion-prompts-2.47M")
train = dataset["train"]
prompts = train["text"]
with open("all_prompts.txt", "w") as f:
for prompt in prompts:
f.write(prompt + "\n")
```
|
FredZhang7/stable-diffusion-prompts-2.47M
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:creativeml-openrail-m",
"region:us"
] |
2023-02-09T04:03:22+00:00
|
{"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "SDP-2.47M"}
|
2023-02-11T21:59:33+00:00
|
f3ae4c046375d88f68131defb65276a70f911ef5
|
bydavid/biblecorpuscsv
|
[
"license:cc",
"region:us"
] |
2023-02-09T04:28:50+00:00
|
{"license": "cc"}
|
2023-02-09T04:31:37+00:00
|
|
d341e36418ac5d3f485a2258bfbd65939f4ce56f
|
# Dataset Card for "filled_stacks_metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bejaeger/filled_stacks_metadata
|
[
"region:us"
] |
2023-02-09T04:39:49+00:00
|
{"dataset_info": {"features": [{"name": "videoId", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "channelId", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "publishedAt", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "views", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45766, "num_examples": 98}], "download_size": 0, "dataset_size": 45766}}
|
2023-02-09T04:40:27+00:00
|
45de94dcfcf6e305a9a76517983f20cb86e397da
|
Yusen/Sovits_ATRI
|
[
"license:other",
"region:us"
] |
2023-02-09T04:44:15+00:00
|
{"license": "other"}
|
2023-02-09T05:47:00+00:00
|
|
f8e94d0e3d17822667514cd0cdaf989433ce48d8
|
# Dataset Card for "filled_stacks_transcriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bejaeger/filled_stacks_transcriptions
|
[
"region:us"
] |
2023-02-09T05:41:25+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "published", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "videoId", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 3495561, "num_examples": 13292}], "download_size": 959645, "dataset_size": 3495561}}
|
2023-02-09T05:41:27+00:00
|
9ad3be54f50b7e9a616d0f74be76ba39850abb8d
|
# Dataset Card for "turkishReviews-ds-mini1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ramazank2000/turkishReviews-ds-mini1
|
[
"region:us"
] |
2023-02-09T06:02:03+00:00
|
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1252876.2642514652, "num_examples": 3378}, {"name": "validation", "num_bytes": 139455.7357485349, "num_examples": 376}], "download_size": 0, "dataset_size": 1392332.0}}
|
2023-02-13T03:03:24+00:00
|
8ddda106843a068e607c65b3ea5109df6fc4e574
|
nc33/multispan_adversarial_qa
|
[
"license:mit",
"region:us"
] |
2023-02-09T06:25:16+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "sequence": "string"}, {"name": "question", "sequence": "string"}, {"name": "type", "dtype": "string"}, {"name": "structure", "dtype": "string"}, {"name": "num_span", "dtype": "int64"}, {"name": "label", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 60668335, "num_examples": 30000}, {"name": "validation", "num_bytes": 5963239, "num_examples": 3000}, {"name": "test", "num_bytes": 6637189, "num_examples": 3000}], "download_size": 6604109, "dataset_size": 73268763}}
|
2023-02-14T05:05:44+00:00
|
|
b6704667642eedbf094faa42e19351f891a19173
|
nc33/multispan_quoref
|
[
"license:mit",
"region:us"
] |
2023-02-09T06:25:48+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "answer_start", "dtype": "int32"}, {"name": "text", "dtype": "string"}]}, {"name": "num_span", "dtype": "int64"}, {"name": "label", "sequence": "string"}, {"name": "type", "dtype": "string"}, {"name": "structure", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 10164883, "num_examples": 2418}, {"name": "train", "num_bytes": 83767911, "num_examples": 19399}], "download_size": 7949927, "dataset_size": 93932794}}
|
2023-02-09T10:04:57+00:00
|
|
1e93147c6ea19e4136c72ed3076aa4e3f1714c69
|
nc33/multispan_xquad
|
[
"license:mit",
"region:us"
] |
2023-02-09T06:26:13+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "sequence": "string"}, {"name": "question", "sequence": "string"}, {"name": "type", "dtype": "string"}, {"name": "structure", "dtype": "string"}, {"name": "num_span", "dtype": "int64"}, {"name": "label", "sequence": "string"}], "splits": [{"name": "validation", "num_bytes": 2574555, "num_examples": 1190}], "download_size": 398324, "dataset_size": 2574555}}
|
2023-02-14T04:56:01+00:00
|
|
7b46e0beba37d9741207b2b7f6c5c3d56774fb29
|
nc33/multispan_squad_v2
|
[
"license:mit",
"region:us"
] |
2023-02-09T06:26:37+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "sequence": "string"}, {"name": "question", "sequence": "string"}, {"name": "type", "dtype": "string"}, {"name": "structure", "dtype": "string"}, {"name": "num_span", "dtype": "int64"}, {"name": "label", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 266803429, "num_examples": 130319}, {"name": "validation", "num_bytes": 25814814, "num_examples": 11873}], "download_size": 31180462, "dataset_size": 292618243}}
|
2023-02-14T04:56:06+00:00
|
|
760d9e19600e0f48f00dab9cd772e518e8701374
|
NutzZa/DCF
|
[
"license:unknown",
"region:us"
] |
2023-02-09T06:42:06+00:00
|
{"license": "unknown"}
|
2023-02-09T08:03:50+00:00
|
|
3f1c54f7517556648417cdc487bd924dfbe1d02c
|
# Dataset Card for "srk_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rkhilnani96/srk_images
|
[
"region:us"
] |
2023-02-09T07:47:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 226593.0, "num_examples": 7}], "download_size": 227211, "dataset_size": 226593.0}}
|
2023-02-09T07:48:02+00:00
|
ab788cd63e2b59f2ab1fe7086c62a82b1f4a65ca
|
For more info on data collection and the preprocessing algorithm, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).
## 80K unique prompts
- `safebooru_clean`: Cleaned prompts with upscore ≥ 8 from the Safebooru API
---
For disclaimers about the Danbooru data, please see [Danbooru Tag Generator](https://huggingface.co/FredZhang7/danbooru-tag-generator).
## 100K unique prompts (each)
- `danbooru_raw`: Raw prompts with upscore ≥ 3 from Danbooru API
- `danbooru_clean`: Cleaned prompts with upscore ≥ 3 from Danbooru API
---
## Python
Download and save the dataset to anime_prompts.csv locally.
```bash
pip install datasets
```
```python
import csv
import datasets
dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")
train = dataset["train"]
safebooru_clean = train["safebooru_clean"]
danbooru_clean = train["danbooru_clean"]
danbooru_raw = train["danbooru_raw"]
with open("anime_prompts.csv", "w") as f:
writer = csv.writer(f)
writer.writerow(["safebooru_clean", "danbooru_clean", "danbooru_raw"])
for i in range(len(safebooru_clean)):
writer.writerow([safebooru_clean[i], danbooru_clean[i], danbooru_raw[i]])
```
|
FredZhang7/anime-prompts-180K
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:creativeml-openrail-m",
"region:us"
] |
2023-02-09T07:55:28+00:00
|
{"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "viewer": false}
|
2023-12-20T07:18:20+00:00
|
304a05a55bc6abc0446d8fae0d0771716b6a271a
|
Dataset from https://www.kaggle.com/datasets/rmisra/news-category-dataset
|
heegyu/news-category-dataset
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-09T08:08:22+00:00
|
{"license": "cc-by-4.0"}
|
2023-02-09T08:10:48+00:00
|
dce1f51a6b8ab34e6b8ce01eecdeec5ac4b519ec
|
learnanything/feedback-series
|
[
"license:other",
"region:us"
] |
2023-02-09T08:15:58+00:00
|
{"license": "other"}
|
2023-02-09T08:47:24+00:00
|
|
788394263fe37873c9f14cac34d498393f287c49
|
Yeva/armSum
|
[
"license:other",
"region:us"
] |
2023-02-09T08:40:53+00:00
|
{"license": "other"}
|
2023-02-09T08:43:46+00:00
|
|
a8142d89cd7e437f8750761598505dfd1e456ef8
|
# 3Blue1Brown transcripts
## Data
This dataset provides transcriptions of all videos of the amazing [3Blue1Brown](https://www.youtube.com/c/3blue1brown?app=desktop).
Last update was on 09.02.2022.
### Schema
```
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 video_title 116 non-null object
1 transcription 116 non-null object
```
|
psetinek/3Blue1Brown_transcripts
|
[
"language:en",
"region:us"
] |
2023-02-09T08:53:34+00:00
|
{"language": ["en"]}
|
2023-02-09T09:09:55+00:00
|
e7cc904360e371f826821ec5418940fab8a892cc
|
sustcsenlp/bn_asr_spontaneous
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-02-09T08:54:31+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-02-09T08:54:31+00:00
|
|
cb22f5f801015f0159ef5cadf2d16de9d33b5ee6
|
# Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://www.kaggle.com/datasets/muhammadalbrham/rgb-arabic-alphabets-sign-language-dataset
- **Paper:** https://arxiv.org/abs/2301.11932
- **Point of Contact:** [email protected]
### Dataset Summary
RGB Arabic Alphabet Sign Language (AASL) dataset comprises 7,857 raw and fully labelled RGB images of the Arabic sign language alphabets, which to our best knowledge is the first publicly available RGB dataset. The dataset is aimed to help those interested in developing real-life Arabic sign language classification models. AASL was collected from more than 200 participants and with different settings such as lighting, background, image orientation, image size, and image resolution. Experts in the field supervised, validated and filtered the collected images to ensure a high-quality dataset.
### Supported Tasks and Leaderboards
- Image Classification
### Languages
- Arabic
## Dataset Structure
### Data Splits
- All images for now
### Licensing Information
https://creativecommons.org/licenses/by-sa/4.0/
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2301.11932,
doi = {10.48550/ARXIV.2301.11932},
url = {https://arxiv.org/abs/2301.11932},
author = {Al-Barham, Muhammad and Alsharkawi, Adham and Al-Yaman, Musa and Al-Fetyani, Mohammad and Elnagar, Ashraf and SaAleek, Ahmad Abu and Al-Odat, Mohammad},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {RGB Arabic Alphabets Sign Language Dataset},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
pain/AASL
|
[
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:ar",
"license:cc-by-sa-4.0",
"arxiv:2301.11932",
"region:us"
] |
2023-02-09T09:36:57+00:00
|
{"language": ["ar"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-segmentation"], "pretty_name": "RGB Arabic Alphabets Sign Language Dataset"}
|
2023-02-20T19:17:55+00:00
|
205552ab8c0a1938f79394d2992fc10fc2b020d9
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jaese/github-issues
|
[
"region:us"
] |
2023-02-09T09:46:39+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "comments", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "author_association", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "closed_at", "dtype": "string"}, {"name": "state_reason", "dtype": "string"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": 
"string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "string"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 16723706, "num_examples": 5524}], "download_size": 3919619, "dataset_size": 16723706}}
|
2023-02-09T09:46:52+00:00
|
c7c4247a49a2760735dd267c52adec3c76eeacc4
|
# Dataset Card for "restaurant_order_local_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
figfig/restaurant_order_local_test
|
[
"region:us"
] |
2023-02-09T09:53:56+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 270680.0, "num_examples": 2}, {"name": "test", "num_bytes": 270680.0, "num_examples": 2}], "download_size": 272201, "dataset_size": 541360.0}}
|
2023-02-09T12:44:23+00:00
|
4f51eb19092a3f78e637d13e8c0453cee50b30ae
|
nc33/multiSpanQa_exp
|
[
"license:mit",
"region:us"
] |
2023-02-09T10:14:43+00:00
|
{"license": "mit"}
|
2023-02-09T14:36:16+00:00
|
|
bd9c7122a23ebf20af506a86760e62304a191a71
|
alexses2200/ARCHIVE
|
[
"license:other",
"region:us"
] |
2023-02-09T11:34:21+00:00
|
{"license": "other"}
|
2023-04-21T07:25:00+00:00
|
|
057c46d4a1901d9efb23e0cb7b9d4eadf80c8120
|
kannanwisen/De-Hazing-Dataset
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-02-09T11:54:14+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-02-09T12:09:12+00:00
|
|
491eca866122317396e68d4cc8ce2a4ef1f42bfb
|
# squad_v2_factuality_v1
This dataset is derived from "squad_v2" training "context" with the following steps.
1. NER is run to extract entities.
2. Lexicon of person's name, date, organisation name and location are collected.
3. 20% of the time, one of the text attribute (person's name, date, organisation name and location) is randomly replaced. For consistency of context, all other place with the same name is also replaced.
# Purpose of the Dataset
The purpose of this dataset is to assess if a language model could detect factuality.
|
kenhktsui/squad_v2_factuality_v1
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-02-09T12:18:35+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]}
|
2023-02-13T04:06:43+00:00
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.