sha (string, 40 chars) | text (string, 0–13.4M chars) | id (string, 2–117 chars) | tags (list) | created_at (string, 25 chars) | metadata (string, 2–31.7M chars) | last_modified (string, 25 chars)
---|---|---|---|---|---|---
0d0fe435a5528bff22b95a516e0a946d83e9ddcf | # Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saichand09/sv_corpora_parliament_processed | [
"region:us"
]
| 2022-12-09T05:18:16+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292351437, "num_examples": 1892723}], "download_size": 161955537, "dataset_size": 292351437}} | 2022-12-09T05:18:27+00:00 |
319f6d868eee2bc9d369191bcaadce9755c66a67 | # Dataset Card for "rice-rgb-demo2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gokulraja17/rice-rgb-demo2 | [
"region:us"
]
| 2022-12-09T05:25:08+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "RiceLeafs_BrownSpot", "1": "RiceLeafs_Healthy", "2": "RiceLeafs_Hispa", "3": "RiceLeafs_LeafBlast"}}}}], "splits": [{"name": "train", "num_bytes": 11929981.02, "num_examples": 2683}, {"name": "test", "num_bytes": 3059814.0, "num_examples": 672}], "download_size": 14605882, "dataset_size": 14989795.02}} | 2022-12-09T05:25:16+00:00 |
c9a878514e9367cb3ecb1dce2522f5ec97a3d5d5 | # Dataset Card for "sentence_eval_aa2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shreyasharma/sentence_eval_aa2 | [
"region:us"
]
| 2022-12-09T05:37:50+00:00 | {"dataset_info": {"features": [{"name": "declarativized", "dtype": "string"}, {"name": "correct", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 35463, "num_examples": 615}, {"name": "validation", "num_bytes": 18279, "num_examples": 315}, {"name": "test", "num_bytes": 17185, "num_examples": 300}], "download_size": 56380, "dataset_size": 70927}} | 2022-12-09T05:38:04+00:00 |
ca9a623765e3622fd7c10f3da8cae77abb43acf9 | # Dataset Card for "sentence_eval_aa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | shreyasharma/sentence_eval_aa | [
"region:us"
]
| 2022-12-09T05:38:21+00:00 | {"dataset_info": {"features": [{"name": "declarativized", "dtype": "string"}, {"name": "correct", "dtype": "bool"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 103267, "num_examples": 1359}, {"name": "validation", "num_bytes": 29118, "num_examples": 379}, {"name": "test", "num_bytes": 29277, "num_examples": 370}], "download_size": 77770, "dataset_size": 161662}} | 2022-12-09T05:38:35+00:00 |
40e99810c8e78e6f70da9aefce099268cd99c3e3 |
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>Data offered by the Jina AI Finetuner team.</b>
</p>
## Summary
This German-language dataset is based on the [Fashion12K](https://github.com/Toloka/Fashion12K_german_queries) dataset, which originally contains both English and German text descriptions for each item.
This dataset was used to fine-tune CLIP using the [Finetuner](https://finetuner.jina.ai/) tool.
## Fine-tuning
Please refer to our documentation: [Multilingual Text-to-Image Search with MultilingualCLIP](https://finetuner.jina.ai/notebooks/multilingual_text_to_image/)
and blog [Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models](https://jina.ai/news/improving-search-quality-non-english-queries-fine-tuned-multilingual-clip-models/)
## Instances
Each data point consists of a `text` and an `image` field, where the `text` field describes an item of clothing in German and the `image` field contains an image of that item of clothing.
## Fields
- `text`: A string describing the item of clothing.
- `image`: A `PIL.Image.Image` object containing the image. Note that accessing the image column (`dataset[0]["image"]`) decodes the image file automatically. Decoding a large number of image files can take a significant amount of time, so always index the sample before the `"image"` column: `dataset[0]["image"]` should be preferred over `dataset["image"][0]`.
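The reason for this access order can be sketched with a toy model (this is an illustration of the behavior, not the actual `datasets` implementation): indexing the row first decodes a single image, while indexing the column first materializes, and therefore decodes, the whole column before selecting one element.

```python
class ToyDataset:
    """Minimal stand-in for a dataset with a lazily decoded image column."""

    def __init__(self, files):
        self.files = files
        self.decoded = 0  # counts how many images were decoded

    def _decode(self, f):
        self.decoded += 1  # pretend to decode one image file
        return f"PIL<{f}>"

    def __getitem__(self, key):
        if isinstance(key, int):  # ds[0] -> decode only this row's image
            return {"image": self._decode(self.files[key])}
        # ds["image"] -> decode the entire column
        return [self._decode(f) for f in self.files]

ds = ToyDataset(["a.jpg", "b.jpg", "c.jpg"])
_ = ds[0]["image"]            # row first: decodes 1 image
row_first = ds.decoded

ds.decoded = 0
_ = ds["image"][0]            # column first: decodes all 3 images
col_first = ds.decoded

print(row_first, col_first)   # 1 3
```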
## Splits
| | train | test |
|------------|-------|------|
| # of items | 10000 | 2001 |
## Source
Images were sampled from the [Fashion200K dataset](https://github.com/xthan/fashion-200k).
## Annotations
Data was annotated using [Toloka](https://toloka.ai/). See their site for more details.
## Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License.
## Contributors
Thanks to contributors from [Jina AI](https://jina.ai) and [Toloka](https://toloka.ai) for adding this dataset. | jinaai/fashion-captions-de | [
"task_categories:text-to-image",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-09T06:11:54+00:00 | {"language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "pretty_name": "Fashion12k DE", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 282285477, "num_examples": 10000}, {"name": "test", "num_bytes": 56612023.875, "num_examples": 2001}], "download_size": 320681179, "dataset_size": 338897500.875}} | 2023-07-09T09:37:31+00:00 |
a9211192e0405af9560768f986c8974d00358efa | # Dataset Card for "news_corpus_v2_p2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hieule/news_corpus_v2_p2 | [
"region:us"
]
| 2022-12-09T06:14:09+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "sapo", "dtype": "string"}, {"name": "cates", "dtype": "null"}, {"name": "publish", "dtype": "timestamp[us]"}, {"name": "text_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18238489193, "num_examples": 5000000}], "download_size": 9130800517, "dataset_size": 18238489193}} | 2022-12-09T08:28:51+00:00 |
dad9e9e6de9bfe942619234047d24d420949e50f | # Dataset Card for "sheet_music_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EmileEsmaili/sheet_music_clean | [
"region:us"
]
| 2022-12-09T06:44:26+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2229356112.491, "num_examples": 9219}], "download_size": 1211789844, "dataset_size": 2229356112.491}} | 2022-12-09T06:47:10+00:00 |
a038c6ad1f3f655a0cb85bcaf6291b8aa230502d | This dataset contains 5 pickle files, each a Hugging Face dataset of 10,000 images in PIL.Image format together with their class labels and class ids | taquynhnga/preprocessed-val-imagenet | [
"region:us"
]
| 2022-12-09T07:20:52+00:00 | {} | 2022-12-09T07:22:46+00:00 |
a1ca0941e2f75dd343cafa52a2409238dae8ee27 | zhangchi0104/MaaOcrDataset | [
"license:mit",
"region:us"
]
| 2022-12-09T07:44:43+00:00 | {"license": "mit"} | 2022-12-09T07:44:43+00:00 |
|
b88d87988b5645dda3f69bb920a2468b3da82054 | # Dataset Card for "sprite_caption_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tkarr/sprite_caption_dataset | [
"region:us"
]
| 2022-12-09T07:46:51+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32010144.421997756, "num_examples": 12830}, {"name": "test", "num_bytes": 1778895.7890011223, "num_examples": 713}, {"name": "valid", "num_bytes": 1778895.7890011223, "num_examples": 713}], "download_size": 26944262, "dataset_size": 35567936.0}} | 2022-12-15T02:27:33+00:00 |
5a4de27b0bbacc6562d40b12575e30bba244c5a3 | # Dataset Card for "tnews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | xusenlin/tnews | [
"region:us"
]
| 2022-12-09T07:48:43+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4421883, "num_examples": 53360}, {"name": "validation", "num_bytes": 830536, "num_examples": 10000}], "download_size": 3695633, "dataset_size": 5252419}} | 2022-12-09T08:04:49+00:00 |
384c074e2f675f0c4e030a500574643826248471 | # Dataset Card for "news_corpus_v2_p3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hieule/news_corpus_v2_p3 | [
"region:us"
]
| 2022-12-09T07:54:18+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "sapo", "dtype": "string"}, {"name": "cates", "dtype": "null"}, {"name": "publish", "dtype": "timestamp[us]"}, {"name": "text_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17984316450, "num_examples": 4896998}], "download_size": 9019821584, "dataset_size": 17984316450}} | 2022-12-09T11:29:30+00:00 |
5f1d8cb527e42851d69eb4b9f7d4bcfe0095334a | # Dataset Card for "olm-october-2022-tokenized-1024-perplexity-filters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized-1024-perplexity-filters | [
"region:us"
]
| 2022-12-09T08:09:43+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 78517730052.0, "num_examples": 12754667}], "download_size": 21283341524, "dataset_size": 78517730052.0}} | 2022-12-09T09:03:30+00:00 |
27d9a626352c2c6e2b24e18c167b658182a836af | AsAHuman/ForNAI | [
"license:unknown",
"region:us"
]
| 2022-12-09T08:30:07+00:00 | {"license": "unknown"} | 2023-02-25T02:40:00+00:00 |
|
c761727c4a026646f1ba20907606937562a99640 |
# Dataset Card for [REDv2]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is the second version of the Romanian Emotions Dataset (RED) containing 5449 tweets annotated in a multi-label fashion with the following 7 emotions: Anger (Furie), Fear (Frică), Joy (Bucurie), Sadness (Tristețe), Surprise (Surpriză), Trust (Încredere) and Neutral (Neutru).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class & multi-label emotion classification.
### Languages
The data is in Romanian.
## Dataset Structure
### Data Instances
Each instance is a tweet with a corresponding ID and one or more emotion annotations (or neutral).
### Data Fields
The simplified configuration includes:
```
text: the tweet
text_id: unique identifier of the tweet (can be used to look up the entry in the raw dataset)
agreed_labels: the agreed emotion annotations vector (each value of 1 means that at least two annotators recognized that specific emotion)
procentual_labels: vector containing three possible values: 0.33 if one annotator recognized the emotion, 0.66 if two annotators agreed on the emotion, and 0.99 if all three annotators recognized the emotion
```
In addition to the above, the raw data includes:
```
Anger, Fear, Joy, Neutral, Sadness, Surprise, Trust: boolean values - True if the specific emotion is found in the agreed_labels vector
annotator1, annotator2, annotator3: vectors of zeros and ones - 1 means the annotator recognized the emotion at the corresponding vector index
sum_labels: the sum of annotator1, annotator2 and annotator3 vectors
```
The arrays of 7 values correspond to the following emotions: ['Sadness', 'Surprise', 'Fear', 'Anger', 'Neutral', 'Trust', 'Joy'].
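As an illustrative sketch (the function name is assumed, not part of the dataset), mapping a 7-element label vector back to emotion names using the order documented above looks like:

```python
# Emotion order of the 7-element label vectors, as documented above.
EMOTIONS = ['Sadness', 'Surprise', 'Fear', 'Anger', 'Neutral', 'Trust', 'Joy']

def decode_labels(agreed_labels):
    """Return the emotion names whose entries in the vector are set to 1."""
    return [name for name, flag in zip(EMOTIONS, agreed_labels) if flag == 1]

print(decode_labels([0, 0, 0, 1, 0, 0, 1]))  # ['Anger', 'Joy']
```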
### Data Splits
This dataset includes a set of train/val/test splits with 4088, 818, and 543 examples respectively.
## Dataset Creation
### Curation Rationale
From the paper introduction:
>Interpreting correctly one’s own emotions, as well as
other people’s emotional states, is a central aspect of
emotional intelligence. Today, people can automate
the process of emotion detection by creating machine
learning models, provided by the fact that the model
training was done on qualitative and sufficient data.
With the constant increase of social media usage there
is also an increase in online public data, freely available
for model creation. Thus, analyzing emotions in online
content naturally has became more and more of a topic
of interest in the recent years.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Twitter (for more information see Chapter 3.1 of the [paper](https://aclanthology.org/2022.lrec-1.149.pdf)).
#### Who are the source language producers?
Romanian-speaking Twitter users.
### Annotations
#### Annotation process
See Chapter 3.2. in the [paper](https://aclanthology.org/2022.lrec-1.149.pdf).
#### Who are the annotators?
Annotations were produced by 66 Cognitive Science students at the University of Bucharest, Faculty of Psychology and Educational Sciences.
### Personal and Sensitive Information
All tweets in this dataset are anonymized by removing usernames and proper nouns.
## Additional Information
### Dataset Curators
Researchers at the University of Bucharest and Adobe (see the authors of the paper [here](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.149.pdf)).
### Licensing Information
The [GitHub repository](https://github.com/Alegzandra/RED-Romanian-Emotions-Dataset/tree/main/REDv2) of this dataset has an MIT license.
### Citation Information
If you are using this dataset in your research, please cite:
```
@inproceedings{redv2,
author = "Alexandra Ciobotaru and
Mihai V. Constantinescu and
Liviu P. Dinu and
Stefan Daniel Dumitrescu",
title = "{RED} v2: {E}nhancing {RED} {D}ataset for {M}ulti-{L}abel {E}motion {D}etection",
booktitle = "Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022)",
pages = "1392–1399",
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.149.pdf",
language = "English"
}
```
### Contributions
Thanks to [@Alegzandra](https://github.com/Alegzandra) for adding this dataset.
| Alegzandra/REDv2 | [
"license:mit",
"region:us"
]
| 2022-12-09T09:11:36+00:00 | {"license": "mit"} | 2022-12-22T15:14:54+00:00 |
7dea87ff1f1b63cd98164fa25b96227636144d04 | <h1>The following is a collection of hoax and non-hoax news articles</h1>
The collection consists of 90 hoax and 100 non-hoax articles in Indonesian. The articles were gathered in 2019 <br>
File naming:<br>
H = Hoax <br>
NH = Non-Hoax | mathaillah/BeritaHoaks-NonHoaks | [
"region:us"
]
| 2022-12-09T09:47:43+00:00 | {} | 2022-12-09T10:08:25+00:00 |
86aea26468da12ee49dc6f0a299af4cd46596154 | # Dataset Card for Pokémon wiki captions
This project is inspired by [pokemon-blip-captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions), where the captions were all generated by pre-trained BLIP without any manual effort.
However, the quality and accuracy of those captions are not satisfactory, which leaves it unknown whether better captions lead to better results. This motivates our dataset.
# Example

> General attribute, looks like a little monkey, body color is composed of purple and beige, the end of the tail is like a hand

> Poisonous attributes, it looks like a huge purple cobra, with black stripes on its body, small head, and triangular eyes
# Properties
All 898 images are from [The Complete Pokemon Images Data Set](https://www.kaggle.com/datasets/arenagrenade/the-complete-pokemon-images-data-set?resource=download) in Kaggle, with size 475x475. Each image is accompanied by its
Pokémon name and a detailed description from [Pokemon Wiki](https://wiki.52poke.com/wiki/%E4%B8%BB%E9%A1%B5); captions are provided in both English and Chinese. Human effort was also involved in revising the captions.
# How to use
```
from datasets import load_dataset
dataset = load_dataset("wanghaofan/pokemon-wiki-captions")
```
The dataset is formatted as below. Each row contains `image`, `name_en`, `name_zh`, `text_en` and `text_zh` keys. `image` is a varying-size PIL JPEG, the `name_*` fields give the Pokémon's name, and the `text_*` fields give the accompanying caption, in English (`_en`) and Chinese (`_zh`). Only a train split is provided.
```
DatasetDict({
train: Dataset({
features: ['image', 'name_en', 'name_zh', 'text_en', 'text_zh'],
num_rows: 898
})
})
```
# Citation
If you use this dataset in your work, please cite it as:
```
@misc{wanghaofan2022pokemon,
author = {Haofan Wang},
title = {Pokemon wiki captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/wanghaofan/pokemon-wiki-captions/}}
}
```
| wanghaofan/pokemon-wiki-captions | [
"region:us"
]
| 2022-12-09T11:13:28+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "name_en", "dtype": "string"}, {"name": "name_zh", "dtype": "string"}, {"name": "text_en", "dtype": "string"}, {"name": "text_zh", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 117645424.0, "num_examples": 898}], "download_size": 117512478, "dataset_size": 117645424.0}} | 2022-12-09T12:50:49+00:00 |
1ea019fd53e0a46c3f0f363723a5b4dcd332c44b | # Dataset Card for "an2012"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Martw/an2012 | [
"region:us"
]
| 2022-12-09T11:17:10+00:00 | {"dataset_info": {"features": [{"name": "sentences", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 30151484, "num_examples": 240}], "download_size": 474849, "dataset_size": 30151484}} | 2022-12-09T11:17:17+00:00 |
cd44191bda8f015f2af29a150fd78d549408f411 |
# CARES - A Corpus of Anonymised Radiological Evidences in Spanish 📑🏥
CARES is a high-quality text resource manually labeled with ICD-10 codes and reviewed by radiologists. Resources of this type are essential for developing automatic text classification tools, as they are needed to train and fine-tune computational systems.
The CARES corpus has been manually annotated using the ICD-10 ontology, the 10th revision of the International Classification of Diseases. Each radiological report was assigned a minimum of one and a maximum of nine codes; the average number of codes per text is 2.15, with a standard deviation of 1.12.
The corpus was additionally preprocessed to make its format consistent with the automatic text classification task. Given the hierarchical structure of the ICD-10 ontology, each sub-code was mapped to its respective code and chapter, yielding two additional sets of labels for each report. The entire CARES collection contains 6,907 sub-code annotations across the 3,219 radiology reports. The annotations contain 223 unique ICD-10 sub-codes, which were mapped to 156 unique ICD-10 codes and 16 unique chapters of the cited ontology.
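A minimal sketch of the sub-code-to-code-and-chapter mapping described above, assuming standard ICD-10 notation where the parent code is the part before the dot. The chapter table here is a truncated, illustrative fragment, not the full 16-chapter mapping used in the corpus:

```python
def subcode_to_code(subcode: str) -> str:
    """ICD-10 sub-code -> parent code, e.g. 'J18.9' -> 'J18'."""
    return subcode.split(".")[0]

# Illustrative fragment only: the real boundaries come from the ICD-10 ontology.
CHAPTER_RANGES = [
    ("A00", "B99", 1),   # Certain infectious and parasitic diseases
    ("C00", "D49", 2),   # Neoplasms
    ("J00", "J99", 10),  # Diseases of the respiratory system
]

def code_to_chapter(code: str) -> int:
    """Look up the ICD-10 chapter for a code via its alphanumeric range."""
    for lo, hi, chapter in CHAPTER_RANGES:
        if lo <= code[:3] <= hi:
            return chapter
    raise KeyError(code)

print(subcode_to_code("J18.9"), code_to_chapter("J18"))  # J18 10
```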
For the train and test subsets, a stratified split was performed to guarantee that the label distribution in the test data is representative. | chizhikchi/CARES | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:afl-3.0",
"radiology",
"biomedicine",
"ICD-10",
"region:us"
]
| 2022-12-09T12:13:39+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "CARES", "tags": ["radiology", "biomedicine", "ICD-10"], "dataset_info": {"features": [{"name": "iddoc", "dtype": "float64"}, {"name": "id", "dtype": "int64"}, {"name": "full_text", "dtype": "string"}, {"name": "icd10", "sequence": "string"}, {"name": "general", "sequence": "string"}, {"name": "chapters", "sequence": "int64"}, {"name": "area", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3377631, "num_examples": 2253}, {"name": "test", "num_bytes": 1426962, "num_examples": 966}], "download_size": 2291080, "dataset_size": 4804593}} | 2022-12-09T12:22:08+00:00 |
bdebcf0d2aff9d20459c8ce8a2d4f2da45438bcc | # Dataset Card for "dataset-identities-v-1.4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/identities-sd-1.4 | [
"region:us"
]
| 2022-12-09T12:14:03+00:00 | {"dataset_info": {"features": [{"name": "ethnicity", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 27230671.0, "num_examples": 680}], "download_size": 27136582, "dataset_size": 27230671.0}} | 2023-01-26T22:36:11+00:00 |
f43aae2a3acb390f24629b419927d9fa91ec7207 | # Dataset Card for "gal_yair_8300_10x10_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_8300_10x10_fixed | [
"region:us"
]
| 2022-12-09T13:08:58+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3019225.5, "num_examples": 8300}], "download_size": 2658915, "dataset_size": 3019225.5}} | 2022-12-09T13:09:02+00:00 |
7e0e9bf8ec61616630135d5a0355aaedac0f0e28 | # Dataset Card for "kuvshinov_art_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WiNE-iNEFF/kuvshinov_art_dataset | [
"region:us"
]
| 2022-12-09T13:31:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 201634655.912, "num_examples": 1802}], "download_size": 239927533, "dataset_size": 201634655.912}} | 2022-12-09T13:31:20+00:00 |
1206803d2b5fe6eb31d569b00bae108629a82a1e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mlxen/electra-smallcase-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ob](https://huggingface.co/ob) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-d6c6f4-2395874932 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-09T13:51:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "mlxen/electra-smallcase-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-09T13:54:32+00:00 |
6c2f4d47ed2363793344aca34458f026873c3896 | Dallasmorningstar/Yu | [
"license:unknown",
"region:us"
]
| 2022-12-09T14:22:40+00:00 | {"license": "unknown"} | 2022-12-09T14:22:40+00:00 |
|
8a9c6be53d3e5a1ab4ce647b3c00760765ff7ee0 | mmd/lai | [
"license:openrail++",
"region:us"
]
| 2022-12-09T14:42:37+00:00 | {"license": "openrail++"} | 2022-12-09T14:43:50+00:00 |
|
6bfde83873caf828d9554979e0cb9c5103725da4 |
Dataset of a few Marsupilami pictures
PS/ I used git+ssh to push this commit to the Hub 🔥
Thank you @XCiD and @sbrandeis | julien-c/autotrain-dreambooth-marsupilami-data | [
"task_categories:image-to-image",
"size_categories:n<1K",
"license:openrail",
"marsupilami",
"not-for-all-eyes",
"region:us"
]
| 2022-12-09T15:10:39+00:00 | {"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-to-image"], "tags": ["marsupilami", "not-for-all-eyes"]} | 2023-04-06T10:39:49+00:00 |
f58bc41a905185f754721929b0910985732ba0ed | # Dataset Card for "speech2text-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hts98/speech2text-dataset | [
"region:us"
]
| 2022-12-09T15:27:22+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 32013193208, "num_examples": 33322}, {"name": "test", "num_bytes": 5950633136, "num_examples": 6194}], "download_size": 7460647938, "dataset_size": 37963826344}} | 2022-12-09T15:43:16+00:00 |
acb8826c6fbe5bf0973170643e9d9fbfb98f9915 | ywan111/macbook-dataset-b1 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-09T16:00:30+00:00 | {"license": "apache-2.0"} | 2022-12-09T16:00:30+00:00 |
|
dbfa896f20652ba72860e4a7907ef1906e33396a | ywan111/macbook-dataset-b2 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-09T16:13:18+00:00 | {"license": "apache-2.0"} | 2022-12-09T16:13:18+00:00 |
|
6c952168836f94d8dc9da5add83503852748e05e | ywan111/macbook-dataset-b3 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-09T16:27:12+00:00 | {"license": "apache-2.0"} | 2022-12-09T16:27:12+00:00 |
|
7fa1103c3606b2d5603552f544d41b6811d532bb | # Dataset Card for "c_corpus_br_finetuning_language_model_deberta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rosimeirecosta/c_corpus_br_finetuning_language_model_deberta | [
"region:us"
]
| 2022-12-09T17:01:17+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36065567, "num_examples": 228736}, {"name": "validation", "num_bytes": 9012563, "num_examples": 57184}], "download_size": 0, "dataset_size": 45078130}} | 2022-12-15T18:22:57+00:00 |
1b0d0dffe63234703ef3210505a003d015806784 | ywan111/macbook-dataset-b4 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-09T17:08:42+00:00 | {"license": "apache-2.0"} | 2022-12-09T17:08:42+00:00 |
|
0c24572c7e686fc798e4fb64f59cfe27086341d3 | ywan111/macbook-dataset-b5 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-09T17:52:08+00:00 | {"license": "apache-2.0"} | 2022-12-09T17:52:08+00:00 |
|
924a029aac69bfdd8d34598f27f2286550b1c00b | # Dataset Card for "gal_yair_166000_256x256_fixed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_166000_256x256_fixed | [
"region:us"
]
| 2022-12-09T18:02:01+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": {"class_label": {"names": {"0": "ad", "1": "ae", "2": "al", "3": "aq", "4": "ar", "5": "au", "6": "bd", "7": "be", "8": "bg", "9": "bm", "10": "bo", "11": "br", "12": "bt", "13": "bw", "14": "ca", "15": "ch", "16": "cl", "17": "co", "18": "cz", "19": "de", "20": "dk", "21": "ec", "22": "ee", "23": "es", "24": "fi", "25": "fr", "26": "gb", "27": "gh", "28": "gl", "29": "gr", "30": "gt", "31": "hk", "32": "hr", "33": "hu", "34": "id", "35": "ie", "36": "il", "37": "is", "38": "it", "39": "ix", "40": "jp", "41": "kg", "42": "kh", "43": "kr", "44": "la", "45": "lk", "46": "ls", "47": "lt", "48": "lu", "49": "lv", "50": "me", "51": "mg", "52": "mk", "53": "mn", "54": "mo", "55": "mt", "56": "mx", "57": "my", "58": "nl", "59": "no", "60": "nz", "61": "pe", "62": "ph", "63": "pl", "64": "pt", "65": "ro", "66": "rs", "67": "ru", "68": "se", "69": "sg", "70": "si", "71": "sk", "72": "sn", "73": "sz", "74": "th", "75": "tn", "76": "tr", "77": "tw", "78": "ua", "79": "ug", "80": "us", "81": "uy", "82": "za"}}}}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 16156275005.0, "num_examples": 166000}], "download_size": 16115168331, "dataset_size": 16156275005.0}} | 2022-12-09T18:18:35+00:00 |
496c964fd8bd8f2626fe43b01eb56255f7998c5b | MennaHalim/donate_a_cry | [
"license:other",
"doi:10.57967/hf/0184",
"region:us"
]
| 2022-12-09T18:35:46+00:00 | {"license": "other"} | 2022-12-09T22:33:40+00:00 |
|
f4b16ad198fda72263aa9b2989f54561056856ad |
# Dataset Card for althingi_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data](#data)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Other Known Limitations](#other-known-limitations)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Althingi Parliamentary Speech
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2021S01)
- **Paper:** [Building an ASR corpus using Althingi’s Parliamentary Speeches](https://www.researchgate.net/profile/Jon-Gudnason/publication/319185185_Building_an_ASR_Corpus_Using_Althingi's_Parliamentary_Speeches/links/5d1dbdd3a6fdcc2462bdda0f/Building-an-ASR-Corpus-Using-Althingis-Parliamentary-Speeches.pdf)
- **Point of Contact:** [Jón Guðnason](mailto:[email protected])
### Dataset Summary
Althingi Parliamentary Speech consists of approximately 542 hours of recorded speech from Althingi, the Icelandic Parliament, along with corresponding transcripts, a pronunciation dictionary and two language models. Speeches date from 2005-2016.
This dataset was collected in 2016 by the ASR for Althingi project at [Reykjavik University](https://en.ru.is/) in collaboration with the Althingi speech department. The purpose of that project was to develop an ASR (automatic speech recognition) system for parliamentary speech to replace the procedure of manually transcribing performed speeches.
### Data
The mean speech length is six minutes, with speeches ranging from under one minute to around thirty minutes. The corpus features 197 speakers (105 male, 92 female) and is split into training, development and evaluation sets. The language models are of two types: a pruned trigram model, used in decoding, and an unpruned constant ARPA 5-gram model, used for re-scoring decoding results.
Audio data is presented as single channel 16-bit mp3 files; the majority of these files have a sample rate of 44.1 kHz. Transcripts and other text data are plain text encoded in UTF-8.
### Example Usage
The Althingi Corpus is divided into 3 splits: train, validation and test. To load the whole dataset:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr")
```
To load a specific split (for example, the validation split), pass its name via the `split` argument:
```python
from datasets import load_dataset
althingi_asr = load_dataset("language-and-voice-lab/althingi_asr",split="validation")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
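As a rough illustration of the metric, WER is the word-level edit distance between a reference transcript and a hypothesis, normalized by the reference length. A minimal, self-contained sketch (not tied to any particular ASR toolkit; in practice a library such as `jiwer` is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One deleted word out of five -> WER = 0.2
print(wer("og má svo sannarlega segja", "og má svo segja"))
```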
### Languages
The audio is in Icelandic.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'rad20160602T000219_00083',
'audio': {
'path': '/home/inga/.cache/HuggingFace/datasets/downloads/extracted/52607f9db9e3394263070575d29323213b99a06a996c43d4fe75bca115827d12/dev/EyH/rad20160602T000219/rad20160602T000219_00083.flac',
'array': array([-0.01098633, -0.01489258, -0.01040649, ..., 0.00314331,
0.00186157, 0.00527954], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'rad20160602T000219',
'duration': 12.67199993133545,
'normalized_text': 'og má svo sannarlega segja að landslagið sé nokkuð breytt frá því þrjú komma tvö prósent þjóðarinnar töldust vera innflytjendur árið tvö þúsund en nú teljast tíu prósent þjóðarinnar vera fyrsta og önnur kynslóð innflytjenda'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription.
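The `duration` field is consistent with the decoded audio: the number of samples in `audio["array"]` divided by `audio["sampling_rate"]`. A small sketch using the values from the instance above (synthetic stand-ins rather than the real file):

```python
# Hypothetical stand-in for one decoded instance; the real array comes from datasets.Audio.
sampling_rate = 16000
duration = 12.67199993133545                   # seconds, as stored in the `duration` field
num_samples = round(duration * sampling_rate)  # length of audio["array"]

recovered_duration = num_samples / sampling_rate
print(num_samples, recovered_duration)  # 202752 samples, 12.672 s
```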
### Data Splits
The corpus is split into train, evaluation, and test portions. The lengths of the portions are: train = 514h29m, test = 13h52m, evaluation = 14h02m.
To load a specific portion, please see the section "Example Usage" above.
## Additional Information
### Other Known Limitations
"Althingi Parliamentary Speech" by the Language and Voice Laboratory (LVL) at Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{helgadottiralthingi2021,
title={Althingi Parliamentary Speech},
ldc_catalog_no={LDC2021S01},
DOI={https://doi.org/10.35111/695b-6697},
author={Helgadóttir, Inga Rún and Kjaran, Róbert and Nikulásdóttir, Anna Björk and Guðnason, Jón},
	publisher={Reykjavík University},
journal={Linguistic Data Consortium, Philadelphia},
year={2021},
url={https://catalog.ldc.upenn.edu/LDC2021S01},
}
```
### Contributions
This project was made possible through the support of Althingi’s information and publications departments. The authors would like to thank Solveig K. Jónsdóttir, Þorbjörg Árnadóttir and Ingvi Stígsson for their valuable help.
| language-and-voice-lab/althingi_asr | [
"task_categories:automatic-speech-recognition",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"icelandic",
"parliamentary speech",
"parlament",
"althingi",
"region:us"
]
| 2022-12-09T20:33:28+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["is"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Althingi Parliamentary Speech", "tags": ["icelandic", "parliamentary speech", "parlament", "althingi"]} | 2023-02-24T22:14:42+00:00 |
59843ff2c0685fc3cb96d0649a272c6edf04ef59 | # Dataset Card for "custom_squad_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mlxen/custom_squad_dataset | [
"region:us"
]
| 2022-12-09T20:50:05+00:00 | {"dataset_info": {"features": [{"name": "number", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "context_no", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 139122, "num_examples": 97}], "download_size": 0, "dataset_size": 139122}} | 2022-12-09T20:56:36+00:00 |
bc4b7491f5f275abfb43b0c0188c826115389adf | # Dataset Card for "eqasc_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sandipan1994/eqasc_data | [
"region:us"
]
| 2022-12-09T21:12:54+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11336048, "num_examples": 84964}, {"name": "validation", "num_bytes": 1296119, "num_examples": 9710}, {"name": "test", "num_bytes": 1259181, "num_examples": 9630}], "download_size": 4494168, "dataset_size": 13891348}} | 2022-12-10T00:30:48+00:00 |
009699a5e429ac4fd9801df5e4aeceff74a9dbab | # Dataset Card for "squad_adversarial_train_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mlxen/squad_adversarial_train_dataset | [
"region:us"
]
| 2022-12-09T23:22:22+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 79446510, "num_examples": 87599}], "download_size": 14480010, "dataset_size": 79446510}} | 2022-12-09T23:22:27+00:00 |
c2525831eddb507aabf514178ad7afa3751bf6a0 | # Dataset Card for "squad_adversarial_validation_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mlxen/squad_adversarial_validation_dataset | [
"region:us"
]
| 2022-12-09T23:22:27+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 10484818, "num_examples": 10570}], "download_size": 1825207, "dataset_size": 10484818}} | 2022-12-09T23:22:32+00:00 |
2ca6e6b6b845830fe614a0795898bd0f92d764ae | # Dataset Card for "squad_contrasting_validation_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mlxen/squad_contrasting_validation_dataset | [
"region:us"
]
| 2022-12-10T03:16:28+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 10482482, "num_examples": 10570}], "download_size": 1835309, "dataset_size": 10482482}} | 2022-12-10T04:26:19+00:00 |
13b916e5dbdb14f5400603318799e1cca08e2f6e | LightChen2333/OpenSLU | [
"license:mit",
"region:us"
]
| 2022-12-10T04:25:36+00:00 | {"license": "mit"} | 2023-02-22T05:25:40+00:00 |
|
026e02701cce00d794751ccab575c411481ed4e6 | # Dataset Card for "squad_contrasting_training_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mlxen/squad_contrasting_training_dataset | [
"region:us"
]
| 2022-12-10T04:26:20+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 79421752, "num_examples": 87599}], "download_size": 14482954, "dataset_size": 79421752}} | 2022-12-10T04:26:26+00:00 |
f16ad58df63176c1309b0475670b150a220f6249 | # Dataset Card for "olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-suffix-array-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-suffix-array-dedup | [
"region:us"
]
| 2022-12-10T06:37:03+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 33979057509.213223, "num_examples": 7520438}], "download_size": 8573685687, "dataset_size": 33979057509.213223}} | 2022-12-10T06:56:58+00:00 |
1cdba0df71a62446a006948b98e791cc5d28936e | live-draw-sgp/live-draw-sgp | [
"license:openrail",
"region:us"
]
| 2022-12-10T07:20:47+00:00 | {"license": "openrail"} | 2022-12-10T07:20:51+00:00 |
|
958911be2e4c249be7996f78c2466299999e9706 | thisisanshgupta/Pycode | [
"license:mit",
"region:us"
]
| 2022-12-10T07:44:37+00:00 | {"license": "mit"} | 2022-12-10T07:46:11+00:00 |
|
46bf9dfd3ba28d0747e6d5aaf8ea575a223a6efd | OptimOS/Selfies | [
"license:unknown",
"region:us"
]
| 2022-12-10T09:36:04+00:00 | {"license": "unknown"} | 2022-12-10T09:36:04+00:00 |
|
67dc62b4cd8940ef139bf1b8ee73558ec044fd37 | nyanko7/konachan-images | [
"license:openrail",
"region:us"
]
| 2022-12-10T09:46:55+00:00 | {"license": "openrail"} | 2022-12-10T10:25:55+00:00 |
|
4010a45b784aa8f09bd99c988ef050fc4924eb2a | timmytheBEST/girls | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-10T11:42:45+00:00 | {"license": "creativeml-openrail-m"} | 2022-12-10T11:42:45+00:00 |
|
9d392462b02175db1047497071a1c794a17b4339 | # AutoTrain Dataset for project: multiconer2-test1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project multiconer2-test1.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"robert",
"gottschalk",
"1939",
"academy",
"award",
"winner",
"and",
"founder",
"of",
"panavision"
],
"tags": [
18,
49,
62,
29,
60,
62,
62,
62,
62,
16
]
},
{
"tokens": [
"during",
"the",
"reign",
"of",
"the",
"tongzhi",
"emperor",
"(",
"r",
".",
"1861",
"\u2013",
"1875",
")",
":"
],
"tags": [
62,
62,
62,
62,
62,
18,
49,
62,
62,
62,
62,
62,
62,
62,
62
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(num_classes=63, names=['B-AnatomicalStructure', 'B-ArtWork', 'B-Artist', 'B-Athlete', 'B-CarManufacturer', 'B-Cleric', 'B-Clothing', 'B-Disease', 'B-Drink', 'B-Facility', 'B-Food', 'B-HumanSettlement', 'B-MedicalProcedure', 'B-Medication/Vaccine', 'B-MusicalGRP', 'B-MusicalWork', 'B-ORG', 'B-OtherLOC', 'B-OtherPER', 'B-OtherPROD', 'B-Politician', 'B-PrivateCorp', 'B-PublicCorp', 'B-Scientist', 'B-Software', 'B-SportsGRP', 'B-SportsManager', 'B-Station', 'B-Vehicle', 'B-VisualWork', 'B-WrittenWork', 'I-AnatomicalStructure', 'I-ArtWork', 'I-Artist', 'I-Athlete', 'I-CarManufacturer', 'I-Cleric', 'I-Clothing', 'I-Disease', 'I-Drink', 'I-Facility', 'I-Food', 'I-HumanSettlement', 'I-MedicalProcedure', 'I-Medication/Vaccine', 'I-MusicalGRP', 'I-MusicalWork', 'I-ORG', 'I-OtherLOC', 'I-OtherPER', 'I-OtherPROD', 'I-Politician', 'I-PrivateCorp', 'I-PublicCorp', 'I-Scientist', 'I-Software', 'I-SportsGRP', 'I-SportsManager', 'I-Station', 'I-Vehicle', 'I-VisualWork', 'I-WrittenWork', 'O'], id=None), length=-1, id=None)"
}
```
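The integer `tags` index into the `ClassLabel` names above (31 `B-*` types, then the matching `I-*` types, then `O`). A sketch of decoding tag ids back to entity spans, with the label list reconstructed here from the feature definition:

```python
TYPES = ['AnatomicalStructure', 'ArtWork', 'Artist', 'Athlete', 'CarManufacturer',
         'Cleric', 'Clothing', 'Disease', 'Drink', 'Facility', 'Food',
         'HumanSettlement', 'MedicalProcedure', 'Medication/Vaccine', 'MusicalGRP',
         'MusicalWork', 'ORG', 'OtherLOC', 'OtherPER', 'OtherPROD', 'Politician',
         'PrivateCorp', 'PublicCorp', 'Scientist', 'Software', 'SportsGRP',
         'SportsManager', 'Station', 'Vehicle', 'VisualWork', 'WrittenWork']
NAMES = [f'B-{t}' for t in TYPES] + [f'I-{t}' for t in TYPES] + ['O']  # 63 labels

def decode(tokens, tags):
    """Group BIO tag ids into (entity text, entity type) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        name = NAMES[tag]
        if name.startswith('B-'):
            if current:
                spans.append((' '.join(current), ctype))
            current, ctype = [tok], name[2:]
        elif name.startswith('I-') and current:
            current.append(tok)
        else:  # 'O' (or a stray I- tag without a preceding B-)
            if current:
                spans.append((' '.join(current), ctype))
            current, ctype = [], None
    if current:
        spans.append((' '.join(current), ctype))
    return spans

tokens = ["robert", "gottschalk", "1939", "academy", "award",
          "winner", "and", "founder", "of", "panavision"]
tags = [18, 49, 62, 29, 60, 62, 62, 62, 62, 16]
print(decode(tokens, tags))
# [('robert gottschalk', 'OtherPER'), ('academy award', 'VisualWork'), ('panavision', 'ORG')]
```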
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2169 |
| valid | 829 |
| Nenma/autotrain-data-multiconer2-test1 | [
"task_categories:token-classification",
"region:us"
]
| 2022-12-10T12:08:47+00:00 | {"task_categories": ["token-classification"]} | 2022-12-10T14:01:19+00:00 |
f01d4b8229e52622a2da2973884da571051c8779 | # Dataset Card for "news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | steciuk/news | [
"region:us"
]
| 2022-12-10T12:32:23+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1121248, "num_examples": 6402}], "download_size": 657470, "dataset_size": 1121248}} | 2022-12-10T12:32:27+00:00 |
e9195188dbd6939e10785e3ed9ee1c513731c9bc |
# Dataset Card for GLENDA - The ITEC Gynecologic Laparoscopy Endometriosis Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://ftp.itec.aau.at/datasets/GLENDA/index.html
- **Repository:**
- **Paper:** [GLENDA: Gynecologic Laparoscopy Endometriosis Dataset](https://link.springer.com/chapter/10.1007/978-3-030-37734-2_36)
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
GLENDA (Gynecologic Laparoscopy ENdometriosis DAtaset) comprises over 350 annotated endometriosis lesion images taken from 100+ gynecologic laparoscopy surgeries as well as over 13K unannotated non pathological images of 20+ surgeries. The dataset is purposefully created to be utilized for a variety of automatic content analysis problems in the context of Endometriosis recognition.
**Usage Information (Disclaimer)**
The dataset is exclusively provided for scientific research purposes and as such cannot be used commercially or for any other purpose. If any other purpose is intended, you may directly contact the originator of the videos.
For additional information (including contact details), please visit [the official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html).
**Description**
Endometriosis is a benign but potentially painful condition among women of child-bearing age involving the growth of uterine-like tissue in locations outside of the uterus. Corresponding lesions can be found in various positions and severities, often in multiple instances per patient, requiring a physician to determine their extent. This is most frequently accomplished by calculating its magnitude via the combination of two popular classification systems, the revised American Society for Reproductive Medicine (rASRM) and the European Enzian scores. Endometriosis cannot be reliably identified by laypeople; therefore, the dataset has been created with the help of medical experts in the field of endometriosis treatment.
**Purposes**
* binary (endometriosis) classification
* detection/localization
**Overview**
The dataset includes region-based annotations of 4 pathological endometriosis categories as well as non pathological counter example images. Annotations are created for single video frames that may be part of larger sequences comprising several consecutive frames (all showing the annotated condition). Frames can contain multiple annotations, potentially of different categories. Each single annotation is exported as a binary image (similar to below examples, albeit one image per annotation).
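Since each annotation is exported as a separate binary image, multiple annotations for the same frame can be combined into a single mask by taking their element-wise union. A minimal sketch, assuming the masks have already been loaded as 0/1 arrays of equal shape (the tiny arrays below are hypothetical placeholders):

```python
import numpy as np

mask_a = np.array([[0, 1], [0, 0]], dtype=np.uint8)  # hypothetical per-annotation masks
mask_b = np.array([[0, 0], [1, 0]], dtype=np.uint8)

# Union of all annotations for one frame
combined = np.logical_or(mask_a, mask_b).astype(np.uint8)
print(combined)  # [[0 1] [1 0]]
```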
# TODO: FIXME: A bit more useful info on dataset case distribution class distribution and link to original + preview link
### Supported Tasks and Leaderboards
- `image_classification`: The dataset can be used for binary (no pathology / endometriosis) or multiclass image classification with the classes No-Pathology, 6.1.1.1\_Endo-Peritoneum, 6.1.1.2\_Endo-Ovar, 6.1.1.3\_Endo-DIE and 6.1.1.4\_Endo-Uterus. These classes respectively correspond to: no visible pathology in relation to endometriosis, peritoneal endometriosis, endometriosis on ovaries, deep infiltrating endometriosis (DIE) and uterine endometriosis.
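For readability it can help to map the multiclass label names onto the plain-language descriptions given above. A minimal sketch (the 0–4 numeric ids follow the order of the feature definition in this card's metadata, which is an assumption about the loaded ordering):

```python
# Label name -> plain-language description, as listed in this card
LABEL_DESCRIPTIONS = {
    "No-Pathology": "no visible pathology in relation to endometriosis",
    "6.1.1.1_Endo-Peritoneum": "peritoneal endometriosis",
    "6.1.1.2_Endo-Ovar": "endometriosis on ovaries",
    "6.1.1.3_Endo-DIE": "deep infiltrating endometriosis (DIE)",
    "6.1.1.4_Endo-Uterus": "uterine endometriosis",
}
ID2NAME = dict(enumerate(LABEL_DESCRIPTIONS))  # assumed ids 0..4, in the order above

def describe(label_id: int) -> str:
    return LABEL_DESCRIPTIONS[ID2NAME[label_id]]

print(describe(3))  # deep infiltrating endometriosis (DIE)
```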
## Dataset Structure
### Data Instances
#### binary\_classification
TODO DESCRIBE
#### multiclass\_classification
TODO DESCRIBE
## Dataset Creation
### Curation Rationale
From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html)
> The dataset is purposefully created to be utilized for a variety of automatic content analysis problems in the context of Endometriosis recognition
### Source Data
#### Initial Data Collection and Normalization
From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html)
> The dataset includes region-based annotations of 4 pathological endometriosis categories as well as non pathological counter example images. Annotations are created for single video frames that may be part of larger sequences comprising several consecutive frames (all showing the annotated condition). Frames can contain multiple annotations, potentially of different categories. Each single annotation is exported as a binary image (similar to below examples, albeit one image per annotation).
### Annotations
#### Annotation process
From the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html)
> Corresponding lesions can be found in various positions and severities, often in multiple instances per patient requiring a physician to determine its extent. This most frequently is accomplished by calculating its magnitude via utilizing the combination of two popular classification systems, the revised American Society for Reproductive Medicine (rASRM) and the European Enzian scores. Endometriosis can not reliably identified by laymen, therefore, the dataset has been created with the help of medical experts in the field of endometriosis treatment.
#### Who are the annotators?
Medical experts in the field of endometriosis treatment.
### Personal and Sensitive Information
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is exclusively provided for scientific research purposes and as such cannot be used commercially or for any other purpose. If any other purpose is intended, you may directly contact the originator of the videos, Prof. Dr. Jörg Keckstein.
GLENDA is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0, Creative Commons License) and is created as well as maintained by Distributed Multimedia Systems Group of the Institute of Information Technology (ITEC) at Alpen-Adria Universität in Klagenfurt, Austria.
This license allows users of this dataset to copy, distribute and transmit the work under the following conditions:
* Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
* Non-Commercial: You may not use the material for commercial purposes.
For further legal details, please read the [complete license terms](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
For additional information, please visit the [official website](http://ftp.itec.aau.at/datasets/GLENDA/index.html).
### Citation Information
```
@inproceedings{10.1007/978-3-030-37734-2_36,
abstract = {Gynecologic laparoscopy as a type of minimally invasive surgery (MIS) is performed via a live feed of a patient's abdomen surveying the insertion and handling of various instruments for conducting treatment. Adopting this kind of surgical intervention not only facilitates a great variety of treatments, the possibility of recording said video streams is as well essential for numerous post-surgical activities, such as treatment planning, case documentation and education. Nonetheless, the process of manually analyzing surgical recordings, as it is carried out in current practice, usually proves tediously time-consuming. In order to improve upon this situation, more sophisticated computer vision as well as machine learning approaches are actively developed. Since most of such approaches heavily rely on sample data, which especially in the medical field is only sparsely available, with this work we publish the Gynecologic Laparoscopy ENdometriosis DAtaset (GLENDA) -- an image dataset containing region-based annotations of a common medical condition named endometriosis, i.e. the dislocation of uterine-like tissue. The dataset is the first of its kind and it has been created in collaboration with leading medical experts in the field.},
address = {Cham},
author = {Leibetseder, Andreas and Kletz, Sabrina and Schoeffmann, Klaus and Keckstein, Simon and Keckstein, J{\"o}rg},
booktitle = {MultiMedia Modeling},
editor = {Ro, Yong Man and Cheng, Wen-Huang and Kim, Junmo and Chu, Wei-Ta and Cui, Peng and Choi, Jung-Woo and Hu, Min-Chun and De Neve, Wesley},
isbn = {978-3-030-37734-2},
pages = {439--450},
publisher = {Springer International Publishing},
title = {GLENDA: Gynecologic Laparoscopy Endometriosis Dataset},
year = {2020}
}
```
| MFreidank/glenda | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2022-12-10T12:55:32+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "GLENDA - The ITEC Gynecologic Laparoscopy Endometriosis Dataset", "dataset_info": [{"config_name": "binary_classification", "features": [{"name": "image", "dtype": "image"}, {"name": "metadata", "struct": [{"name": "id", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "file_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "fickr_url", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "date_captured", "dtype": "string"}, {"name": "case_id", "dtype": "int32"}, {"name": "video_id", "dtype": "int32"}, {"name": "frame_id", "dtype": "int32"}, {"name": "from_seconds", "dtype": "int32"}, {"name": "to_seconds", "dtype": "int32"}]}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "no_pathology", "1": "endometriosis"}}}}], "splits": [{"name": "train", "num_bytes": 4524957, "num_examples": 13811}], "download_size": 895554144, "dataset_size": 4524957}, {"config_name": "multiclass_classification", "features": [{"name": "image", "dtype": "image"}, {"name": "metadata", "struct": [{"name": "id", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "file_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "fickr_url", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "date_captured", "dtype": "string"}, {"name": "case_id", "dtype": "int32"}, {"name": "video_id", "dtype": "int32"}, {"name": "frame_id", "dtype": "int32"}, {"name": "from_seconds", "dtype": "int32"}, {"name": "to_seconds", "dtype": "int32"}]}, {"name": "labels", "dtype": {"class_label": {"names": {"0": 
"No-Pathology", "1": "6.1.1.1_Endo-Peritoneum", "2": "6.1.1.2_Endo-Ovar", "3": "6.1.1.3_Endo-TIE", "4": "6.1.1.4_Endo-Uterus"}}}}], "splits": [{"name": "train", "num_bytes": 4524957, "num_examples": 13811}], "download_size": 895554144, "dataset_size": 4524957}]} | 2022-12-29T12:19:47+00:00 |
1b42948377bbb5f5adacbfa481b9ece3781bf854 | # AutoTrain Dataset for project: disparities_pubmed_mit
## Dataset Description
This dataset has been automatically processed by AutoTrain for project disparities_pubmed_mit.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "IDH1-R132H acts as a tumor suppressor in glioma via epigenetic up-regulation of the DNA damage response. Patients with glioma whose tumors carry a mutation in isocitrate dehydrogenase 1 (IDH1(R132H)) are younger at diagnosis and live longer. IDH1 mutations co-occur with other molecular lesions, such as 1p/19q codeletion, inactivating mutations in the tumor suppressor protein 53 (TP53) gene, and loss-of-function mutations in alpha thalassemia/mental retardation syndrome X-linked gene (ATRX). All adult low-grade gliomas (LGGs) harboring ATRX loss also express the IDH1(R132H) mutation. The current molecular classification of LGGs is based, partly, on the distribution of these mutations. We developed a genetically engineered mouse model harboring IDH1(R132H), TP53 and ATRX inactivating mutations, and activated NRAS G12V. Previously, we established that ATRX deficiency, in the context of wild-type IDH1, induces genomic instability, impairs nonhomologous end-joining DNA repair, and increases sensitivity to DNA-damaging therapies. In this study, using our mouse model and primary patient-derived glioma cultures with IDH1 mutations, we investigated the function of IDH1(R132H) in the context of TP53 and ATRX loss. We discovered that IDH1(R132H) expression in the genetic context of ATRX and TP53 gene inactivation (i) increases median survival in the absence of treatment, (ii) enhances DNA damage response (DDR) via epigenetic up-regulation of the ataxia-telangiectasia-mutated (ATM) signaling pathway, and (iii) elicits tumor radioresistance. Accordingly, pharmacological inhibition of ATM or checkpoint kinases 1 and 2, essential kinases in the DDR, restored the tumors' radiosensitivity. Translation of these findings to patients with IDH1(132H) glioma harboring TP53 and ATRX loss could improve the therapeutic efficacy of radiotherapy and, consequently, patient survival.",
"target": 0
},
{
"text": "Activation of prolyl hydroxylase-2 for stabilization of mitochondrial stress along with simultaneous downregulation of HIF-1\u00ce\u00b1/FASN in ER\u00c2\u00a0+\u00c2\u00a0breast cancer subtype. The present study was undertaken to inquest the chemical activation of prolyl hydroxylase-2 for the curtailment of hypoxia-inducible factor-1\u00ce\u00b1 and fatty acid synthase. It was well documented that hypoxia-inducible factor-1\u00ce\u00b1 and fatty acid synthase were overexpressed in mammary gland carcinomas. After screening a battery of compounds, BBAP-2 was retrieved as a potential prolyl hydroxylase-2 activator and validates its activity using ER\u00c2\u00a0+\u00c2\u00a0MCF-7 cell line and n-methyl-n-nitrosourea-induced rat in vivo model, respectively. BBAP-2 was palpable for the morphological characteristics of apoptosis along with changes in the mitochondrial intergrity as visualized by acridine orange/ethidium bromide and JC-1 staining against ER\u00c2\u00a0+\u00c2\u00a0MCF-7 cells. BBAP-2 also arrest the cell cycle of ER\u00c2\u00a0+\u00c2\u00a0MCF-7 cells at G2/M phase. Afterward, BBAP-2 has scrutinized against n-methyl-n-nitrosourea-induced mammary gland carcinoma in albino Wistar rats. BBAP-2 restored the morphological architecture when screened through carmine staining, haematoxylin and eosin staining, and scanning electron microscopy. BBAP-2 also delineated the markers of oxidative stress favourably. The immunoblotting and mRNA expression analysis validated that BBAP-2 has a potentialty activate the prolyl hydroxylase-2 with sequential downregulating effect on hypoxia-inducible factor-1\u00ce\u00b1 and its downstream checkpoint. BBAP-2 also fostered apoptosis through mitochondrial-mediated death pathway. The present study elaborates the chemical activation of prolyl hydroxylase-2 by which the increased expression of HIF-1\u00ce\u00b1 and FASN can be reduced in mammary gland carcinoma.",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 807 |
| valid | 203 |
| eber/autotrain-data-disparities_pubmed_mit | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2022-12-10T13:08:55+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-12-10T13:34:47+00:00 |
bfebd41171043bec836113629cbb5480db40b250 |
# Dataset Card for PANDA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/facebookresearch/ResponsibleNLP/
- **Paper:** https://arxiv.org/abs/2205.12586
- **Point of Contact:** [email protected], [email protected], [email protected], [email protected]
### Dataset Summary
PANDA (Perturbation Augmentation NLP DAtaset) consists of approximately 100K pairs of crowdsourced human-perturbed text snippets (original, perturbed). Annotators were given selected terms and target demographic attributes, and instructed to rewrite text snippets along three demographic axes: gender, race and age, while preserving semantic meaning. Text snippets were sourced from a range of text corpora (BookCorpus, Wikipedia, ANLI, MNLI, SST, SQuAD). PANDA can be used for training a learned perturber that can rewrite text with control. PANDA can also be used to evaluate the demographic robustness of language models.
### Languages
English
## Dataset Structure
### Data Instances
- Size of training data: 198.6 MB
- Size of validation data: 22.2 MB
Examples of data instances:
```
{
"original": "the moment the girl mentions the subject she will be yours .",
"selected_word": "girl",
"target_attribute": "man",
"perturbed": "the moment the boy mentions the subject he will be yours.\n\n"
}
{
"original": "are like magic tricks, says the New York Times ' Michael Kimmelman. <SEP> Michael Kimmelman has never likened anything to a magic trick.",
"selected_word": "Michael",
"target_attribute": "woman",
"perturbed": "are like magic tricks, says the New York Times' Michelle Kimmelman. <SEP> Michelle Kimmelman has never likened anything to a magic trick."
}
{
"original": "lilly ann looked at him asking herself how he cold not know .",
"selected_word": "he",
"target_attribute": "non-binary",
"perturbed": "Lilly Ann looked at them, asking herself how they could not know."
}
```
Examples with `<SEP>` tokens are the result of concatenation of text fields in source datasets, such as the premise and hypothesis of NLI datasets.
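A quick structural sanity check over records in this format can be sketched as follows (an illustrative helper, not part of the official release; the field names are taken from the examples above):

```python
REQUIRED_FIELDS = {"original", "selected_word", "target_attribute", "perturbed"}

def is_well_formed(example: dict) -> bool:
    """Check a PANDA-style record: all fields present as non-empty strings,
    and the selected demographic term actually occurs in the source text."""
    if not REQUIRED_FIELDS.issubset(example):
        return False
    if not all(isinstance(example[f], str) and example[f] for f in REQUIRED_FIELDS):
        return False
    return example["selected_word"] in example["original"]

record = {
    "original": "the moment the girl mentions the subject she will be yours .",
    "selected_word": "girl",
    "target_attribute": "man",
    "perturbed": "the moment the boy mentions the subject he will be yours.",
}
print(is_well_formed(record))  # True
```

Such a check is useful when filtering crowdsourced rewrites before training a perturber.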
### Data Fields
- `original`: Source (unperturbed) text snippet, sampled from a variety of English text corpora.
- `selected_word`: Demographic term that needs to be perturbed.
- `target_attribute`: Target demographic category.
- `perturbed`: Perturbed text snippet, which is the source text rewritten to alter the selected word along the specified target demographic attribute. For example, if the selected word is "Lily" and target is "man", all references to "Lily" (e.g. pronouns) in the source text are altered to refer to a man. Note that some examples may be unchanged, either due to the lack of demographic information, or ambiguity of the task; given the subjective nature of identifying demographic terms and attributes, we allow some room for interpretation provided the rewrite does not perpetuate harmful social biases.
### Data Splits
- `train`: 94966
- `valid`: 10551
## Dataset Creation
### Curation Rationale
We constructed PANDA to create and release the first large scale dataset of demographic text perturbations. This enables the training of the first neural perturber model, which outperforms heuristic approaches.
### Source Data
#### Initial Data Collection and Normalization
We employed 524 crowdworkers to create PANDA examples over the span of several months. Annotators were tasked with rewriting text snippets sourced from popular English text corpora. For more information on the task UI and methodology, see our paper *Perturbation Augmentation for Fairer NLP*.
### Annotations
#### Annotation process
PANDA was collected in a 3 stage annotation process:
1. Span identification: Annotators select demographic terms in source text samples.
2. Attribute identification: Identified demographic terms are annotated for gender/race/age attributes, such as "man", "Asian", "old" etc.
3. Rewrite text: Annotators rewrite text by modifying the selected entity to reflect the target demographic attribute. Annotators are encouraged to create minimal edits, e.g. "George" -> "Georgina".
The annotation process is explained in more detail in our paper.
#### Who are the annotators?
PANDA was annotated by English speaking Amazon Mechanical Turk workers. We included a voluntary demographic survey along with annotation tasks that did not contribute to pay. For a breakdown of annotators' demographic identities, see our paper.
### Personal and Sensitive Information
PANDA does not contain identifying information about annotators.
## Considerations for Using the Data
### Social Impact of Dataset
By releasing the first large scale dataset of demographic text rewrites, we hope to enable exciting future work in fairness in NLP toward more scalable, automated approaches to reducing biases in datasets and language models.
Furthermore, PANDA aims to be diverse in text domain and demographic representation. PANDA includes a large proportion of non-binary gender annotations, which are underrepresented in existing text corpora and prior fairness datasets. Text examples vary in length, with examples spanning single sentences and long Wikipedia passages, and are sourced from a variety of text corpora that can be used to train a domain agnostic perturber.
### Discussion of Biases
For this work, we sourced our annotated data from a range of sources to ensure: (i) permissive data licensing, (ii) that our perturber works well on downstream applications such as NLU classification tasks, and (iii) that our perturber can handle data from multiple domains to be maximally useful. However, we acknowledge that there may be other existing biases in PANDA as a result of our data sourcing choices. For example, it is possible that data sources like BookWiki primarily contain topics of interest to people with a certain amount of influence and educational access, people from the so-called “Western world”, etc. Other topics that might be interesting and relevant to others may be missing or only present in limited quantities. The present approach can only weaken associations inherited from the data sources we use, but in future work, we would love to explore the efficacy of our approach on text from other sources that contain a wider range of topics and text domain differences.
### Other Known Limitations
Our augmentation process can sometimes create nonexistent versions of real people, such as discussing an English King Victor (not a historical figure), as opposed to a Queen Victoria (a historical figure). We embrace the counterfactuality of many of our perturbations, but the lack of guaranteed factuality means that our approach may not be well-suited to all NLP tasks. For example, it might not be suitable for augmenting misinformation detection datasets, because peoples’ names, genders, and other demographic information should not be changed.
## Additional Information
### Dataset Curators
Rebecca Qian, Candace Ross, Jude Fernandes, Douwe Kiela and Adina Williams.
### Licensing Information
PANDA is released under the MIT license.
### Citation Information
https://arxiv.org/abs/2205.12586
### Contributions
Thanks to [@Rebecca-Qian](https://github.com/Rebecca-Qian) for adding this dataset. | facebook/panda | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"fairness",
"nlp",
"demographic",
"diverse",
"gender",
"non-binary",
"race",
"age",
"arxiv:2205.12586",
"region:us"
]
| 2022-12-10T13:54:23+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "winobias", "pretty_name": "panda", "tags": ["fairness", "nlp", "demographic", "diverse", "gender", "non-binary", "race", "age"]} | 2022-12-10T14:01:45+00:00 |
15cb16ec1976aee7f952b14e61e9f4bcadc33307 | Nishanth22222/Dreambooth-satwik | [
"license:other",
"region:us"
]
| 2022-12-10T13:56:01+00:00 | {"license": "other"} | 2022-12-10T13:56:01+00:00 |
|
88072735c300736388f71ee6ff879d5fd2e494bc | kllmagn/memEditor_Captions | [
"license:openrail",
"region:us"
]
| 2022-12-10T15:24:06+00:00 | {"license": "openrail"} | 2022-12-10T15:24:06+00:00 |
|
803987ef2e01a6d1bdeb30aa8be1b7d0c080de80 | zmao/food_img_caption_small | [
"license:other",
"region:us"
]
| 2022-12-10T16:28:54+00:00 | {"license": "other"} | 2022-12-10T16:32:05+00:00 |
|
c456131bbd846b9becfc9d028911b46967a6cbf7 | # Dataset Card for "trdg_random_en_zh_text_recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | priyank-m/trdg_random_en_zh_text_recognition | [
"region:us"
]
| 2022-12-10T16:42:28+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12592040013.0, "num_examples": 410000}], "download_size": 12595188446, "dataset_size": 12592040013.0}} | 2022-12-14T06:05:17+00:00 |
7134135b90e8c44c55fe5e76c2a2426b74ab3fb9 | # Dataset Card for "uniform_top"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/uniform_top | [
"region:us"
]
| 2022-12-10T17:54:27+00:00 | {"dataset_info": {"features": [{"name": "utterance", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "semantic_parse", "dtype": "string"}], "splits": [{"name": "eval_en", "num_bytes": 283034, "num_examples": 2235}, {"name": "test_en", "num_bytes": 554754, "num_examples": 4386}, {"name": "train_en", "num_bytes": 1973838, "num_examples": 15667}, {"name": "eval_de", "num_bytes": 242996, "num_examples": 1815}, {"name": "test_de", "num_bytes": 471105, "num_examples": 3549}, {"name": "train_de", "num_bytes": 1804566, "num_examples": 13424}, {"name": "eval_es", "num_bytes": 207924, "num_examples": 1527}, {"name": "test_es", "num_bytes": 402468, "num_examples": 2998}, {"name": "train_es", "num_bytes": 1473681, "num_examples": 10934}, {"name": "eval_fr", "num_bytes": 208175, "num_examples": 1577}, {"name": "test_fr", "num_bytes": 427290, "num_examples": 3193}, {"name": "train_fr", "num_bytes": 1578716, "num_examples": 11814}, {"name": "eval_hi", "num_bytes": 435694, "num_examples": 2012}, {"name": "test_hi", "num_bytes": 576384, "num_examples": 2789}, {"name": "train_hi", "num_bytes": 2356893, "num_examples": 11330}, {"name": "eval_th", "num_bytes": 363531, "num_examples": 1671}, {"name": "test_th", "num_bytes": 586408, "num_examples": 2765}, {"name": "train_th", "num_bytes": 2303175, "num_examples": 10759}, {"name": "eval_cstop", "num_bytes": 74530, "num_examples": 559}, {"name": "test_cstop", "num_bytes": 153728, "num_examples": 1167}, {"name": "train_cstop", "num_bytes": 540817, "num_examples": 4077}, {"name": "eval_top_v2", "num_bytes": 2565386, "num_examples": 17160}, {"name": "test_top_v2", "num_bytes": 5759599, "num_examples": 38785}, {"name": "train_top_v2", "num_bytes": 18815125, "num_examples": 124597}, {"name": "validation_hinglish_top", "num_bytes": 220386, "num_examples": 1390}, {"name": "test_hinglish_top", "num_bytes": 1069867, "num_examples": 6513}, {"name": "train_hinglish_top", "num_bytes": 478317, 
"num_examples": 2993}, {"name": "eval_cstop_artificial", "num_bytes": 70248, "num_examples": 559}, {"name": "test_cstop_artificial", "num_bytes": 144553, "num_examples": 1167}, {"name": "train_cstop_artificial", "num_bytes": 508926, "num_examples": 4077}], "download_size": 17110962, "dataset_size": 46652114}} | 2022-12-10T17:59:40+00:00 |
ae849aed5b25fe086d2fedc490b772dd59bcdeb4 | Annanay/aml_song_lyrics_balanced | [
"license:unknown",
"region:us"
]
| 2022-12-10T17:55:24+00:00 | {"license": "unknown"} | 2022-12-13T00:49:59+00:00 |
|
e76145e15c14851addd36013957a9794749b201a | # Dataset Card for "olm-october-2022-tokenized-1024-suffix-array-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized-1024-suffix-array-dedup | [
"region:us"
]
| 2022-12-10T19:38:35+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 81147320856, "num_examples": 13181826}], "download_size": 21892490583, "dataset_size": 81147320856}} | 2022-12-11T07:10:35+00:00 |
5bde9b7bee121458d38ddc9acc8e8513cbe28a17 | # Dataset Card for "mediaspeech-with-cv-tr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zeynepgulhan/mediaspeech-with-cv-tr | [
"region:us"
]
| 2022-12-10T19:49:34+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10631918399.222, "num_examples": 38638}, {"name": "test", "num_bytes": 278050703.0, "num_examples": 10143}], "download_size": 1643709639, "dataset_size": 10909969102.222}} | 2022-12-10T20:07:07+00:00 |
c3cd8df5b0453d8f035c0e9244ab0be3b5ae8d7e |
# Numerai Datasets
This is a mirror of the official numerai dataset - NOT OFFICIALLY SUPPORTED OR MAINTAINED BY NUMERAI.
Official source: https://numer.ai/data
Use the official source to submit your predictions; no guarantees are made for correctness or completeness.
This is maintained by the Numerai community. | Numerati/numerai-datasets | [
"task_categories:time-series-forecasting",
"task_categories:tabular-classification",
"task_categories:other",
"task_ids:multivariate-time-series-forecasting",
"task_ids:tabular-single-column-regression",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:other-my-multilinguality",
"source_datasets:original",
"license:unknown",
"numerai",
"stock market",
"hedge fund",
"obfuscated",
"region:us"
]
| 2022-12-10T19:58:16+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": ["unknown"], "multilinguality": ["other-my-multilinguality"], "size_categories": [], "source_datasets": ["original"], "task_categories": ["time-series-forecasting", "tabular-classification", "other"], "task_ids": ["multivariate-time-series-forecasting", "tabular-single-column-regression"], "pretty_name": "Numerai Dataset", "tags": ["numerai", "stock market", "hedge fund", "obfuscated"]} | 2022-12-11T13:11:50+00:00 |
6d2f849fbb120f3f03761ecaa66cc3248263f023 | PinkysMusing/Resources | [
"license:cc",
"region:us"
]
| 2022-12-10T20:23:11+00:00 | {"license": "cc"} | 2022-12-10T20:23:11+00:00 |
|
7e3508b23eb2b39d7f79a8b1fc583a3c20309628 | train-eval-index:
- config: default
  task: token-classification
  task_id: entity_extraction
  splits:
    eval_split: test
  col_mapping:
    tokens: tokens
    labels: tags | breadlicker45/yahoo_answers_v2 | [
"license:mit",
"region:us"
]
| 2022-12-11T00:54:12+00:00 | {"license": "mit"} | 2023-02-01T12:20:18+00:00 |
74d8e7c0def08e69f46b4638546f0debc7c3d4f2 | 1k images for the class "artstyle" that were made with & for the [JoePenna Dreambooth repo](https://github.com/JoePenna/Dreambooth-Stable-Diffusion) with Stable Diffusion 1.5
```
seed: 10
ddim_eta: 0.0
scale: 10.0
ddim_steps: 50
```
| proxima/SD_1-5_reg_images | [
"license:creativeml-openrail-m",
"region:us"
]
| 2022-12-11T01:31:03+00:00 | {"license": "creativeml-openrail-m"} | 2022-12-11T03:00:05+00:00 |
a37b1891609c0376fa81eced756e7863e1bd873b | # Dataset Card for "oxford-flowers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nelorth/oxford-flowers | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"source_datasets:https://www.robots.ox.ac.uk/~vgg/data/flowers",
"license:unknown",
"flowers",
"oxford",
"region:us"
]
| 2022-12-11T02:14:19+00:00 | {"license": ["unknown"], "source_datasets": "https://www.robots.ox.ac.uk/~vgg/data/flowers", "task_categories": ["image-classification", "unconditional-image-generation"], "pretty_name": "Oxford Flowers Dataset", "tags": ["flowers", "oxford"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "1", "1": "10", "2": "100", "3": "101", "4": "102", "5": "11", "6": "12", "7": "13", "8": "14", "9": "15", "10": "16", "11": "17", "12": "18", "13": "19", "14": "2", "15": "20", "16": "21", "17": "22", "18": "23", "19": "24", "20": "25", "21": "26", "22": "27", "23": "28", "24": "29", "25": "3", "26": "30", "27": "31", "28": "32", "29": "33", "30": "34", "31": "35", "32": "36", "33": "37", "34": "38", "35": "39", "36": "4", "37": "40", "38": "41", "39": "42", "40": "43", "41": "44", "42": "45", "43": "46", "44": "47", "45": "48", "46": "49", "47": "5", "48": "50", "49": "51", "50": "52", "51": "53", "52": "54", "53": "55", "54": "56", "55": "57", "56": "58", "57": "59", "58": "6", "59": "60", "60": "61", "61": "62", "62": "63", "63": "64", "64": "65", "65": "66", "66": "67", "67": "68", "68": "69", "69": "7", "70": "70", "71": "71", "72": "72", "73": "73", "74": "74", "75": "75", "76": "76", "77": "77", "78": "78", "79": "79", "80": "8", "81": "80", "82": "81", "83": "82", "84": "83", "85": "84", "86": "85", "87": "86", "88": "87", "89": "88", "90": "89", "91": "9", "92": "90", "93": "91", "94": "92", "95": "93", "96": "94", "97": "95", "98": "96", "99": "97", "100": "98", "101": "99"}}}}], "splits": [{"name": "train", "num_bytes": 308119477.446, "num_examples": 7169}, {"name": "test", "num_bytes": 43247670.14, "num_examples": 1020}], "download_size": 346597973, "dataset_size": 351367147.58599997}} | 2022-12-11T02:38:31+00:00 |
5743e7dbd4f86eed44e8bcf754035b7277f0bdf1 | A jsonlines dataset of 98000 prompt-completion pairs for algebra questions.
The prompt contains a question, and the completion contains the answer with its rationale.
Originally taken from https://www.deepmind.com/open-source/aqua-rat for fine-tuning GPT-3, but it can be used for tasks of your choice.
For questions, open a discussion on community. | Chinar/AQuA-RAT | [
"license:mit",
"region:us"
]
| 2022-12-11T04:21:17+00:00 | {"license": "mit"} | 2022-12-11T04:30:40+00:00 |
2d102e61dc886c8557c893d8e0175f809011b117 |
# wukong100m
## 简介 Brief Introduction
取自Noah-Wukong多语言多模态数据集中的中文部分,一共100M个图文对。
A subset from Noah-Wukong (a multimodal dataset), around 100M image-text pairs (only Chinese).
## 数据集信息 Dataset Information
大约一共100M个中文图文对。大约占用16GB空间(仅仅是url等文本信息,不包含图片)。下载成功率在80%左右。(虽然我没有统计下载之后会占用多少空间,但是,可以说非常非常大)
Around 100M Chinese image-text pairs in total, taking about 16 GB of space (text information such as URLs only; images not included). The download success rate is around 80%. (The total size after downloading the images has not been measured, but it is safe to say it is very, very large.)
- Homepage: [Noah-Wukong](https://wukong-dataset.github.io/wukong-dataset/index.html)
## 下载 Download
```bash
mkdir wukong100m && cd wukong100m
for i in {00000..00031}; do wget https://huggingface.co/datasets/wanng/wukong100m/resolve/main/data/train-$i-of-00032.parquet; done
cd ..
```
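For scripted environments, the same shard list can be generated in Python (a minimal sketch mirroring the wget loop above; the shard naming scheme is taken from those URLs):

```python
# Build the 32 parquet shard URLs used by the wget loop above.
BASE = "https://huggingface.co/datasets/wanng/wukong100m/resolve/main/data"

def shard_urls(num_shards: int = 32) -> list[str]:
    """Return the URL of every train shard, zero-padded to five digits."""
    return [
        f"{BASE}/train-{i:05d}-of-{num_shards:05d}.parquet"
        for i in range(num_shards)
    ]

urls = shard_urls()
print(urls[0])    # ends with train-00000-of-00032.parquet
print(len(urls))  # 32
```

The resulting list can be fed to any parallel downloader instead of the sequential wget loop.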
## License
CC BY-NC-SA 4.0
| wanng/wukong100m | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:zh",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| 2022-12-11T04:26:12+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["zh"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "task_categories": ["feature-extraction"], "pretty_name": "Wukong100M"} | 2022-12-11T06:24:05+00:00 |
39e8d041a477f1b06fff8c335bc13304fb4b350c | chloeliu/lyrics | [
"license:cc",
"region:us"
]
| 2022-12-11T04:46:33+00:00 | {"license": "cc"} | 2022-12-11T04:53:42+00:00 |
|
e266c66713f0d7a61cc4f10d9abf34de0cb17f5f | # Caution! This dataset contains explicit language and fraud information. Use at your own risk!
For AutoTrain use: please select Text Classification (Binary) as Task.
## What is included
- conversations in Chinese under tag 0
- spam conversations under tag 1
## Where does the data come from
- part of the data came from conversations in Chinese Telegram groups
- part of them are from logging channels of anti-spam bots
## How much data is included
- A total of 9.9k conversations are inside
- ~4700 conversations are marked as normal
- the rest are marked as spam
| paulkm/chinese_conversation_and_spam | [
"task_categories:text-classification",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:apache-2.0",
"conversation",
"spam",
"region:us"
]
| 2022-12-11T04:52:46+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "chinese_conversation_and_spam", "tags": ["conversation", "spam"]} | 2022-12-12T08:31:27+00:00 |
8e7d8a4957cbb40b039ceda36d34c27685d862da | This dataset contains speech(wave files) from a single woman and the tsv file contain transcript of the speech files
| aungmyatv8/mm_speech | [
"license:mit",
"region:us"
]
| 2022-12-11T05:25:56+00:00 | {"license": "mit"} | 2022-12-11T05:59:19+00:00 |
348058c0c2d081957ee1edcf562fdc13e0b68c5e | adehaze/petrified | [
"license:other",
"region:us"
]
| 2022-12-11T06:34:10+00:00 | {"license": "other"} | 2022-12-11T06:34:10+00:00 |
|
7529fc965175df799b0f93215c933894ad892953 | Xingwei/agnews_with_knowledge | [
"license:apache-2.0",
"region:us"
]
| 2022-12-11T08:03:21+00:00 | {"license": "apache-2.0"} | 2022-12-11T08:10:41+00:00 |
|
5c04c9b8074513ff2c36a6715a54092b7056b728 | # Dataset Card for "olm-october-2022-tokenized-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized-128 | [
"region:us"
]
| 2022-12-11T08:33:27+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 80492759880, "num_examples": 103195846}], "download_size": 23151736447, "dataset_size": 80492759880}} | 2022-12-11T17:51:02+00:00 |
26e578fb65c3d56942eee8976e4889921152fa6b | # List of repositories included in the dataset
| Project | Language | Fetched Count | Url |
|-----------------------|------------|---------------|----------------------------------------------------|
| Moby | Go | 5 943 | https://github.com/moby/moby |
| Rxjava                | Java       | 516           | https://github.com/ReactiveX/RxJava                |
| Spring-framework      | Java       | 2 529         | https://github.com/spring-projects/spring-framework |
| Chart.js              | Javascript | 641           | https://github.com/chartjs/Chart.js                |
| Three.js              | Javascript | 1 512         | https://github.com/mrdoob/three.js                 |
| Redux                 | Javascript | 592           | https://github.com/reduxjs/redux                   |
| React-native          | Javascript | 2 901         | https://github.com/facebook/react-native           |
| React                 | Javascript | 2 335         | https://github.com/facebook/react                  |
| Pdf.js                | Javascript | 966           | https://github.com/mozilla/pdf.js                  |
| Node                  | Javascript | 7 779         | https://github.com/nodejs/node                     |
| Next.js               | Javascript | 1 183         | https://github.com/vercel/next.js                  |
| Moment | Javascript | 422 | https://github.com/moment/moment |
| Video.js              | Javascript | 254           | https://github.com/videojs/video.js                |
| Immutable-js | Javascript | 112 | https://github.com/immutable-js/immutable-js |
| Jquery | Javascript | 527 | https://github.com/jquery/jquery |
| Webpack | Javascript | 1 715 | https://github.com/webpack/webpack |
| Angular.js            | Javascript | 1 938         | https://github.com/angular/angular.js              |
| Atom | Javascript | 1 090 | https://github.com/atom/atom |
| Ember.js              | Javascript | 2 094         | https://github.com/emberjs/ember.js                |
| Axios | Javascript | 110 | https://github.com/axios/axios |
| D3 | Javascript | 371 | https://github.com/d3/d3 |
| Framework             | PHP        | 4 668         | https://github.com/laravel/framework               |
| Cakephp | PHP | 5 827 | https://github.com/cakephp/cakephp |
| Laravel | PHP | 769 | https://github.com/laravel/laravel |
| Transformers          | Python     | 1 735         | https://github.com/huggingface/transformers        |
| Python                | Python     | 330           | https://github.com/TheAlgorithms/Python            |
| Airflow               | Python     | 1 592         | https://github.com/apache/airflow                  |
| Spacy                 | Python     | 2 033         | https://github.com/explosion/spaCy                 |
| Freecodecamp | Python | 2 773 | https://github.com/freeCodeCamp/freeCodeCamp |
| Glances               | Python     | 494           | https://github.com/nicolargo/glances               |
| Django-rest-framework | Python     | 1 084         | https://github.com/encode/django-rest-framework    |
| Libcloud              | Python     | 1 104         | https://github.com/apache/libcloud                 |
| Numpy | Python | 2 512 | https://github.com/numpy/numpy |
| Flask                 | Python     | 277           | https://github.com/pallets/flask                   |
| Celery | Python | 565 | https://github.com/celery/celery |
| Keras                 | Python     | 1 466         | https://github.com/keras-team/keras                |
| Models                | Python     | 930           | https://github.com/tensorflow/models               |
| Django | Python | 170 | https://github.com/django/django |
| Brew                  | Ruby       | 5 560         | https://github.com/Homebrew/brew                   |
| Rails | Ruby | 8 421 | https://github.com/rails/rails |
| mamiksik/processed-commit-diffs | [
"region:us"
]
| 2022-12-11T11:59:10+00:00 | {"dataset_info": {"features": [{"name": "content_type", "dtype": "string"}, {"name": "main_lang", "dtype": "string"}, {"name": "message", "dtype": "string"}, {"name": "sha", "dtype": "string"}, {"name": "patch", "dtype": "string"}, {"name": "file_count", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 260603276.0, "num_examples": 62272}, {"name": "test", "num_bytes": 32575409.5, "num_examples": 7784}, {"name": "valid", "num_bytes": 32575409.5, "num_examples": 7784}], "download_size": 112191621, "dataset_size": 325754095.0}} | 2023-01-26T12:17:28+00:00 |
7a44f9cddbe70e5d0c62f7f4b161a0d43f81aa0b | # Dataset Card for "trdg_wikipedia_en_text_recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | priyank-m/trdg_wikipedia_en_text_recognition | [
"region:us"
]
| 2022-12-11T12:35:41+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3330256280.625, "num_examples": 105899}], "download_size": 3330793505, "dataset_size": 3330256280.625}} | 2022-12-14T06:46:50+00:00 |
b64dff025eaed409999345c6182a61482e2c68f6 |
# Dataset Card for [Malayalam Asr Corpus]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset. | parambharat/malayalam_asr_corpus | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|common_voice",
"source_datasets:extended|openslr",
"language:ml",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-11T12:46:03+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ml"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|common_voice", "extended|openslr"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Malayalam ASR Corpus", "tags": []} | 2022-12-11T13:05:27+00:00 |
7307019ccdb228717747c98e7aeed3c2516f1bd9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@peter](https://huggingface.co/peter) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-0658a1-2419875379 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-11T13:10:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "validation", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-12-11T14:22:44+00:00 |
ba7ca26018707ce23888317f1e312a02882307e2 | ## Context
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grained image categorization, a challenging problem as certain dog breeds have near identical features or differ in colour and age. <b>I have used only images, so this does not contain any labels</b>.
## Content
Number of images: 20,580
## Acknowledgements
The original data source is found at http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Primary:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [pdf] [poster] [BibTex]
Secondary:
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. [pdf] [BibTex] | ksaml/Stanford_dogs | [
"license:other",
"region:us"
]
| 2022-12-11T15:31:02+00:00 | {"license": "other"} | 2022-12-11T17:55:02+00:00 |
8be5b53d571a873ea97cf6bd3436cace4923d854 | # Dataset Card for "litbank-entities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | garutyunov/litbank-entities | [
"region:us"
]
| 2022-12-11T15:48:23+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}, {"name": "ner_spans", "sequence": {"sequence": "int32"}}], "splits": [{"name": "train", "num_bytes": 6575841, "num_examples": 100}], "download_size": 2042117, "dataset_size": 6575841}} | 2022-12-11T15:48:46+00:00 |
d65848886a1d4105d2695da01260503a33f49ba2 | First commit | kirah/conjunto_datos | [
"region:us"
]
| 2022-12-11T16:37:10+00:00 | {} | 2022-12-11T16:38:33+00:00 |
b916406f6fecdee5f24d98dbc419b2f3acca0bca | breadlicker45/yahoo-answers-3k-lines | [
"license:mit",
"region:us"
]
| 2022-12-11T16:41:53+00:00 | {"license": "mit"} | 2022-12-11T16:42:13+00:00 |
|
183e21353d6523e7569b255c9d6486e92c75b853 | # AutoTrain Dataset for project: yahoo-answer-small
## Dataset Description
This dataset has been automatically processed by AutoTrain for project yahoo-answer-small.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "how do you get a girl to like you? and how can you make her your girlfriend?",
"target": "Be yourself. It's the oldest and best advice. She may still not like you, but that's the risk, and you keep your manliness and dignity. Never, never forget this."
},
{
"text": "how long is a bacterium's life?",
"target": "It depends on the bacterium. For E. coli (common lab bacteria) 20-30 minutes is an average doubling time, but different strains vary.\\n\\nI heard something somewhere about a weird form of bacteria that lives miles underground in granite formations and only divides once every ten thousand years, or something crazy like that. I can't give you a source, it's just a freaky thing off the top of my head that I haven't gone to the trouble to confirm. My contention is that if reincarnation is true, that would be the *worst* thing to come back as. So mind your karma.\\n\\nPart of the reason it would be the worst is that bacteria reproduce by one cell dividing into two, so as long as there are any of that strain still alive, it hasn't really died.\\n\\nThey can die though. In a liquid culture you can tell because of a lot of turbidity (cloudiness) some of which is from cells and some from debris from dead cells. Or, you could have agar plates that get really nasty and dried up, and most of those bacteria are probably dead. Or you could tell because they look really crappy under a microscope. If you try to streak it and grow it on a plate and it doesn't grow, it's probably dead."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2399 |
| valid | 600 |
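
The sample instances above are plain JSON records carrying exactly the two fields listed under "Dataset Fields". As a minimal, illustrative sketch (only the field names `text` and `target` come from this card; the record contents below are abbreviated from the sample above), such records can be read with Python's standard library:

```python
import json

# Abbreviated versions of the sample records shown above.
raw = """[
  {"text": "how do you get a girl to like you?",
   "target": "Be yourself. It's the oldest and best advice."},
  {"text": "how long is a bacterium's life?",
   "target": "It depends on the bacterium."}
]"""

records = json.loads(raw)

# Every record carries exactly the two fields from the "Dataset Fields" section.
for rec in records:
    assert set(rec) == {"text", "target"}

print(len(records))
```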
| breadlicker45/autotrain-data-yahoo-answer-small | [
"task_categories:summarization",
"region:us"
]
| 2022-12-11T16:52:12+00:00 | {"task_categories": ["summarization"]} | 2022-12-11T16:52:43+00:00 |
3722ef3775569c2342568bd936598fcc95714758 | strajk/dataset-test | [
"license:mit",
"region:us"
]
| 2022-12-11T17:28:45+00:00 | {"license": "mit"} | 2022-12-11T17:28:45+00:00 |
|
491baa46704fcd730eadbe1ca16baa0270bde1c0 | # Dataset Card for "octnormal200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | g30rv17ys/octnormal200 | [
"region:us"
]
| 2022-12-11T18:12:07+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 13541578.0, "num_examples": 200}], "download_size": 13542226, "dataset_size": 13541578.0}} | 2022-12-11T18:13:46+00:00 |
491487c4258287aa2be7cc179b7928768be7008d | Nooon/Donate_a_cry | [
"license:mit",
"region:us"
]
| 2022-12-11T18:24:59+00:00 | {"license": "mit"} | 2022-12-11T18:43:22+00:00 |
|
19b28e3bba091621752b83e7b055e45d13b2ffa8 |
# MultiPL-E Evaluation Raw Data
This is the raw data for the MultiPL-E paper: https://arxiv.org/abs/2208.08227 | nuprl/MultiPL-E-raw-data | [
"license:bsd-3-clause",
"arxiv:2208.08227",
"region:us"
]
| 2022-12-11T19:07:19+00:00 | {"license": "bsd-3-clause"} | 2022-12-20T18:40:05+00:00 |
2ebbb0ad8dbe9ad5d9fd96fc70147a2e9eec3108 | Nooon/Donate_a_cryy | [
"license:mit",
"region:us"
]
| 2022-12-11T19:39:48+00:00 | {"license": "mit"} | 2022-12-12T23:50:42+00:00 |
|
383573df080a54f0efabf3c081944a3864018703 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | ZihaoLin/zhlds | [
"task_categories:image-classification",
"task_categories:object-detection",
"task_ids:multi-class-image-classification",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
]
| 2022-12-11T20:34:47+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["other"], "multilinguality": [], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-classification", "object-detection"], "task_ids": ["multi-class-image-classification"], "pretty_name": "This is a test version for ELEVATER benchmark.", "tags": []} | 2022-12-16T20:26:09+00:00 |
836b934d60ea2c4aa651524d368e73eff477f2d6 | scvi-tools/DATASET-FOR-UNIT-TESTING-1 | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-11T21:29:36+00:00 | {"license": "cc-by-4.0"} | 2022-12-11T21:31:24+00:00 |
|
217a35a927ac100438e4021b1edac8a53327c387 | AlienKevin/source_han_sans_ja_extra_light_left_right | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-11T21:29:59+00:00 | {"license": "cc0-1.0"} | 2022-12-11T21:36:56+00:00 |
|
9cd10776cbec2f0048ebcb2b3e6447e8a339b5c9 | AlienKevin/source_han_sans_ja_regular_left_right | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-11T21:46:20+00:00 | {"license": "cc0-1.0"} | 2022-12-11T21:54:07+00:00 |
|
e8fce42c364581091638ec9381ddc764b8124156 | zmao/chinese_food_caption | [
"license:other",
"region:us"
]
| 2022-12-11T22:31:03+00:00 | {"license": "other"} | 2022-12-11T22:33:11+00:00 |
|
ed12234ef16f7703b94b3c8aa34b7cbb9f1d8f60 | # Dataset Card for "MULTI_VALUE_mnli_comparative_than"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_comparative_than | [
"region:us"
]
| 2022-12-12T01:17:27+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 20489, "num_examples": 90}, {"name": "dev_mismatched", "num_bytes": 23738, "num_examples": 93}, {"name": "test_matched", "num_bytes": 23244, "num_examples": 96}, {"name": "test_mismatched", "num_bytes": 32356, "num_examples": 125}, {"name": "train", "num_bytes": 862863, "num_examples": 3645}], "download_size": 561095, "dataset_size": 962690}} | 2022-12-12T01:17:43+00:00 |
f564d4e680e54b105519430ab18094a37289bcc5 | # Dataset Card for "MULTI_VALUE_mnli_future_sub_gon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_future_sub_gon | [
"region:us"
]
| 2022-12-12T01:17:51+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 150175, "num_examples": 648}, {"name": "dev_mismatched", "num_bytes": 154530, "num_examples": 702}, {"name": "test_matched", "num_bytes": 134661, "num_examples": 566}, {"name": "test_mismatched", "num_bytes": 137820, "num_examples": 632}, {"name": "train", "num_bytes": 5440195, "num_examples": 23152}], "download_size": 3647954, "dataset_size": 6017381}} | 2022-12-12T01:18:06+00:00 |
140ce2e2760e77b0512ef3151e8057663d54d9ac | # Dataset Card for "MULTI_VALUE_mnli_comparative_more_and"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_comparative_more_and | [
"region:us"
]
| 2022-12-12T01:18:13+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 20865, "num_examples": 90}, {"name": "dev_mismatched", "num_bytes": 24150, "num_examples": 93}, {"name": "test_matched", "num_bytes": 23648, "num_examples": 96}, {"name": "test_mismatched", "num_bytes": 32892, "num_examples": 125}, {"name": "train", "num_bytes": 878395, "num_examples": 3645}], "download_size": 566194, "dataset_size": 979950}} | 2022-12-12T01:18:28+00:00 |
3b7f0b46dc24e6926f279fab119f26fba787c4d6 | # Dataset Card for "MULTI_VALUE_mnli_progressives"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_progressives | [
"region:us"
]
| 2022-12-12T01:19:42+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 1104064, "num_examples": 4794}, {"name": "dev_mismatched", "num_bytes": 1208621, "num_examples": 5098}, {"name": "test_matched", "num_bytes": 1135615, "num_examples": 4901}, {"name": "test_mismatched", "num_bytes": 1217348, "num_examples": 5184}, {"name": "train", "num_bytes": 45857810, "num_examples": 195951}], "download_size": 32015718, "dataset_size": 50523458}} | 2022-12-12T01:20:05+00:00 |
ed8894becb12050eeb453b794b53509513531bd0 | # Dataset Card for "MULTI_VALUE_mnli_preposition_chopping"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_preposition_chopping | [
"region:us"
]
| 2022-12-12T01:19:49+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 40321, "num_examples": 205}, {"name": "dev_mismatched", "num_bytes": 23972, "num_examples": 133}, {"name": "test_matched", "num_bytes": 48102, "num_examples": 238}, {"name": "test_mismatched", "num_bytes": 26368, "num_examples": 139}, {"name": "train", "num_bytes": 1598111, "num_examples": 7913}], "download_size": 1029356, "dataset_size": 1736874}} | 2022-12-12T01:20:07+00:00 |
bc576351a437352cf56ad108b66bb3792131e53b | # Dataset Card for "MULTI_VALUE_mnli_invariant_tag_amnt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_invariant_tag_amnt | [
"region:us"
]
| 2022-12-12T01:19:57+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 612, "num_examples": 5}, {"name": "test_matched", "num_bytes": 2448, "num_examples": 11}, {"name": "test_mismatched", "num_bytes": 1814, "num_examples": 7}, {"name": "train", "num_bytes": 55962, "num_examples": 334}], "download_size": 35609, "dataset_size": 60836}} | 2022-12-12T01:20:10+00:00 |