sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
45294eb6d4d2aa1230c8caa07dda2a03982e350e
|
# VRoid Image Dataset Lite
This is a dataset for training text-to-image or other models without any copyright issues.
All materials used in this dataset are CC0 or properly licensed.
This dataset is also used to train [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one), a latent text-to-image diffusion model whose VAE and U-Net are trained from scratch using only public domain/CC0 or copyrighted images with permission for use.
Images were rendered with randomly varied parameters such as camera angle, pose, skin color, and facial expression.
## Dataset License
[CreativeML OpenRAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
This dataset is open access and available to all, with a CreativeML OpenRAIL++-M license further specifying rights and usage. The CreativeML OpenRAIL++-M License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL++-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
## Materials used in this dataset and their licenses
### VRoid Models
- VRM models used in this dataset are all CC0.
- These models are made by VRoid Project
- [HairSample_Male](https://vroid.pixiv.help/hc/en-us/articles/4402614652569-Do-VRoid-Studio-s-sample-models-come-with-conditions-of-use-)
- [HairSample_Female](https://vroid.pixiv.help/hc/en-us/articles/4402614652569-Do-VRoid-Studio-s-sample-models-come-with-conditions-of-use-)
- [AvatarSample-D](https://vroid.pixiv.help/hc/en-us/articles/360012381793-AvatarSample-D)
- [AvatarSample-E](https://vroid.pixiv.help/hc/en-us/articles/360014900273-AvatarSample-E)
- [AvatarSample-F](https://vroid.pixiv.help/hc/en-us/articles/360014900113-AvatarSample-F)
- [AvatarSample-G](https://vroid.pixiv.help/hc/en-us/articles/360014900233-AvatarSample-G)
- [Sakurada Fumiriya](https://vroid.pixiv.help/hc/en-us/articles/360014788554-Sakurada-Fumiriya)
- [Sendagaya Shino](https://vroid.pixiv.help/hc/en-us/articles/360013482714-Sendagaya-Shino)
- These models are made by pastelskies
- [015](https://hub.vroid.com/characters/1636202188966335207/models/6893459099891579554)
- [009](https://hub.vroid.com/characters/2472286065213980612/models/9151142999439416702)
- [008](https://hub.vroid.com/characters/601931587119584437/models/3857812504036458003)
- These models are made by yomox9
- [Qi](https://hub.vroid.com/characters/2048759159111415425/models/6905433332368675090)
- These models are made by くつした
- [【CC0】オリジナルアバター「少女A」【Cluster想定】](https://hub.vroid.com/characters/5271108759876567944/models/9069514665234246177)
- These models are made by ろーてく
- [【CC0】オリジナルアバター「シャペル」【VRChat想定】](https://lowteq.booth.pm/items/1349366)
### Pose and motions
- Our original poses.
- Free edition pose subset in [Unity Humanoid AnimationClip - PoseCollection](https://necocoya.booth.pm/items/1634088) made by かんな久@ねここや様 (❗❗**NOT CC0**❗❗)
- We have obtained permission directly from the author for training or distributing the AI model.
- This dataset uses only a subset of the "Free edition" ("ポーズ詰め合わせ(無料版)" in Japanese), which is permitted for use in AI training.
- We have confirmed directly with the author that an exactly equivalent license is not necessarily needed to distribute the trained model or to generate images.
- Therefore, to avoid harmful content generation, the CreativeML OpenRAIL++-M license is applied to this dataset, and an equivalent or more restrictive license must be applied to its derivatives.
### Shader
- MToon (MIT) with some modifications by the dev team.
### Other Textures for Skybox / Ground
- [Poly Haven](https://polyhaven.com/) (CC0)
- [ambientCG](https://ambientcg.com/) (CC0)
## Metadata Description
The final caption is not provided in this dataset, but you can construct a complete caption from the metadata.
### Color Shifting
Color shifting is used to create more diverse images. It is applied independently to skin, hair, eyes, clothes, and accessories.
- Parameter xyz = (H_Shift, S_Factor, V_Factor)
- New Color HSV = (H + H_Shift, S * S_Factor, V * V_Factor)
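As a minimal sketch (hypothetical helper name; the dataset stores only the shift parameters), the shift can be applied to an RGB color like this. Note that `colorsys` represents hue in [0, 1], so a shift given in degrees would need dividing by 360:

```python
import colorsys

def apply_color_shift(rgb, h_shift, s_factor, v_factor):
    """Apply the HSV shift (H + H_Shift, S * S_Factor, V * V_Factor)
    to an RGB color with components in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + h_shift) % 1.0               # hue wraps around the color wheel
    s = min(max(s * s_factor, 0.0), 1.0)  # clamp saturation to [0, 1]
    v = min(max(v * v_factor, 0.0), 1.0)  # clamp value to [0, 1]
    return colorsys.hsv_to_rgb(h, s, v)

# e.g. desaturate and darken a hair color
shifted = apply_color_shift((0.55, 0.27, 0.07), 0.0, 0.5, 0.8)
```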
### Metadata Items
- vrm_name : VRoid model name
- clip_name : Pose Clip Number
- camera_profile
- facial_expression
- lighting
- lighting_color
- outline
- shade_toony
- skin_profile
- looking_label
- camera_position : 3D position in meter
- camera_rotation : Pitch/Yaw/Roll in degree
- camera_fov : in degree
- hair_color_shift : HSV color shift of hair
- eye_color_shift : HSV color shift of eye
- color_shift : HSV color shift of clothes and accessories
- ground_plane_material
- left_hand_sign
- right_hand_sign
- skybox
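A caption can be assembled from these fields. The sketch below uses hypothetical field values and an arbitrary caption format; the actual template is up to the user:

```python
def build_caption(meta):
    """Join a few metadata fields into a simple text-to-image caption."""
    parts = []
    if meta.get("vrm_name"):
        parts.append(meta["vrm_name"])
    if meta.get("facial_expression"):
        parts.append(meta["facial_expression"] + " expression")
    if meta.get("looking_label"):
        parts.append(meta["looking_label"])
    if meta.get("left_hand_sign"):
        parts.append("left hand " + meta["left_hand_sign"])
    return ", ".join(parts)

caption = build_caption({
    "vrm_name": "AvatarSample_D",       # hypothetical example values
    "facial_expression": "smile",
    "looking_label": "looking at viewer",
    "left_hand_sign": "peace",
})
```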
## Full Dataset
This is a subset of the full dataset, which consists of approx. 600k images.
The full dataset is available upon request, for non-commercial research purposes only.
You will need to provide 1 TB of online storage so that we can upload the dataset, or send an empty 1 TB physical hard drive to our office in Tokyo, Japan.
Contact : info [at] elanmitsua.com
## Developed by
- Abstract Engine dev team
- Special Thanks to Mitsua Contributors
- VRoid is a trademark or registered trademark of Pixiv inc. in Japan and other regions.
|
Mitsua/vroid-image-dataset-lite
|
[
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"language:ja",
"license:openrail++",
"region:us"
] |
2023-02-09T12:27:18+00:00
|
{"language": ["en", "ja"], "license": "openrail++", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"]}
|
2023-03-03T15:02:51+00:00
|
befda45d8d4e8b8082bb8a1912d1f9eb9483991c
|

GLAMI-1M contains 1.1 million fashion items, 968 thousand unique images, and 1 million unique texts. It covers 13 languages, mostly European, and 191 fine-grained categories; for example, there are 15 shoe types. It contains high-quality annotations from professional curators and presents a difficult production-industry problem.
Each sample contains an image, a country code, a name in the corresponding language, a description, a target category, and the source of the label, which can be human or rule-based; most samples have human labels.
Read more on [GLAMI-1M home page at GitHub](https://github.com/glami/glami-1m)
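As an illustrative sketch of the per-sample schema described above (field names and example values are assumptions; the authoritative column names are in the GitHub repository):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GlamiItem:
    image_path: str            # one of 968k unique images
    country_code: str          # e.g. "cz"; 13 languages, mostly European
    name: str                  # item name in the corresponding language
    description: Optional[str]
    category: str              # one of 191 fine-grained categories
    label_source: str          # "human" or "rule-based"; mostly human

# hypothetical sample
item = GlamiItem("images/001.jpg", "cz", "Dámské kotníkové boty",
                 None, "womens-ankle-boots", "human")
```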
|
glami/glami-1m
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-09T12:47:17+00:00
|
{"license": "apache-2.0"}
|
2023-02-10T08:11:55+00:00
|
58f50310361fdc7d4d11cc63ef6939c12ee399b2
|
kenhktsui/off-topic
|
[
"license:apache-2.0",
"region:us"
] |
2023-02-09T13:18:03+00:00
|
{"license": "apache-2.0"}
|
2023-02-09T13:18:03+00:00
|
|
51dadf57bfdf335e71dee740bdca4c0d3d288df1
|
# Dataset Card for "Hansard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nadav/Hansard
|
[
"region:us"
] |
2023-02-09T13:32:10+00:00
|
{"dataset_info": {"features": [{"name": "sentence_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "decade", "dtype": "string"}, {"name": "speechdate", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3568239600, "num_examples": 8074848}, {"name": "test", "num_bytes": 398954434, "num_examples": 897205}], "download_size": 1234278574, "dataset_size": 3967194034}}
|
2023-02-09T13:34:02+00:00
|
89ba50c970706109f500449354c142cd28da6a6b
|
kannanwisen/Indian-Traffic-Sign-Classification
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-09T13:54:21+00:00
|
{"license": "cc-by-4.0"}
|
2023-02-09T14:03:38+00:00
|
|
294441fae4c22275c33e3fa368e694b4e71d7e2a
|
# Dataset Card for "Clemt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nadav/Clemt
|
[
"region:us"
] |
2023-02-09T13:56:46+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 180028095, "num_examples": 300}, {"name": "test", "num_bytes": 18814514, "num_examples": 33}], "download_size": 117182541, "dataset_size": 198842609}}
|
2023-02-09T13:57:06+00:00
|
e062bf9ffba7bb193cdba2802c557fe8090401d6
|
# Dataset Card for "testimdb1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
abhishek/testimdb1
|
[
"region:us"
] |
2023-02-09T14:09:03+00:00
|
{"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66283508, "num_examples": 50000}], "download_size": 42961480, "dataset_size": 66283508}}
|
2023-02-09T14:09:29+00:00
|
37fed152b283cc3dc1dc9ab313d4ea9bfc92cd60
|
# Dataset Card for "davinci-vs-lit-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/davinci-vs-lit-pairwise
|
[
"region:us"
] |
2023-02-09T14:10:08+00:00
|
{"dataset_info": {"features": [{"name": "davinci", "dtype": "string"}, {"name": "lit", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "api_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1845380427, "num_examples": 47954}], "download_size": 809346083, "dataset_size": 1845380427}}
|
2023-02-10T12:02:21+00:00
|
25eac818b32ac9657363088400cab47ff019d7ac
|
# Dataset Card for "whisper_mix_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/whisper_mix_data
|
[
"region:us"
] |
2023-02-09T14:31:01+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1287134674.425454, "num_examples": 8373}, {"name": "test", "num_bytes": 540858435.8587947, "num_examples": 1903}], "download_size": 1785316365, "dataset_size": 1827993110.2842486}}
|
2023-02-09T14:32:44+00:00
|
b2fbc10431b721dc9b0409b716d56a759d1cf332
|
# Dataset Card for "vivos_ng_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/vivos_ng_only
|
[
"region:us"
] |
2023-02-09T14:41:51+00:00
|
{"dataset_info": {"features": [{"name": "speaker_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8858003.732418524, "num_examples": 60}, {"name": "test", "num_bytes": 453063.74736842106, "num_examples": 4}], "download_size": 717197, "dataset_size": 9311067.479786946}}
|
2023-02-09T14:42:53+00:00
|
ad2a0fdfdffed33ec5e6424728addc6bedfede6a
|
# Dataset Card for "QADatasetForPatho"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Galahad3x/QADatasetForPatho
|
[
"region:us"
] |
2023-02-09T14:43:19+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 35253904, "num_examples": 1567}, {"name": "test", "num_bytes": 9123066, "num_examples": 339}], "download_size": 655278, "dataset_size": 44376970}}
|
2023-02-09T15:13:03+00:00
|
e0194203f77ab05bdccca4f1b197ced598100206
|
# Satvision Pretraining Dataset - Small
- **Developed by:** NASA GSFC CISTO Data Science Group
- **Model type:** Pre-trained visual transformer model
- **License:** Apache license 2.0
This dataset repository houses the pretraining data for the Satvision pretrained transformers.
This dataset was constructed using [webdatasets](https://github.com/webdataset/webdataset) to
limit the number of inodes used on HPC systems with limited shared storage. Each file contains 100,000
tiles, each a pair of input image and annotation. The data has been further compressed to ease
the download from HuggingFace.
SatelliteVision-Base (SatVis-B) is a pre-trained vision transformer based on the SwinV2 model architecture.
The model is pre-trained on global MODIS surface reflectance data from which 1.99 million image chips were used. SatVis-B is pre-trained using
the masked-image-modeling (MIM) contrastive pre-training strategy. The MIM pre-training approach utilizes random
masking of the input geospatial image chip, using a linear layer to regress the raw pixel values of the masked
area with an l1 loss serving as the loss function.
Resolution of the pre-training MODIS chips was `128x128` with a window size of `16x16`. SatVis-B was pre-trained
for `800` epochs on 8x A100 GPUs and 12x V100 GPUs.
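The MIM setup above can be sketched as follows (illustrative only; the actual implementation lives in pytorch-caney, and all names and the mask ratio here are assumptions):

```python
import random

CHIP, PATCH = 128, 16      # chip resolution and mask window size
GRID = CHIP // PATCH       # 8x8 grid of maskable windows

def random_patch_mask(mask_ratio=0.6, seed=0):
    """Randomly select windows to mask; returns a pixel-level boolean mask
    that is True where the model must reconstruct raw pixel values."""
    rng = random.Random(seed)
    n_windows = GRID * GRID
    masked = set(rng.sample(range(n_windows), int(n_windows * mask_ratio)))
    return [[(r // PATCH) * GRID + (c // PATCH) in masked
             for c in range(CHIP)] for r in range(CHIP)]

def masked_l1_loss(pred, target, mask):
    """L1 loss computed only over the masked pixels."""
    diffs = [abs(p - t)
             for pr, tr, mr in zip(pred, target, mask)
             for p, t, m in zip(pr, tr, mr) if m]
    return sum(diffs) / len(diffs)
```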
### SatVision Transformer
**Pre-trained models pre-trained on MODIS-Small dataset**
| name | pre-train epochs | pre-train resolution | #params | pre-trained model |
| :---: | :---: | :---: | :---: | :---: |
| SatVision-Base | 800 | 128x128 | 84.5m | [checkpoint](https://huggingface.co/nasa-cisto-data-science-group/satvision-base/blob/main/ckpt_epoch_800.pth)/[config](https://github.com/nasa-nccs-hpda/pytorch-caney/blob/develop/examples/satvision/mim_pretrain_swinv2_satvision_base_192_window12_800ep.yaml) |
## Getting Started with SatVision-Base
- **Training repository:** https://github.com/nasa-nccs-hpda/pytorch-caney
- **Pre-training dataset repository:** https://huggingface.co/datasets/nasa-cisto-data-science-group/satvision-pretrain-small
### Installation
If you have Singularity installed:
```bash
$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git
$ singularity build --sandbox pytorch-caney.sif docker://nasanccs/pytorch-caney:latest
# To shell into the container
$ singularity shell --nv -B <mounts> pytorch-caney.sif
```
Anaconda installation:
```bash
$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git
$ conda create -n satvision-env python=3.9
$ conda activate satvision-env
```
### Fine-tuning Satvision-Base
- Create config file [example config](https://github.com/nasa-nccs-hpda/pytorch-caney/blob/finetuning/examples/satvision/finetune_satvision_base_landcover5class_192_window12_100ep.yaml)
- Download checkpoint from this HF model repo
- `$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git`
- Add a new pytorch dataset in pytorch-caney/pytorch_caney/data/datasets/
- Add new pytorch dataset to dict in pytorch-caney/pytorch_caney/data/datamodules/finetune_datamodule.py
```bash
torchrun --nproc_per_node <NGPUS> pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py --cfg <config-file> --pretrained <path-to-pretrained> --dataset <dataset-name (key for new dataset)> --data-paths <path-to-data-dir> --batch-size <batch-size> --output <output-dir> --enable-amp
```
### Pre-training SatVision-Base with Masked Image Modeling and pytorch-caney
To pre-train the swinv2 base model with masked image modeling pre-training, run:
```bash
torchrun --nproc_per_node <NGPUS> pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py --cfg <config-file> --dataset <dataset-name> --data-paths <path-to-data-subfolder-1> --batch-size <batch-size> --output <output-dir> --enable-amp
```
For example, to run on a compute node with 4 GPUs and a batch size of 128 on the MODIS SatVision pre-training dataset with a base SwinV2 model, run:
```bash
singularity shell --nv -B <mounts> /path/to/container/pytorch-caney-container
Singularity> export PYTHONPATH=$PWD:$PWD/pytorch-caney
Singularity> torchrun --nproc_per_node 4 pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py --cfg pytorch-caney/examples/satvision/mim_pretrain_swinv2_satvision_base_192_window12_800ep.yaml --dataset MODIS --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* --batch-size 128 --output . --enable-amp
```
## SatVision-Base Pre-Training Datasets
| name | bands | resolution | #chips | meters-per-pixel |
| :---: | :---: | :---: | :---: | :---: |
| MODIS-Small | 7 | 128x128 | 1,994,131 | 500m |
## Citing SatVision-Base
If this model helped your research, please cite `satvision-base` in your publications.
```
@misc{satvision-base,
author = {Carroll, Mark and Li, Jian and Spradlin, Caleb and Caraballo-Vega, Jordan},
doi = {10.57967/hf/1017},
month = aug,
title = {{satvision-base}},
url = {https://huggingface.co/nasa-cisto-data-science-group/satvision-base},
repository-code = {https://github.com/nasa-nccs-hpda/pytorch-caney},
year = {2023}
}
```
|
nasa-cisto-data-science-group/satvision-pretrain-small
|
[
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-02-09T14:55:44+00:00
|
{"language": ["en"], "license": "apache-2.0"}
|
2023-08-31T00:38:48+00:00
|
66155fae195d8fc041b0b989c3e2f323e852bc64
|
PhanAnh/dao_finetune
|
[
"license:creativeml-openrail-m",
"region:us"
] |
2023-02-09T15:09:16+00:00
|
{"license": "creativeml-openrail-m"}
|
2023-02-10T13:55:28+00:00
|
|
0a63f6f8433496978efefb90416376832f25fbb7
|
# Dataset Card for "whisper_mix_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/whisper_mix_data_v2
|
[
"region:us"
] |
2023-02-09T15:10:51+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1287134674.425454, "num_examples": 8373}, {"name": "test", "num_bytes": 540858435.8587947, "num_examples": 1903}], "download_size": 1785351931, "dataset_size": 1827993110.2842486}}
|
2023-02-09T15:12:28+00:00
|
e827cf62747fe23bd20759e7f2be8b7fec78c573
|
# Dataset Card for "binhvq-news-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hieunguyen1053/binhvq-news-corpus
|
[
"region:us"
] |
2023-02-09T15:33:41+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51179763136, "num_examples": 13954498}], "download_size": 19065155948, "dataset_size": 51179763136}}
|
2023-02-09T15:49:42+00:00
|
12ee32c1971c11b429da8e4eb2099941847e83c8
|
# Indonesian Instructions Dataset
|
cahya/instructions_indonesian
|
[
"license:mit",
"region:us"
] |
2023-02-09T16:34:47+00:00
|
{"license": "mit"}
|
2023-02-09T17:03:53+00:00
|
dc5b0253d7d537403711bb2ebf4f8808fd042acf
|
speedoflight/My-test-dataset
|
[
"license:unlicense",
"region:us"
] |
2023-02-09T17:01:29+00:00
|
{"license": "unlicense"}
|
2023-02-09T17:01:29+00:00
|
|
3347fc072b366e468feaa42b5ef9b99012b0b0d4
|
# Dataset Card for "binhvq-news-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ademax/binhvq-news-corpus
|
[
"region:us"
] |
2023-02-09T17:34:08+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50813003038.15213, "num_examples": 13854498}, {"name": "test", "num_bytes": 366761776.84786654, "num_examples": 100000}], "download_size": 27332182633, "dataset_size": 51179764815.0}}
|
2023-02-09T18:49:26+00:00
|
9f559c517af3feff6c590c985ace4116676d7cb8
|
BuroIdentidadDigital/recibos_cfe
|
[
"license:c-uda",
"region:us"
] |
2023-02-09T17:35:09+00:00
|
{"license": "c-uda"}
|
2023-11-08T13:21:36+00:00
|
|
bd7e0dc3256a31e3dae03c41b756dddf3b947f20
|
NEG-1500-SIMP-TEMP is an extension of NEG-136-SIMP. It was created using a template derived from the original papers.
If this dataset is useful to you, please cite our work:
    @article{shivagunde2023larger,
      title={Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning},
      author={Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna},
      journal={arXiv preprint arXiv:2303.16445},
      year={2023}
    }
|
text-machine-lab/NEG-1500-SIMP-TEMP
|
[
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] |
2023-02-09T18:18:15+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"]}
|
2023-04-20T17:09:50+00:00
|
89da87baea593e32704df80a2776bb26916dc772
|
NEG-1500-SIMP-GEN is an extended version of NEG-136-SIMP. The dataset was extended using GPT-3.
If this dataset is useful to you, please cite our work:
    @article{shivagunde2023larger,
      title={Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning},
      author={Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna},
      journal={arXiv preprint arXiv:2303.16445},
      year={2023}
    }
|
text-machine-lab/NEG-1500-SIMP-GEN
|
[
"license:mit",
"region:us"
] |
2023-02-09T18:30:16+00:00
|
{"license": "mit"}
|
2023-04-20T17:09:28+00:00
|
5d964f929efa2a106edae5894c6c2f03d4b8d127
|
ROLE-1500 is an extended version of ROLE-88. The dataset was extended using GPT-3.
If this dataset is useful to you, please cite our work:
    @article{shivagunde2023larger,
      title={Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning},
      author={Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna},
      journal={arXiv preprint arXiv:2303.16445},
      year={2023}
    }
|
text-machine-lab/ROLE-1500
|
[
"license:mit",
"region:us"
] |
2023-02-09T18:33:21+00:00
|
{"license": "mit"}
|
2023-04-20T17:08:47+00:00
|
2c352154666341a07bc49d894afa68d3160df255
|
elifftosunn/bank-dataset
|
[
"license:mit",
"region:us"
] |
2023-02-09T19:01:33+00:00
|
{"license": "mit"}
|
2023-02-09T19:05:27+00:00
|
|
b8888b24d84d93c82a0a407d2d72962796f05846
|
Used in Correctness Chorus to train a T5 model for grammar correction.
|
Owishiboo/grammar-correction
|
[
"language:en",
"region:us"
] |
2023-02-09T19:18:01+00:00
|
{"language": ["en"]}
|
2023-02-09T19:19:44+00:00
|
71dfae610327807488396d01b519baf9aae483a3
|
## Dataset Description
A dataset of pairs mapping TypeScript code to appropriate type declarations.
## Language
TypeScript only.
## To Load
```python
from datasets import load_dataset
load_dataset("noahshinn024/ts-code2td")
```
## Distribution of type declaration code lengths
- uses the tokenizer from [bigcode/santacoder](https://huggingface.co/bigcode/santacoder)

|
noahshinn/ts-code2td
|
[
"license:mit",
"region:us"
] |
2023-02-09T19:26:58+00:00
|
{"license": "mit"}
|
2023-02-13T23:58:20+00:00
|
3e5093cfd4db1087e08dd35a99a65f5abe3284bd
|
CSAle/dilbert_comics
|
[
"license:mit",
"region:us"
] |
2023-02-09T20:05:13+00:00
|
{"license": "mit"}
|
2023-02-09T20:05:13+00:00
|
|
1a770059d5f6f6014fa414f65556f65f6c85c7fe
|
# Dataset Card for "arxiv-abstract-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
karukas/arxiv-abstract-matching
|
[
"region:us"
] |
2023-02-09T20:46:50+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7119340064, "num_examples": 203037}, {"name": "validation", "num_bytes": 216202656, "num_examples": 6436}, {"name": "test", "num_bytes": 216585242, "num_examples": 6440}], "download_size": 3635681697, "dataset_size": 7552127962}}
|
2023-02-09T20:48:55+00:00
|
97a19fd85201ce3a0cc3627b6485457bc5285eb8
|
# Dataset Card for "DilbertDiffusionDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CSAle/DilbertDiffusionDataset
|
[
"region:us"
] |
2023-02-09T21:13:21+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 530433.0, "num_examples": 7}], "download_size": 531640, "dataset_size": 530433.0}}
|
2023-02-09T22:57:55+00:00
|
9ae784d84163e9e45b5c8c54690f9c8f4db80179
|
# Dataset Card for "pubmed-abstract-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
karukas/pubmed-abstract-matching
|
[
"region:us"
] |
2023-02-09T21:18:08+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2237510856, "num_examples": 119924}, {"name": "validation", "num_bytes": 126574623, "num_examples": 6633}, {"name": "test", "num_bytes": 126357120, "num_examples": 6658}], "download_size": 1156008015, "dataset_size": 2490442599}}
|
2023-02-09T21:18:46+00:00
|
e7a6742b53bddec6cb620dfeffaf9930db8aa020
|
CalamityChain/FineTuningSD
|
[
"license:afl-3.0",
"region:us"
] |
2023-02-09T22:15:57+00:00
|
{"license": "afl-3.0"}
|
2023-02-09T22:17:28+00:00
|
|
7dbfdf54ffb75f10401e10c87319db5bad444415
|
# Dataset Card for "stormfront-small-textonly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kastan/stormfront-small-textonly
|
[
"region:us"
] |
2023-02-10T00:48:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8544511, "num_examples": 10000}, {"name": "test", "num_bytes": 352602, "num_examples": 791}], "download_size": 5539239, "dataset_size": 8897113}}
|
2023-02-10T00:48:28+00:00
|
487e912c023414c19256cbae821a3dfa43a14478
|
# Dataset Card for "instructions-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cahya/instructions-test
|
[
"region:us"
] |
2023-02-10T01:11:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16048, "num_examples": 22}], "download_size": 15127, "dataset_size": 16048}}
|
2023-02-10T01:11:45+00:00
|
c80d11d88ebb46b4717020638d345cc45a56a2df
|
# Dataset Card for "instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cahya/instructions
|
[
"region:us"
] |
2023-02-10T01:24:43+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71483925.44051038, "num_examples": 170485}, {"name": "test", "num_bytes": 3971585.428468864, "num_examples": 9472}, {"name": "validation", "num_bytes": 3971166.1310207574, "num_examples": 9471}], "download_size": 45997378, "dataset_size": 79426677.0}}
|
2023-02-10T21:02:35+00:00
|
aa7608a8de1c6bb71a1a7234e42d4f5d62e55e29
|
# Dataset Card for "stormfront-full-textonly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kastan/stormfront-full-textonly
|
[
"region:us"
] |
2023-02-10T02:06:48+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7021007551, "num_examples": 10458223}, {"name": "test", "num_bytes": 352602, "num_examples": 791}], "download_size": 4432257231, "dataset_size": 7021360153}}
|
2023-02-10T02:09:18+00:00
|
929b62a7edd7497089dace2e71f663c5f2fbb41d
|
dwidlee/wiki-dump-ko
|
[
"license:cc",
"region:us"
] |
2023-02-10T02:31:13+00:00
|
{"license": "cc"}
|
2023-02-10T02:31:13+00:00
|
|
04fe889ec1d92c80b92e38cc8252c6ae11e22207
|
# Dataset Card for "VALUE_wikitext2_been_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_been_done
|
[
"language:en",
"region:us"
] |
2023-02-10T03:53:01+00:00
|
{"language": "en", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 614492, "num_examples": 720}, {"name": "train", "num_bytes": 5110338, "num_examples": 5991}, {"name": "validation", "num_bytes": 558461, "num_examples": 673}], "download_size": 3699279, "dataset_size": 6283291}}
|
2023-08-02T23:08:33+00:00
|
1103f6fcec810fafe4862064dd496c176e462bee
|
# Dataset Card for "VALUE_wikitext2_dey_it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_dey_it
|
[
"region:us"
] |
2023-02-10T03:59:18+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 121545, "num_examples": 139}, {"name": "train", "num_bytes": 1066585, "num_examples": 1212}, {"name": "validation", "num_bytes": 102624, "num_examples": 126}], "download_size": 809771, "dataset_size": 1290754}}
|
2023-02-10T03:59:23+00:00
|
93d41921cd54a117d6d539a00d30bcffe4c04971
|
# Dataset Card for "VALUE_wikitext2_drop_aux"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_drop_aux
|
[
"region:us"
] |
2023-02-10T04:05:59+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 287459, "num_examples": 386}, {"name": "train", "num_bytes": 2899414, "num_examples": 3888}, {"name": "validation", "num_bytes": 235138, "num_examples": 340}], "download_size": 2054815, "dataset_size": 3422011}}
|
2023-02-10T04:06:04+00:00
|
7fefb2dc5f3a3c44b3e3fa5c795fd66f9230b39e
|
# Dataset Card for "VALUE_wikitext2_got"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_got
|
[
"region:us"
] |
2023-02-10T04:12:14+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 96781, "num_examples": 118}, {"name": "train", "num_bytes": 947742, "num_examples": 1195}, {"name": "validation", "num_bytes": 78369, "num_examples": 91}], "download_size": 705607, "dataset_size": 1122892}}
|
2023-02-10T04:12:19+00:00
|
5177335a6d6f3f2879a06f4db778bbe08ea4c6ec
|
# Dataset Card for "VALUE_wikitext2_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_negative_concord
|
[
"region:us"
] |
2023-02-10T04:36:14+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 165495, "num_examples": 178}, {"name": "train", "num_bytes": 1546197, "num_examples": 1691}, {"name": "validation", "num_bytes": 152679, "num_examples": 173}], "download_size": 1160295, "dataset_size": 1864371}}
|
2023-02-10T04:36:19+00:00
|
8a1eca48c9ab3699cba0f6540320e4aeb691885b
|
# Dataset Card for "VALUE_wikitext2_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_null_genetive
|
[
"region:us"
] |
2023-02-10T04:48:50+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 631888, "num_examples": 784}, {"name": "train", "num_bytes": 6202950, "num_examples": 7653}, {"name": "validation", "num_bytes": 625986, "num_examples": 793}], "download_size": 4380528, "dataset_size": 7460824}}
|
2023-02-10T04:48:55+00:00
|
b5c405468ced93af422b8120e8566cd6e808d07c
|
# Dataset Card for "VALUE_wikitext2_null_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_null_relcl
|
[
"region:us"
] |
2023-02-10T04:55:10+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 455656, "num_examples": 507}, {"name": "train", "num_bytes": 3975445, "num_examples": 4605}, {"name": "validation", "num_bytes": 393700, "num_examples": 465}], "download_size": 2915183, "dataset_size": 4824801}}
|
2023-02-10T04:55:15+00:00
|
f7c9a7bfcb4d15412c89c98b0e1e5bbd564a1b77
|
# Dataset Card for "VALUE_wikitext2_uninflect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_uninflect
|
[
"region:us"
] |
2023-02-10T05:01:49+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 826318, "num_examples": 1068}, {"name": "train", "num_bytes": 7759024, "num_examples": 9991}, {"name": "validation", "num_bytes": 804972, "num_examples": 1053}], "download_size": 5470745, "dataset_size": 9390314}}
|
2023-02-10T05:01:55+00:00
|
cd5ab54c0bc2f6a4a3bb3853597db4d1e1d89f3f
|
Joe02/Monobe_refs
|
[
"license:other",
"region:us"
] |
2023-02-10T05:47:00+00:00
|
{"license": "other"}
|
2023-02-10T05:47:21+00:00
|
|
b4e0398c7d4e8ce91d941e5efd291050597c7e30
|
trungai/Vie_QA
|
[
"region:us"
] |
2023-02-10T06:32:29+00:00
|
{}
|
2023-02-10T06:53:15+00:00
|
|
d95d43ab9a96984e9aec4bbaf11d09f751cba013
|
Cartinoe5930/Politifact_fake_news
|
[
"license:unknown",
"region:us"
] |
2023-02-10T07:54:18+00:00
|
{"license": "unknown"}
|
2023-09-07T22:53:58+00:00
|
|
ce593ca22ec00e6a24dc3521d276434b7d1fcd9c
|
Joe02/Sian_refs
|
[
"license:other",
"region:us"
] |
2023-02-10T09:08:53+00:00
|
{"license": "other"}
|
2023-08-01T15:02:22+00:00
|
|
9ff52cae9c6c8c40e28d78f38cc5b8bc8cc47f4a
|
# Dataset Card for SentiCoref
### Dataset Summary
SentiCoref is a Slovenian coreference resolution dataset containing **391,962** tokens across **756** documents\*.
It also contains named-entity annotations (apparently automatic) and manually verified lemmas and morphosyntactic tags (MSD).
\* This is the latest version of SentiCoref, contained in SUK: Slovenian training corpus.
### Supported Tasks and Leaderboards
Coreference resolution.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset, with most of the actual data truncated for clarity:
```
{
"id_doc": "senticoref3408",
"words": [
[
["Ljubljana", "-", "Upravi", "trgovske", "družbe", "Mercator", "se", "je", "z", "letom", "2010", ...],
...
],
...
],
"lemmas": [
[
["Ljubljana", "-", "uprava", "trgovski", "družba", "Mercator", "se", "biti", "z", "leto", "2010", ...],
...
],
...
],
"msds": [
[
["mte:Slzei", "mte:U", "mte:Sozed", "mte:Ppnzer", "mte:Sozer", "mte:Slmei", "mte:Zp------k", ...],
...
],
...
],
"ne_tags": [
[
["B-LOC", "O", "O", "O", "O", "B-ORG", "O", "O", ...],
...
],
...
],
"mentions": [
{
"id_mention": "senticoref3408.1.1.ne1",
"mention_data": {
"idx_par": 0,
"idx_sent": 0,
"word_indices": [0],
"global_word_indices": [0]
}
},
...
],
"coref_clusters": [
["senticoref3408.1.1.phr17-1", "senticoref3408.1.2.t7", "senticoref3408.1.2.ne2", "senticoref3408.1.4.ne3"],
...
]
}
```
### Data Fields
Please note that documents are represented as lists of paragraphs, each of which is a list of sentences, each of which is a list of words.
This means that `words`, `lemmas`, `msds`, and `ne_tags` are of type `List[List[List[string]]]`.
This is done because it is easier to discard the segmentation information than re-obtain it.
- `id_doc`: the identifier of the document;
- `words`: words in the document;
- `lemmas`: lemmas in the document;
- `msds`: [morphosyntactic tags](https://nl.ijs.si/ME/V6/msd/) in the document;
- `ne_tags`: named entity annotations in IOB2 format;
- `mentions`: list of entity mentions in the document. Includes named entities, phrases, and single words (e.g., pronouns). Each mention is represented with its ID and the
indices of contained words: either (1) the index of the paragraph, the sentence inside the paragraph, and the positions inside the sentence, or
(2) the global word index that can be used on a flattened list of document words;
- `coref_clusters`: coreference clusters present in the document. Each list represents one cluster of entity mentions, represented by their IDs
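The `ne_tags` field uses the IOB2 scheme, so contiguous `B-`/`I-` tags can be decoded into entity spans. A minimal sketch on a toy word/tag pair shaped like the sample above (not a faithful dataset excerpt):

```python
# Toy example: decode IOB2 named-entity tags into (type, words) spans.
words = ["Ljubljana", "-", "Upravi", "trgovske", "družbe", "Mercator"]
tags = ["B-LOC", "O", "O", "O", "O", "B-ORG"]

entities, current = [], None
for word, tag in zip(words, tags):
    if tag.startswith("B-"):
        # A new entity starts; flush any open span first.
        if current:
            entities.append(current)
        current = (tag[2:], [word])
    elif tag.startswith("I-") and current:
        # Continuation of the current entity.
        current[1].append(word)
    else:
        # An outside tag ends any open span.
        if current:
            entities.append(current)
        current = None
if current:
    entities.append(current)

print(entities)  # [('LOC', ['Ljubljana']), ('ORG', ['Mercator'])]
```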
## Additional Information
### Using the dataset
1. Unless you are doing something more sophisticated, feel free to drop the paragraph and sentence segmentation information by flattening the document words, lemmas, MSDs, and named entity tags:
```python
import datasets
data = datasets.load_dataset("cjvt/senticoref", split="train")
doc = data[0]
flattened_words = [w for par in doc["words"] for sent in par for w in sent]
# ... Do the same for other fields
```
2. To get a better understanding of the entity mentions and coreference clusters in the document, you can convert the mention information into a dictionary and
link the mentions and clusters to the actual words.
```python
import datasets
data = datasets.load_dataset("cjvt/senticoref", split="train")
doc = data[0]
flattened_words = [w for par in doc["words"] for sent in par for w in sent]
id2mentiondata = {}
for mention in doc["mentions"]:
id2mentiondata[mention['id_mention']] = mention['mention_data']
# Display the entity mention clusters in the string format
# (1) Using the flattened document structure and global word indices
for cluster in doc["coref_clusters"]:
print("{")
for id_mention in cluster:
print(f"\t{[flattened_words[_i] for _i in id2mentiondata[id_mention]['global_word_indices']]}")
print("}")
print("")
# (2) Using the initial document structure and local word indices
for cluster in doc["coref_clusters"]:
print("{")
for id_mention in cluster:
_mention_data = id2mentiondata[id_mention]
idx_par, idx_sent = _mention_data["idx_par"], _mention_data["idx_sent"]
print(f"\t{[doc['words'][idx_par][idx_sent][_i] for _i in _mention_data['word_indices']]}")
print("}")
print("")
```
**Output:**
```
...
{
['trgovske', 'družbe', 'Mercator']
['družbe']
['Mercator']
['Mercatorja']
}
{
['letom', '2010']
['leta', '2010']
}
... (truncated)
```
### Dataset Curators
Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1747 for the full list)
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{suk,
title = {Training corpus {SUK} 1.0},
author = {Arhar Holdt, {\v S}pela and Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Gantar, Polona and {\v C}ibej, Jaka and Pori, Eva and Ter{\v c}on, Luka and Munda, Tina and {\v Z}itnik, Slavko and Robida, Nejc and Blagus, Neli and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Kuzman, Taja and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
url = {http://hdl.handle.net/11356/1747},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
cjvt/senticoref
|
[
"task_categories:token-classification",
"size_categories:n<1K",
"language:sl",
"license:cc-by-sa-4.0",
"coreference resolution",
"region:us"
] |
2023-02-10T10:07:38+00:00
|
{"language": ["sl"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["token-classification"], "pretty_name": "SentiCoref", "tags": ["coreference resolution"]}
|
2023-02-10T18:30:01+00:00
|
18aaf5d6a1ad1e63f872a29c2c00c8840947659d
|
# Dataset Card for "aesthetic-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
torphix/aesthetic-v2
|
[
"region:us"
] |
2023-02-10T11:00:24+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5854280397.36, "num_examples": 14712}], "download_size": 491617940, "dataset_size": 5854280397.36}}
|
2023-02-27T01:14:20+00:00
|
d959675dc4344284e05d0fa7aaf62d64267e76a3
|
# Dataset Card for "weakly_labelled_dataset_mentions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/weakly_labelled_dataset_mentions
|
[
"region:us"
] |
2023-02-10T11:12:10+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "vectors", "dtype": "null"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}]}, {"name": "status", "dtype": "string"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}, {"name": "label", "dtype": {"class_label": {"names": {"0": "dataset_mention", "1": "no_dataset_mention"}}}}], "splits": [{"name": "train", "num_bytes": 56713799, "num_examples": 30274}], "download_size": 24899276, "dataset_size": 56713799}}
|
2023-02-16T08:29:39+00:00
|
7e01043924240627ce6d6f2f56d51d898e0bfba9
|
# AutoTrain Dataset for project: dataset-mentions
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dataset-mentions.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": " How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained(\"Geotrend/bert-base-en-fr-zh-ja-vi-cased\") model = AutoModel.from_pretrained(\"Geotrend/bert-base-en-fr-zh-ja-vi-cased\") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ",
"target": 0
},
{
"text": " Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['dataset_mention', 'no_dataset_mention'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7428 |
| valid | 1858 |
|
davanstrien/autotrain-data-dataset-mentions
|
[
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-02-10T11:17:13+00:00
|
{"language": ["en"], "task_categories": ["text-classification"]}
|
2023-02-10T11:17:37+00:00
|
e782ebf35c7e4cafccb08ca680b0a76706533067
|
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages, each with 10,000 random samples from the original dataset.
## Languages
The dataset contains 87 programming languages:
````
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp','erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell','html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
````
## Dataset Structure
```python
# Load the Go subset:
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-smol-xl", data_dir="data/go")
```
|
bigcode/the-stack-smol-xl
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] |
2023-02-10T11:17:22+00:00
|
{"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["code"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]}
|
2023-02-10T17:22:38+00:00
|
f18b0a70359ebfb41f658fd564208d0355b013f4
|
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://scienceqa.github.io/index.html#home](https://scienceqa.github.io/index.html#home)
- **Repository:** [https://github.com/lupantech/ScienceQA](https://github.com/lupantech/ScienceQA)
- **Paper:** [https://arxiv.org/abs/2209.09513](https://arxiv.org/abs/2209.09513)
- **Leaderboard:** [https://paperswithcode.com/dataset/scienceqa](https://paperswithcode.com/dataset/scienceqa)
- **Point of Contact:** [Pan Lu](https://lupantech.github.io/) or file an issue on [Github](https://github.com/lupantech/ScienceQA/issues)
### Dataset Summary
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
### Supported Tasks and Leaderboards
Multi-modal Multiple Choice
### Languages
English
## Dataset Structure
### Data Instances
Explore more samples [here](https://scienceqa.github.io/explore.html).
``` json
{'image': Image,
'question': 'Which of these states is farthest north?',
'choices': ['West Virginia', 'Louisiana', 'Arizona', 'Oklahoma'],
'answer': 0,
'hint': '',
'task': 'closed choice',
'grade': 'grade2',
'subject': 'social science',
'topic': 'geography',
'category': 'Geography',
'skill': 'Read a map: cardinal directions',
'lecture': 'Maps have four cardinal directions, or main directions. Those directions are north, south, east, and west.\nA compass rose is a set of arrows that point to the cardinal directions. A compass rose usually shows only the first letter of each cardinal direction.\nThe north arrow points to the North Pole. On most maps, north is at the top of the map.',
'solution': 'To find the answer, look at the compass rose. Look at which way the north arrow is pointing. West Virginia is farthest north.'}
```
Some records may be missing any or all of `image`, `lecture`, and `solution`.
### Data Fields
- `image` : Contextual image
- `question` : Prompt relating to the `lecture`
- `choices` : Multiple-choice options for the `question`; exactly one is correct
- `answer` : Index of choices corresponding to the correct answer
- `hint` : Hint to help answer the `question`
- `task` : Task description
- `grade` : Grade level from K-12
- `subject` : High-level subject area (natural science, social science, or language science)
- `topic` : A subcategory of `subject` (e.g., geography)
- `category` : A subcategory of `topic`
- `skill` : A description of the task required
- `lecture` : A relevant lecture that a `question` is generated from
- `solution` : Instructions on how to solve the `question`
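As a small illustration of how `answer` indexes into `choices` (a toy record mirroring the sample instance above):

```python
# Toy record with the fields described above (image, lecture, etc. omitted).
record = {
    "question": "Which of these states is farthest north?",
    "choices": ["West Virginia", "Louisiana", "Arizona", "Oklahoma"],
    "answer": 0,
}

# `answer` is the index of the correct option within `choices`.
correct_option = record["choices"][record["answer"]]
print(correct_option)  # West Virginia
```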
### Data Splits
| Split name | Num examples | Num bytes |
| ---------- | ------------ | --------- |
| train | 12726 | 16416902 |
| validation | 4241 | 5404896 |
| test | 4241 | 5441676 |
## Dataset Creation
### Curation Rationale
When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA).
### Source Data
ScienceQA is collected from elementary and high school science curricula.
#### Initial Data Collection and Normalization
See Below
#### Who are the source language producers?
See Below
### Annotations
Questions in the ScienceQA dataset are sourced from open resources managed by IXL Learning,
an online learning platform curated by experts in the field of K-12 education. The dataset includes
problems that align with California Common Core Content Standards. To construct ScienceQA, we
downloaded the original science problems and then extracted individual components (e.g. questions,
hints, images, options, answers, lectures, and solutions) from them based on heuristic rules.
We manually removed invalid questions, such as questions that have only one choice, questions that
contain faulty data, and duplicated questions, to comply with fair-use and transformative-use
provisions of the law. If multiple correct answers applied, we kept only one correct answer.
Also, we shuffled the answer options of each question to ensure the choices do not follow any
specific pattern. To make the dataset easy to use, we then used semi-automated scripts to reformat
the lectures and solutions. Therefore, special structures in the texts, such as tables and lists, are
easily distinguishable from simple text passages. Similar to ImageNet, ReClor, and PMR datasets,
ScienceQA is available for non-commercial research purposes only and the copyright belongs to
the original authors. To ensure data quality, we developed a data exploration tool to review examples
in the collected dataset, and incorrect annotations were further manually revised by experts. The tool
can be accessed at https://scienceqa.github.io/explore.html.
#### Annotation process
See above
#### Who are the annotators?
See above
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
- Pan Lu (1, 3)
- Swaroop Mishra (2, 3)
- Tony Xia (1)
- Liang Qiu (1)
- Kai-Wei Chang (1)
- Song-Chun Zhu (1)
- Oyvind Tafjord (3)
- Peter Clark (3)
- Ashwin Kalyan (3)
From:
1. University of California, Los Angeles
2. Arizona State University
3. Allen Institute for AI
### Licensing Information
[Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
```
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
### Contributions
Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) [@datavistics](https://github.com/datavistics) for adding this dataset.
|
derek-thomas/ScienceQA
|
[
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:other",
"task_categories:visual-question-answering",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:closed-domain-qa",
"task_ids:open-domain-qa",
"task_ids:visual-question-answering",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"multi-modal-qa",
"science",
"chemistry",
"biology",
"physics",
"earth-science",
"engineering",
"geography",
"history",
"world-history",
"civics",
"economics",
"global-studies",
"grammar",
"writing",
"vocabulary",
"natural-science",
"language-science",
"social-science",
"arxiv:2209.09513",
"region:us"
] |
2023-02-10T11:28:58+00:00
|
{"annotations_creators": ["expert-generated", "found"], "language_creators": ["expert-generated", "found"], "language": ["en"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "question-answering", "other", "visual-question-answering", "text-classification"], "task_ids": ["multiple-choice-qa", "closed-domain-qa", "open-domain-qa", "visual-question-answering", "multi-class-classification"], "paperswithcode_id": "scienceqa", "pretty_name": "ScienceQA", "tags": ["multi-modal-qa", "science", "chemistry", "biology", "physics", "earth-science", "engineering", "geography", "history", "world-history", "civics", "economics", "global-studies", "grammar", "writing", "vocabulary", "natural-science", "language-science", "social-science"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int8"}, {"name": "hint", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "grade", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "skill", "dtype": "string"}, {"name": "lecture", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16416902, "num_examples": 12726}, {"name": "validation", "num_bytes": 5404896, "num_examples": 4241}, {"name": "test", "num_bytes": 5441676, "num_examples": 4241}], "download_size": 0, "dataset_size": 27263474}}
|
2023-02-25T04:23:01+00:00
|
1e3dd39b39787bddb20d7008e4d71c330d99f55b
|
## Dataset Description
A small subset of the [the-stack](https://huggingface.co/datasets/bigcode/the-stack) dataset, covering 87 programming languages, each with 100 random samples from the original dataset, intended for visualization.
## Languages
The dataset contains 87 programming languages:
````
'ada', 'agda', 'alloy', 'antlr', 'applescript', 'assembly', 'augeas', 'awk', 'batchfile', 'bison', 'bluespec', 'c',
'c++', 'c-sharp', 'clojure', 'cmake', 'coffeescript', 'common-lisp', 'css', 'cuda', 'dart', 'dockerfile', 'elixir',
'elm', 'emacs-lisp','erlang', 'f-sharp', 'fortran', 'glsl', 'go', 'groovy', 'haskell','html', 'idris', 'isabelle', 'java',
'java-server-pages', 'javascript', 'julia', 'kotlin', 'lean', 'literate-agda', 'literate-coffeescript', 'literate-haskell',
'lua', 'makefile', 'maple', 'markdown', 'mathematica', 'matlab', 'ocaml', 'pascal', 'perl', 'php', 'powershell', 'prolog',
'protocol-buffer', 'python', 'r', 'racket', 'restructuredtext', 'rmarkdown', 'ruby', 'rust', 'sas', 'scala', 'scheme',
'shell', 'smalltalk', 'solidity', 'sparql', 'sql', 'stan', 'standard-ml', 'stata', 'systemverilog', 'tcl', 'tcsh', 'tex',
'thrift', 'typescript', 'verilog', 'vhdl', 'visual-basic', 'xslt', 'yacc', 'zig'
````
## Dataset Structure
You can specify which language you want to load; python is loaded by default:
```python
# Load the Go subset:
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-smol-xs", "go")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['content', 'lang', 'size', 'ext', 'max_stars_count', 'avg_line_length', 'max_line_length', 'alphanum_fraction'],
#         num_rows: 100
#     })
# })
```
|
bigcode/the-stack-smol-xs
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"region:us"
] |
2023-02-10T11:47:50+00:00
|
{"annotations_creators": [], "language_creators": ["crowdsourced"], "language": ["code"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]}
|
2023-02-13T09:05:23+00:00
|
08658458a8137ddd744c6ba5b608557240b38268
|
# Dataset Card for "yoci_monkey"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Swatermelon/yoci_monkey
|
[
"region:us"
] |
2023-02-10T12:08:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 594675.0, "num_examples": 43}], "download_size": 0, "dataset_size": 594675.0}}
|
2023-02-14T07:03:46+00:00
|
1076853c2f3eefc47bba652dc00459018ce695d1
|
# Dataset Card for "stanford_plato"
## Description
This is a collection of articles from the Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/index.html).
This dataset includes 1776 articles, each covering one philosophical term, person, or topic. It has 8 features:
- shorturl: The short URL for the article. For example, the shorturl 'abduction' corresponds to the page https://plato.stanford.edu/entries/abduction/
- title: The title of the article.
- pubinfo: The publication information.
- **preamble**: The preface text of the article. The data is a list whose items are the paragraphs of the preface; the paragraph structure is deliberately preserved. You can merge the paragraphs with, for example, `''.join(data['preamble'])`.
- toc: Table of contents, also represented as a list. Each item is a dictionary: 'content_title' is the section title, and 'sub_toc' is a list of subsection titles.
- **main_text**: The main text of the article, also a list with one item per section. Each item is a dictionary: 'section_title' is the title of the section, 'main_content' is a list of the paragraphs preceding any subsections, and 'subsections' is a list of subsections, each itself a dictionary with its own title ('subsection_title') and list of paragraphs ('content').
- bibliography: List of bibliography entries.
- related_entries: List of entries related to the current one.
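A minimal sketch of how the nested 'preamble' and 'main_text' structures can be flattened back into plain article text (a toy record following the field layout above, not actual dataset content):

```python
# Toy record shaped like the 'preamble' and 'main_text' features described above.
doc = {
    "preamble": ["First preface paragraph.", "Second preface paragraph."],
    "main_text": [
        {
            "section_title": "1. Overview",
            "main_content": ["Opening paragraph of the section."],
            "subsections": [
                {"subsection_title": "1.1 Details", "content": ["A subsection paragraph."]}
            ],
        }
    ],
}

# Merge the preamble, then walk sections and their subsections in order.
parts = ["\n\n".join(doc["preamble"])]
for section in doc["main_text"]:
    parts.append(section["section_title"])
    parts.extend(section["main_content"])
    for sub in section["subsections"]:
        parts.append(sub["subsection_title"])
        parts.extend(sub["content"])
full_text = "\n\n".join(parts)

print(full_text.splitlines()[0])  # First preface paragraph.
```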
## Copyright and license
See the information on the official website: https://plato.stanford.edu/info.html#c
This is not an official release and may be removed later if it violates copyright. Responsibility for appropriate use rests with the user.
|
hugfaceguy0001/stanford_plato
|
[
"region:us"
] |
2023-02-10T12:47:16+00:00
|
{"dataset_info": {"features": [{"name": "shorturl", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "pubinfo", "dtype": "string"}, {"name": "preamble", "sequence": "string"}, {"name": "toc", "list": [{"name": "content_title", "dtype": "string"}, {"name": "sub_toc", "sequence": "string"}]}, {"name": "main_text", "list": [{"name": "main_content", "sequence": "string"}, {"name": "section_title", "dtype": "string"}, {"name": "subsections", "list": [{"name": "content", "sequence": "string"}, {"name": "subsection_title", "dtype": "string"}]}]}, {"name": "bibliography", "sequence": "string"}, {"name": "related_entries", "list": [{"name": "href", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 160405734, "num_examples": 1776}], "download_size": 90000475, "dataset_size": 160405734}}
|
2023-02-10T14:03:54+00:00
|
82e842316742f503cc46a9e919bbc87b1ca6e150
|
# Dataset Card for "cities-suburbs-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
olly4/cities-suburbs-small
|
[
"region:us"
] |
2023-02-10T12:55:10+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 872929816.432, "num_examples": 2202}], "download_size": 428529931, "dataset_size": 872929816.432}}
|
2023-02-10T13:06:21+00:00
|
3c119e7fba88fb20cf79910a7908d964baebda1d
|
# Dataset Card for "chai-synthetic-pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-synthetic-pairwise
|
[
"region:us"
] |
2023-02-10T13:01:05+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1690616961, "num_examples": 41128}, {"name": "test", "num_bytes": 47839521, "num_examples": 4570}], "download_size": 781208088, "dataset_size": 1738456482}}
|
2023-02-10T13:03:25+00:00
|
78847d5c474df635a3575b2a819532e87cd35cdf
|
# AutoTrain Dataset for project: histopathological_image_classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project histopathological_image_classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<700x460 RGB PIL image>",
"target": 6
},
{
"image": "<700x460 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['1', '2', '3', '4', '5', '6', '7', '8'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 333 |
| valid | 89 |
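Given the fields above, the integer `target` resolves to a class name through the ordered `ClassLabel` name list. A minimal pure-Python sketch of that mapping (mirroring, not calling, the `datasets` library), using the names list from this card:

```python
# Sketch of how a ClassLabel-style mapping resolves the integer `target`
# field to a class name. The names list is taken from this card.
class_names = ['1', '2', '3', '4', '5', '6', '7', '8']

def int2str(target: int) -> str:
    # ClassLabel stores names in a fixed order; the target is an index.
    return class_names[target]

# The first sample shown above has target 6, i.e. class '7'.
print(int2str(6))
```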
|
JoffreyMa/autotrain-data-histopathological_image_classification
|
[
"task_categories:image-classification",
"region:us"
] |
2023-02-10T13:05:16+00:00
|
{"task_categories": ["image-classification"]}
|
2023-02-10T13:25:34+00:00
|
f861285103b1ad536e964df4f2dd57323cd4256d
|
# Dataset Card for "chai-real-and-synthetic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AlekseyKorshuk/chai-real-and-synthetic
|
[
"region:us"
] |
2023-02-10T13:21:26+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3208000491, "num_examples": 134668}, {"name": "test", "num_bytes": 171664726, "num_examples": 18961}], "download_size": 1180192490, "dataset_size": 3379665217}}
|
2023-02-10T13:22:05+00:00
|
7aef09d4834c8d2d265ef58acc5aeba160f2bbc7
|
MatthewWhaley/test_dataset1
|
[
"task_categories:question-answering",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-02-10T13:25:16+00:00
|
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "task_categories": ["question-answering"]}
|
2023-02-10T13:29:07+00:00
|
|
6ebc752130d09a397ccf15e5daf781c9905f98d3
|
### Dataset Summary
This dataset helps you develop a machine-learning model that predicts pathogenic variants (positive labels) from their amino acid sequences.
**Used as an example to benchmark _biomerida_ as part of the Bio-Hackathon MENA region**
|
sequential-lab/TP53_protein_variants
|
[
"region:us"
] |
2023-02-10T14:04:25+00:00
|
{}
|
2023-02-10T14:27:18+00:00
|
bfc78b0d405307cb2c6d48976a317f0a1d6fa106
|
# Dataset Card for "dataset-affecthqnet-fer2013"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Piro17/dataset-affecthqnet-fer2013
|
[
"region:us"
] |
2023-02-10T14:07:09+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "anger", "1": "disgust", "2": "fear", "3": "happy", "4": "neutral", "5": "sad", "6": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 106887329.048, "num_examples": 56532}], "download_size": 7975090261, "dataset_size": 106887329.048}}
|
2023-02-10T14:13:09+00:00
|
9bd05027eb3b7a8feda41dfab25afd158e9c250c
|
# Dataset Card for "SynthDog-RU_EN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Nyaaneet/SynthDog-RU_EN
|
[
"region:us"
] |
2023-02-10T14:21:54+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 458147114.98, "num_examples": 9570}, {"name": "test", "num_bytes": 113499938.11, "num_examples": 2430}], "download_size": 558565516, "dataset_size": 571647053.09}}
|
2023-02-10T14:55:04+00:00
|
76b25ee1b7a376b8c4a204ef8c5fad7e1ffcb3bb
|
chrisfxd/flores
|
[
"license:openrail",
"region:us"
] |
2023-02-10T14:49:41+00:00
|
{"license": "openrail"}
|
2023-02-10T14:49:41+00:00
|
|
08c7b0f394f7d7f863a6abca58c1f682496441b9
|
# instruction dataset for code bugfix
## TODO:
- [ ] Add commit message as prompt or summary of the bug
- [ ] Add source code repos & file & commit_sha
|
zirui3/ManySStuBs4J-instructions-v0
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-02-10T15:07:19+00:00
|
{"license": "cc-by-4.0"}
|
2023-03-08T04:47:04+00:00
|
b562e57d6d9a3c5c4d67ddd334a969c67f93c005
|
# Dataset Card for "Europarl-ST-processed-mt-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-en
|
[
"region:us"
] |
2023-02-10T15:10:47+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 198087377, "num_examples": 602605}, {"name": "valid", "num_bytes": 27678568, "num_examples": 81968}, {"name": "test", "num_bytes": 29120332, "num_examples": 86170}], "download_size": 104863110, "dataset_size": 254886277}}
|
2023-02-10T16:04:39+00:00
|
7e5e15291fccfb528029621991762ec1941c740f
|
# Dataset Card for "Europarl-ST-processed-mt-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-es
|
[
"region:us"
] |
2023-02-10T15:19:21+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 191389757, "num_examples": 553896}, {"name": "valid", "num_bytes": 26548844, "num_examples": 74770}, {"name": "test", "num_bytes": 27538253, "num_examples": 77952}], "download_size": 95321190, "dataset_size": 245476854}}
|
2023-02-10T16:12:33+00:00
|
f6e34b4e22c9418ae3da0b1cf33d0a7999244e8a
|
# Dataset Card for "sq-anli_a3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-anli_a3
|
[
"language:sq",
"region:us"
] |
2023-02-10T15:20:03+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "train", "num_bytes": 10222003, "num_examples": 30000}, {"name": "validation", "num_bytes": 49754, "num_examples": 144}, {"name": "test", "num_bytes": 48945, "num_examples": 144}], "download_size": 6272043, "dataset_size": 10320702}}
|
2023-02-18T19:58:08+00:00
|
10b9eeb03ec04f574dad8f1e75a58484ea7e689d
|
# Dataset Card for "Europarl-ST-processed-mt-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-fr
|
[
"region:us"
] |
2023-02-10T15:27:34+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 199700180, "num_examples": 560866}, {"name": "valid", "num_bytes": 27382683, "num_examples": 74712}, {"name": "test", "num_bytes": 28363822, "num_examples": 77906}], "download_size": 95095990, "dataset_size": 255446685}}
|
2023-02-10T16:20:20+00:00
|
d81aa1666c63fd3d14d0e47b3d403e878496a92b
|
# Dataset Card for "Europarl-ST-processed-mt-it"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-it
|
[
"region:us"
] |
2023-02-10T15:36:06+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 179019467, "num_examples": 504773}, {"name": "valid", "num_bytes": 24674767, "num_examples": 67701}, {"name": "test", "num_bytes": 25588641, "num_examples": 70814}], "download_size": 89055953, "dataset_size": 229282875}}
|
2023-02-10T16:27:42+00:00
|
95e7e2b07c05c3065782a53d04362040af31ecad
|
# Dataset Card for "Europarl-ST-processed-mt-nl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-nl
|
[
"region:us"
] |
2023-02-10T15:44:38+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 138469976, "num_examples": 384704}, {"name": "valid", "num_bytes": 17984502, "num_examples": 48280}, {"name": "test", "num_bytes": 19576114, "num_examples": 53360}], "download_size": 66327284, "dataset_size": 176030592}}
|
2023-02-10T18:13:50+00:00
|
9e086906606038acfa1a33247876d546f83dc09b
|
# Dataset Card for "dataset-balanced-affecthqnet-fer2013"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Piro17/balancednumber-affecthqnet-fer2013
|
[
"region:us"
] |
2023-02-10T15:46:56+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "anger", "1": "disgust", "2": "fear", "3": "happy", "4": "neutral", "5": "sad", "6": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 40414185.188, "num_examples": 21343}], "download_size": 1835629540, "dataset_size": 40414185.188}}
|
2023-02-10T15:48:19+00:00
|
9f3f868611dcaec554fdef4771f5a811545b945e
|
# Dataset Card for "sq-babi_nli_counting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-babi_nli_counting
|
[
"language:sq",
"region:us"
] |
2023-02-10T15:56:45+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "not-entailed", "1": "entailed"}}}}], "splits": [{"name": "train", "num_bytes": 250135, "num_examples": 1000}, {"name": "validation", "num_bytes": 34065, "num_examples": 144}, {"name": "test", "num_bytes": 37455, "num_examples": 144}], "download_size": 60218, "dataset_size": 321655}}
|
2023-02-18T19:58:32+00:00
|
05c787b56ea336fdd2addfaca1a1d47d1b3f8c3a
|
# Dataset Card for "sq-babi_nli_positional-reasoning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-babi_nli_positional-reasoning
|
[
"language:sq",
"region:us"
] |
2023-02-10T16:21:18+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "not-entailed", "1": "entailed"}}}}], "splits": [{"name": "train", "num_bytes": 152195, "num_examples": 1000}, {"name": "validation", "num_bytes": 21191, "num_examples": 144}, {"name": "test", "num_bytes": 21022, "num_examples": 144}], "download_size": 17282, "dataset_size": 194408}}
|
2023-02-18T19:59:07+00:00
|
a9240eca6217d5d14ef73d2645e0de5cdcf1ca76
|
# Dataset Card for "sq-babi_nli_size-reasoning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
niv-al/sq-babi_nli_size-reasoning
|
[
"language:sq",
"region:us"
] |
2023-02-10T17:03:37+00:00
|
{"language": ["sq"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "not-entailed", "1": "entailed"}}}}], "splits": [{"name": "train", "num_bytes": 276956, "num_examples": 1000}, {"name": "validation", "num_bytes": 38395, "num_examples": 144}, {"name": "test", "num_bytes": 38898, "num_examples": 144}], "download_size": 32189, "dataset_size": 354249}}
|
2023-02-18T19:59:16+00:00
|
46b273656c822dfacc92ce28419c9f83c5693ebd
|
# Dataset Card for "Europarl-ST-processed-mt-pt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-pt
|
[
"region:us"
] |
2023-02-10T17:17:09+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 138047666, "num_examples": 384704}, {"name": "valid", "num_bytes": 17922979, "num_examples": 48280}, {"name": "test", "num_bytes": 19562527, "num_examples": 53360}], "download_size": 67380711, "dataset_size": 175533172}}
|
2023-02-10T18:04:10+00:00
|
d0aee2f08211c56141cd0fb1e3d942469a7c0d55
|
# Dataset Card for "VQAv2_sample_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_train
|
[
"region:us"
] |
2023-02-10T17:59:46+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 158100925.0, "num_examples": 1000}], "download_size": 155253264, "dataset_size": 158100925.0}}
|
2023-02-12T00:01:41+00:00
|
99487d2651df3799002b2fb3e455741744514a02
|
# Dataset Card for "VQAv2_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_validation
|
[
"region:us"
] |
2023-02-10T17:59:57+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, 
{"name": "caption", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "new_info_captions3", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": {"sequence": "string"}}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, {"name": "captions_module", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "clip_tags_LAION_ViT_H_14_2B", "sequence": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "caption", "dtype": "string"}, 
{"name": "captions_module", "sequence": "string"}, {"name": "captions_module_filter", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "Attributes_ViT_L_14_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_with_openai", "sequence": "string"}, {"name": "blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", "sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "blip_caption_topk_50_Salesforce_blip_image_captioning_base_multiple", "sequence": "string"}, {"name": "DETA_detections_deta_swin_large_o365_clip_caption_all_patches_Salesforce_blip_image_captioning_large__ViT_L_14", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float64"}, {"name": "captions_all_patches", 
"sequence": "string"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float64"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "blip_caption_Salesforce_blip_image_captioning_large_intensive", "sequence": "string"}, {"name": "blip_caption_Salesforce_blip_image_captioning_base_intensive", "sequence": "string"}], "splits": [{"name": "validation", "num_bytes": 511357022.0, "num_examples": 1000}], "download_size": 293191811, "dataset_size": 511357022.0}}
|
2023-06-08T23:06:10+00:00
|
d8448eca07d1505b6c9da24eddad0b2bc6a24e08
|
# Dataset Card for "VQAv2_sample_testdev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_testdev
|
[
"region:us"
] |
2023-02-10T18:00:08+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}, {"name": "blip_caption_beam_5", "dtype": "string"}], "splits": [{"name": "testdev", "num_bytes": 159593151.0, "num_examples": 1000}], "download_size": 156894337, "dataset_size": 159593151.0}}
|
2023-05-14T21:08:48+00:00
|
a03bae0c44445a77a6579d801ca3124fb0d023ca
|
# Dataset Card for "VQAv2_sample_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/VQAv2_sample_test
|
[
"region:us"
] |
2023-02-10T18:00:19+00:00
|
{"dataset_info": {"features": [{"name": "question_type", "dtype": "string"}, {"name": "multiple_choice_answer", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answers_original", "list": [{"name": "answer", "dtype": "string"}, {"name": "answer_confidence", "dtype": "string"}, {"name": "answer_id", "dtype": "int64"}]}, {"name": "id_image", "dtype": "int64"}, {"name": "answer_type", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "DETA_detections_deta_swin_large_o365_coco_classes", "list": [{"name": "attribute", "dtype": "string"}, {"name": "box", "sequence": "float32"}, {"name": "label", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "ratio", "dtype": "float32"}, {"name": "size", "dtype": "string"}, {"name": "tag", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 158160937.0, "num_examples": 1000}], "download_size": 155874093, "dataset_size": 158160937.0}}
|
2023-02-12T00:03:37+00:00
|
fa32e5077dd9580d0d9b0102ebdf36f41eb52981
|
# Dataset Card for "climate_fever_fixed"
### Dataset Summary
This dataset was created to aid our team in developing a model that performs climate change-related fact checking more accurately. Our approach is heavily influenced
by the work of the [ClimateBERT](https://climatebert.ai/about) team; like them, we leveraged a BERT language model for this task. This dataset is an
edited version of the [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset, hosted by HuggingFace. Climate_Fever is composed of climate-related documents
annotated with labels for fact-checking and misinformation. In the climate-plus project, we modified the dataset to remove redundancy
and keep only the essentials of a text-entailment problem: claim as the premise and evidence as the hypothesis.
### Data Fields
This dataset contains 7675 records, each composed of several attributes:
- `claim_id`: an `integer` feature, which serves as a unique identifier for each record/row.
- `claim`: a `string` feature, which contains the raw text of a given climate-related claim.
- `evidence`: a `string` feature, which provides free-text evidence relating to the claim.
- `label`: a `class label` feature representing the assigned class, where values are 0: "supports", 1: "refutes", or 2: "not enough info".
- `category`: a `string` feature, which provides additional detail about the particular focus of a given claim.
<br>
This dataset was then broken into train, test and validation sets to enable proper evaluation of our model. These splits contain the following amounts of data:
- `Train`: 4298 records
- `Test`: 1535 records
- `Val`: 1842 records
### Source Data
This dataset represents an evolved version of the original [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset, hosted by HuggingFace. It was adapted to meet
the needs of our team, as we attempted to solve a specific climate change-related task. The original dataset adopted the FEVER methodology, discussed in more detail [here](https://www.amazon.science/blog/the-fever-data-set-what-doesnt-kill-it-will-make-it-stronger).
Their original dataset consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence
sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs.
### Methodology
This dataset was curated by our team to reduce redundancy and keep only the essentials of a text-entailment problem: claim as the premise and evidence as the hypothesis.
For each given claim, there are multiple sentences of evidence. We decided to expand the one-to-many relation to one-to-one.
This resulted in a modified version of the climate_fever dataset that includes only one evidence sentence per claim.
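The one-to-many to one-to-one expansion described above can be sketched in plain Python. The field names follow this card; the sample rows are invented for illustration:

```python
# Hypothetical sketch of the expansion: each claim with several evidence
# sentences becomes several (claim, evidence) rows. Field names follow
# the card; the example rows are made up.
claims = [
    {"claim_id": 0, "claim": "Global sea levels are rising.",
     "evidences": ["Tide-gauge records show a long-term rise.",
                   "Satellite altimetry confirms the trend."]},
]

pairs = [
    {"claim_id": c["claim_id"], "claim": c["claim"], "evidence": e}
    for c in claims
    for e in c["evidences"]
]

# One row per claim-evidence pair, as in the modified dataset.
print(len(pairs))
```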
### Languages
The text contained in the dataset is entirely in English, consisting of real-world claims and evidence sentences drawn from the English Wikipedia. The associated BCP-47 code is [`en`](https://www.techonthenet.com/js/language_tags.php), to ensure clear labeling of language usage for downstream tasks and other future applications.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rexarski/climate_fever_fixed
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"climate",
"region:us"
] |
2023-02-10T18:01:46+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "climate_fever dataset with one-to-one claim-evidence pair", "dataset_info": {"features": [{"name": "claim_id", "dtype": "int64"}, {"name": "claim", "dtype": "string"}, {"name": "evidence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "SUPPORTS", "1": "REFUTES", "2": "NOT_ENOUGH_INFO"}}}}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1467456, "num_examples": 4298}, {"name": "test", "num_bytes": 526276, "num_examples": 1535}, {"name": "valid", "num_bytes": 635174, "num_examples": 1842}], "download_size": 1372892, "dataset_size": 2628906}, "tags": ["climate"]}
|
2023-04-30T02:46:52+00:00
|
12d4b26f6ae25ce7ca6639c458bbc02e7f552fdc
|
# Dataset Card for "Imagenet1k_sample_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Imagenet1k_sample_train
|
[
"region:us"
] |
2023-02-10T18:05:04+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "tench, Tinca tinca", "1": "goldfish, Carassius auratus", "2": "great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3": "tiger shark, Galeocerdo cuvieri", "4": "hammerhead, hammerhead shark", "5": "electric ray, crampfish, numbfish, torpedo", "6": "stingray", "7": "cock", "8": "hen", "9": "ostrich, Struthio camelus", "10": "brambling, Fringilla montifringilla", "11": "goldfinch, Carduelis carduelis", "12": "house finch, linnet, Carpodacus mexicanus", "13": "junco, snowbird", "14": "indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15": "robin, American robin, Turdus migratorius", "16": "bulbul", "17": "jay", "18": "magpie", "19": "chickadee", "20": "water ouzel, dipper", "21": "kite", "22": "bald eagle, American eagle, Haliaeetus leucocephalus", "23": "vulture", "24": "great grey owl, great gray owl, Strix nebulosa", "25": "European fire salamander, Salamandra salamandra", "26": "common newt, Triturus vulgaris", "27": "eft", "28": "spotted salamander, Ambystoma maculatum", "29": "axolotl, mud puppy, Ambystoma mexicanum", "30": "bullfrog, Rana catesbeiana", "31": "tree frog, tree-frog", "32": "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33": "loggerhead, loggerhead turtle, Caretta caretta", "34": "leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35": "mud turtle", "36": "terrapin", "37": "box turtle, box tortoise", "38": "banded gecko", "39": "common iguana, iguana, Iguana iguana", "40": "American chameleon, anole, Anolis carolinensis", "41": "whiptail, whiptail lizard", "42": "agama", "43": "frilled lizard, Chlamydosaurus kingi", "44": "alligator lizard", "45": "Gila monster, Heloderma suspectum", "46": "green lizard, Lacerta viridis", "47": "African chameleon, Chamaeleo chamaeleon", "48": "Komodo dragon, Komodo lizard, dragon lizard, giant 
lizard, Varanus komodoensis", "49": "African crocodile, Nile crocodile, Crocodylus niloticus", "50": "American alligator, Alligator mississipiensis", "51": "triceratops", "52": "thunder snake, worm snake, Carphophis amoenus", "53": "ringneck snake, ring-necked snake, ring snake", "54": "hognose snake, puff adder, sand viper", "55": "green snake, grass snake", "56": "king snake, kingsnake", "57": "garter snake, grass snake", "58": "water snake", "59": "vine snake", "60": "night snake, Hypsiglena torquata", "61": "boa constrictor, Constrictor constrictor", "62": "rock python, rock snake, Python sebae", "63": "Indian cobra, Naja naja", "64": "green mamba", "65": "sea snake", "66": "horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67": "diamondback, diamondback rattlesnake, Crotalus adamanteus", "68": "sidewinder, horned rattlesnake, Crotalus cerastes", "69": "trilobite", "70": "harvestman, daddy longlegs, Phalangium opilio", "71": "scorpion", "72": "black and gold garden spider, Argiope aurantia", "73": "barn spider, Araneus cavaticus", "74": "garden spider, Aranea diademata", "75": "black widow, Latrodectus mactans", "76": "tarantula", "77": "wolf spider, hunting spider", "78": "tick", "79": "centipede", "80": "black grouse", "81": "ptarmigan", "82": "ruffed grouse, partridge, Bonasa umbellus", "83": "prairie chicken, prairie grouse, prairie fowl", "84": "peacock", "85": "quail", "86": "partridge", "87": "African grey, African gray, Psittacus erithacus", "88": "macaw", "89": "sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90": "lorikeet", "91": "coucal", "92": "bee eater", "93": "hornbill", "94": "hummingbird", "95": "jacamar", "96": "toucan", "97": "drake", "98": "red-breasted merganser, Mergus serrator", "99": "goose", "100": "black swan, Cygnus atratus", "101": "tusker", "102": "echidna, spiny anteater, anteater", "103": "platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104": "wallaby, 
brush kangaroo", "105": "koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106": "wombat", "107": "jellyfish", "108": "sea anemone, anemone", "109": "brain coral", "110": "flatworm, platyhelminth", "111": "nematode, nematode worm, roundworm", "112": "conch", "113": "snail", "114": "slug", "115": "sea slug, nudibranch", "116": "chiton, coat-of-mail shell, sea cradle, polyplacophore", "117": "chambered nautilus, pearly nautilus, nautilus", "118": "Dungeness crab, Cancer magister", "119": "rock crab, Cancer irroratus", "120": "fiddler crab", "121": "king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122": "American lobster, Northern lobster, Maine lobster, Homarus americanus", "123": "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124": "crayfish, crawfish, crawdad, crawdaddy", "125": "hermit crab", "126": "isopod", "127": "white stork, Ciconia ciconia", "128": "black stork, Ciconia nigra", "129": "spoonbill", "130": "flamingo", "131": "little blue heron, Egretta caerulea", "132": "American egret, great white heron, Egretta albus", "133": "bittern", "134": "crane", "135": "limpkin, Aramus pictus", "136": "European gallinule, Porphyrio porphyrio", "137": "American coot, marsh hen, mud hen, water hen, Fulica americana", "138": "bustard", "139": "ruddy turnstone, Arenaria interpres", "140": "red-backed sandpiper, dunlin, Erolia alpina", "141": "redshank, Tringa totanus", "142": "dowitcher", "143": "oystercatcher, oyster catcher", "144": "pelican", "145": "king penguin, Aptenodytes patagonica", "146": "albatross, mollymawk", "147": "grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148": "killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149": "dugong, Dugong dugon", "150": "sea lion", "151": "Chihuahua", "152": "Japanese spaniel", "153": "Maltese dog, Maltese terrier, Maltese", "154": "Pekinese, Pekingese, Peke", "155": "Shih-Tzu", 
"156": "Blenheim spaniel", "157": "papillon", "158": "toy terrier", "159": "Rhodesian ridgeback", "160": "Afghan hound, Afghan", "161": "basset, basset hound", "162": "beagle", "163": "bloodhound, sleuthhound", "164": "bluetick", "165": "black-and-tan coonhound", "166": "Walker hound, Walker foxhound", "167": "English foxhound", "168": "redbone", "169": "borzoi, Russian wolfhound", "170": "Irish wolfhound", "171": "Italian greyhound", "172": "whippet", "173": "Ibizan hound, Ibizan Podenco", "174": "Norwegian elkhound, elkhound", "175": "otterhound, otter hound", "176": "Saluki, gazelle hound", "177": "Scottish deerhound, deerhound", "178": "Weimaraner", "179": "Staffordshire bullterrier, Staffordshire bull terrier", "180": "American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181": "Bedlington terrier", "182": "Border terrier", "183": "Kerry blue terrier", "184": "Irish terrier", "185": "Norfolk terrier", "186": "Norwich terrier", "187": "Yorkshire terrier", "188": "wire-haired fox terrier", "189": "Lakeland terrier", "190": "Sealyham terrier, Sealyham", "191": "Airedale, Airedale terrier", "192": "cairn, cairn terrier", "193": "Australian terrier", "194": "Dandie Dinmont, Dandie Dinmont terrier", "195": "Boston bull, Boston terrier", "196": "miniature schnauzer", "197": "giant schnauzer", "198": "standard schnauzer", "199": "Scotch terrier, Scottish terrier, Scottie", "200": "Tibetan terrier, chrysanthemum dog", "201": "silky terrier, Sydney silky", "202": "soft-coated wheaten terrier", "203": "West Highland white terrier", "204": "Lhasa, Lhasa apso", "205": "flat-coated retriever", "206": "curly-coated retriever", "207": "golden retriever", "208": "Labrador retriever", "209": "Chesapeake Bay retriever", "210": "German short-haired pointer", "211": "vizsla, Hungarian pointer", "212": "English setter", "213": "Irish setter, red setter", "214": "Gordon setter", "215": "Brittany spaniel", "216": "clumber, clumber 
spaniel", "217": "English springer, English springer spaniel", "218": "Welsh springer spaniel", "219": "cocker spaniel, English cocker spaniel, cocker", "220": "Sussex spaniel", "221": "Irish water spaniel", "222": "kuvasz", "223": "schipperke", "224": "groenendael", "225": "malinois", "226": "briard", "227": "kelpie", "228": "komondor", "229": "Old English sheepdog, bobtail", "230": "Shetland sheepdog, Shetland sheep dog, Shetland", "231": "collie", "232": "Border collie", "233": "Bouvier des Flandres, Bouviers des Flandres", "234": "Rottweiler", "235": "German shepherd, German shepherd dog, German police dog, alsatian", "236": "Doberman, Doberman pinscher", "237": "miniature pinscher", "238": "Greater Swiss Mountain dog", "239": "Bernese mountain dog", "240": "Appenzeller", "241": "EntleBucher", "242": "boxer", "243": "bull mastiff", "244": "Tibetan mastiff", "245": "French bulldog", "246": "Great Dane", "247": "Saint Bernard, St Bernard", "248": "Eskimo dog, husky", "249": "malamute, malemute, Alaskan malamute", "250": "Siberian husky", "251": "dalmatian, coach dog, carriage dog", "252": "affenpinscher, monkey pinscher, monkey dog", "253": "basenji", "254": "pug, pug-dog", "255": "Leonberg", "256": "Newfoundland, Newfoundland dog", "257": "Great Pyrenees", "258": "Samoyed, Samoyede", "259": "Pomeranian", "260": "chow, chow chow", "261": "keeshond", "262": "Brabancon griffon", "263": "Pembroke, Pembroke Welsh corgi", "264": "Cardigan, Cardigan Welsh corgi", "265": "toy poodle", "266": "miniature poodle", "267": "standard poodle", "268": "Mexican hairless", "269": "timber wolf, grey wolf, gray wolf, Canis lupus", "270": "white wolf, Arctic wolf, Canis lupus tundrarum", "271": "red wolf, maned wolf, Canis rufus, Canis niger", "272": "coyote, prairie wolf, brush wolf, Canis latrans", "273": "dingo, warrigal, warragal, Canis dingo", "274": "dhole, Cuon alpinus", "275": "African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276": "hyena, hyaena", "277": 
"red fox, Vulpes vulpes", "278": "kit fox, Vulpes macrotis", "279": "Arctic fox, white fox, Alopex lagopus", "280": "grey fox, gray fox, Urocyon cinereoargenteus", "281": "tabby, tabby cat", "282": "tiger cat", "283": "Persian cat", "284": "Siamese cat, Siamese", "285": "Egyptian cat", "286": "cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287": "lynx, catamount", "288": "leopard, Panthera pardus", "289": "snow leopard, ounce, Panthera uncia", "290": "jaguar, panther, Panthera onca, Felis onca", "291": "lion, king of beasts, Panthera leo", "292": "tiger, Panthera tigris", "293": "cheetah, chetah, Acinonyx jubatus", "294": "brown bear, bruin, Ursus arctos", "295": "American black bear, black bear, Ursus americanus, Euarctos americanus", "296": "ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297": "sloth bear, Melursus ursinus, Ursus ursinus", "298": "mongoose", "299": "meerkat, mierkat", "300": "tiger beetle", "301": "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302": "ground beetle, carabid beetle", "303": "long-horned beetle, longicorn, longicorn beetle", "304": "leaf beetle, chrysomelid", "305": "dung beetle", "306": "rhinoceros beetle", "307": "weevil", "308": "fly", "309": "bee", "310": "ant, emmet, pismire", "311": "grasshopper, hopper", "312": "cricket", "313": "walking stick, walkingstick, stick insect", "314": "cockroach, roach", "315": "mantis, mantid", "316": "cicada, cicala", "317": "leafhopper", "318": "lacewing, lacewing fly", "319": "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320": "damselfly", "321": "admiral", "322": "ringlet, ringlet butterfly", "323": "monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324": "cabbage butterfly", "325": "sulphur butterfly, sulfur butterfly", "326": "lycaenid, lycaenid butterfly", "327": "starfish, sea star", "328": "sea urchin", "329": "sea cucumber, 
holothurian", "330": "wood rabbit, cottontail, cottontail rabbit", "331": "hare", "332": "Angora, Angora rabbit", "333": "hamster", "334": "porcupine, hedgehog", "335": "fox squirrel, eastern fox squirrel, Sciurus niger", "336": "marmot", "337": "beaver", "338": "guinea pig, Cavia cobaya", "339": "sorrel", "340": "zebra", "341": "hog, pig, grunter, squealer, Sus scrofa", "342": "wild boar, boar, Sus scrofa", "343": "warthog", "344": "hippopotamus, hippo, river horse, Hippopotamus amphibius", "345": "ox", "346": "water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347": "bison", "348": "ram, tup", "349": "bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350": "ibex, Capra ibex", "351": "hartebeest", "352": "impala, Aepyceros melampus", "353": "gazelle", "354": "Arabian camel, dromedary, Camelus dromedarius", "355": "llama", "356": "weasel", "357": "mink", "358": "polecat, fitch, foulmart, foumart, Mustela putorius", "359": "black-footed ferret, ferret, Mustela nigripes", "360": "otter", "361": "skunk, polecat, wood pussy", "362": "badger", "363": "armadillo", "364": "three-toed sloth, ai, Bradypus tridactylus", "365": "orangutan, orang, orangutang, Pongo pygmaeus", "366": "gorilla, Gorilla gorilla", "367": "chimpanzee, chimp, Pan troglodytes", "368": "gibbon, Hylobates lar", "369": "siamang, Hylobates syndactylus, Symphalangus syndactylus", "370": "guenon, guenon monkey", "371": "patas, hussar monkey, Erythrocebus patas", "372": "baboon", "373": "macaque", "374": "langur", "375": "colobus, colobus monkey", "376": "proboscis monkey, Nasalis larvatus", "377": "marmoset", "378": "capuchin, ringtail, Cebus capucinus", "379": "howler monkey, howler", "380": "titi, titi monkey", "381": "spider monkey, Ateles geoffroyi", "382": "squirrel monkey, Saimiri sciureus", "383": "Madagascar cat, ring-tailed lemur, Lemur catta", "384": "indri, indris, Indri indri, Indri brevicaudatus", "385": "Indian elephant, Elephas 
maximus", "386": "African elephant, Loxodonta africana", "387": "lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388": "giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389": "barracouta, snoek", "390": "eel", "391": "coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392": "rock beauty, Holocanthus tricolor", "393": "anemone fish", "394": "sturgeon", "395": "gar, garfish, garpike, billfish, Lepisosteus osseus", "396": "lionfish", "397": "puffer, pufferfish, blowfish, globefish", "398": "abacus", "399": "abaya", "400": "academic gown, academic robe, judge's robe", "401": "accordion, piano accordion, squeeze box", "402": "acoustic guitar", "403": "aircraft carrier, carrier, flattop, attack aircraft carrier", "404": "airliner", "405": "airship, dirigible", "406": "altar", "407": "ambulance", "408": "amphibian, amphibious vehicle", "409": "analog clock", "410": "apiary, bee house", "411": "apron", "412": "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413": "assault rifle, assault gun", "414": "backpack, back pack, knapsack, packsack, rucksack, haversack", "415": "bakery, bakeshop, bakehouse", "416": "balance beam, beam", "417": "balloon", "418": "ballpoint, ballpoint pen, ballpen, Biro", "419": "Band Aid", "420": "banjo", "421": "bannister, banister, balustrade, balusters, handrail", "422": "barbell", "423": "barber chair", "424": "barbershop", "425": "barn", "426": "barometer", "427": "barrel, cask", "428": "barrow, garden cart, lawn cart, wheelbarrow", "429": "baseball", "430": "basketball", "431": "bassinet", "432": "bassoon", "433": "bathing cap, swimming cap", "434": "bath towel", "435": "bathtub, bathing tub, bath, tub", "436": "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437": "beacon, lighthouse, beacon light, pharos", "438": "beaker", "439": "bearskin, busby, shako", "440": "beer bottle", 
"441": "beer glass", "442": "bell cote, bell cot", "443": "bib", "444": "bicycle-built-for-two, tandem bicycle, tandem", "445": "bikini, two-piece", "446": "binder, ring-binder", "447": "binoculars, field glasses, opera glasses", "448": "birdhouse", "449": "boathouse", "450": "bobsled, bobsleigh, bob", "451": "bolo tie, bolo, bola tie, bola", "452": "bonnet, poke bonnet", "453": "bookcase", "454": "bookshop, bookstore, bookstall", "455": "bottlecap", "456": "bow", "457": "bow tie, bow-tie, bowtie", "458": "brass, memorial tablet, plaque", "459": "brassiere, bra, bandeau", "460": "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461": "breastplate, aegis, egis", "462": "broom", "463": "bucket, pail", "464": "buckle", "465": "bulletproof vest", "466": "bullet train, bullet", "467": "butcher shop, meat market", "468": "cab, hack, taxi, taxicab", "469": "caldron, cauldron", "470": "candle, taper, wax light", "471": "cannon", "472": "canoe", "473": "can opener, tin opener", "474": "cardigan", "475": "car mirror", "476": "carousel, carrousel, merry-go-round, roundabout, whirligig", "477": "carpenter's kit, tool kit", "478": "carton", "479": "car wheel", "480": "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481": "cassette", "482": "cassette player", "483": "castle", "484": "catamaran", "485": "CD player", "486": "cello, violoncello", "487": "cellular telephone, cellular phone, cellphone, cell, mobile phone", "488": "chain", "489": "chainlink fence", "490": "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491": "chain saw, chainsaw", "492": "chest", "493": "chiffonier, commode", "494": "chime, bell, gong", "495": "china cabinet, china closet", "496": "Christmas stocking", "497": "church, church building", "498": "cinema, movie theater, movie theatre, movie house, picture palace", "499": "cleaver, meat cleaver, chopper", "500": "cliff dwelling", 
"501": "cloak", "502": "clog, geta, patten, sabot", "503": "cocktail shaker", "504": "coffee mug", "505": "coffeepot", "506": "coil, spiral, volute, whorl, helix", "507": "combination lock", "508": "computer keyboard, keypad", "509": "confectionery, confectionary, candy store", "510": "container ship, containership, container vessel", "511": "convertible", "512": "corkscrew, bottle screw", "513": "cornet, horn, trumpet, trump", "514": "cowboy boot", "515": "cowboy hat, ten-gallon hat", "516": "cradle", "517": "crane2", "518": "crash helmet", "519": "crate", "520": "crib, cot", "521": "Crock Pot", "522": "croquet ball", "523": "crutch", "524": "cuirass", "525": "dam, dike, dyke", "526": "desk", "527": "desktop computer", "528": "dial telephone, dial phone", "529": "diaper, nappy, napkin", "530": "digital clock", "531": "digital watch", "532": "dining table, board", "533": "dishrag, dishcloth", "534": "dishwasher, dish washer, dishwashing machine", "535": "disk brake, disc brake", "536": "dock, dockage, docking facility", "537": "dogsled, dog sled, dog sleigh", "538": "dome", "539": "doormat, welcome mat", "540": "drilling platform, offshore rig", "541": "drum, membranophone, tympan", "542": "drumstick", "543": "dumbbell", "544": "Dutch oven", "545": "electric fan, blower", "546": "electric guitar", "547": "electric locomotive", "548": "entertainment center", "549": "envelope", "550": "espresso maker", "551": "face powder", "552": "feather boa, boa", "553": "file, file cabinet, filing cabinet", "554": "fireboat", "555": "fire engine, fire truck", "556": "fire screen, fireguard", "557": "flagpole, flagstaff", "558": "flute, transverse flute", "559": "folding chair", "560": "football helmet", "561": "forklift", "562": "fountain", "563": "fountain pen", "564": "four-poster", "565": "freight car", "566": "French horn, horn", "567": "frying pan, frypan, skillet", "568": "fur coat", "569": "garbage truck, dustcart", "570": "gasmask, respirator, gas helmet", "571": "gas 
pump, gasoline pump, petrol pump, island dispenser", "572": "goblet", "573": "go-kart", "574": "golf ball", "575": "golfcart, golf cart", "576": "gondola", "577": "gong, tam-tam", "578": "gown", "579": "grand piano, grand", "580": "greenhouse, nursery, glasshouse", "581": "grille, radiator grille", "582": "grocery store, grocery, food market, market", "583": "guillotine", "584": "hair slide", "585": "hair spray", "586": "half track", "587": "hammer", "588": "hamper", "589": "hand blower, blow dryer, blow drier, hair dryer, hair drier", "590": "hand-held computer, hand-held microcomputer", "591": "handkerchief, hankie, hanky, hankey", "592": "hard disc, hard disk, fixed disk", "593": "harmonica, mouth organ, harp, mouth harp", "594": "harp", "595": "harvester, reaper", "596": "hatchet", "597": "holster", "598": "home theater, home theatre", "599": "honeycomb", "600": "hook, claw", "601": "hoopskirt, crinoline", "602": "horizontal bar, high bar", "603": "horse cart, horse-cart", "604": "hourglass", "605": "iPod", "606": "iron, smoothing iron", "607": "jack-o'-lantern", "608": "jean, blue jean, denim", "609": "jeep, landrover", "610": "jersey, T-shirt, tee shirt", "611": "jigsaw puzzle", "612": "jinrikisha, ricksha, rickshaw", "613": "joystick", "614": "kimono", "615": "knee pad", "616": "knot", "617": "lab coat, laboratory coat", "618": "ladle", "619": "lampshade, lamp shade", "620": "laptop, laptop computer", "621": "lawn mower, mower", "622": "lens cap, lens cover", "623": "letter opener, paper knife, paperknife", "624": "library", "625": "lifeboat", "626": "lighter, light, igniter, ignitor", "627": "limousine, limo", "628": "liner, ocean liner", "629": "lipstick, lip rouge", "630": "Loafer", "631": "lotion", "632": "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633": "loupe, jeweler's loupe", "634": "lumbermill, sawmill", "635": "magnetic compass", "636": "mailbag, postbag", "637": "mailbox, letter box", "638": "maillot", "639": 
"maillot, tank suit", "640": "manhole cover", "641": "maraca", "642": "marimba, xylophone", "643": "mask", "644": "matchstick", "645": "maypole", "646": "maze, labyrinth", "647": "measuring cup", "648": "medicine chest, medicine cabinet", "649": "megalith, megalithic structure", "650": "microphone, mike", "651": "microwave, microwave oven", "652": "military uniform", "653": "milk can", "654": "minibus", "655": "miniskirt, mini", "656": "minivan", "657": "missile", "658": "mitten", "659": "mixing bowl", "660": "mobile home, manufactured home", "661": "Model T", "662": "modem", "663": "monastery", "664": "monitor", "665": "moped", "666": "mortar", "667": "mortarboard", "668": "mosque", "669": "mosquito net", "670": "motor scooter, scooter", "671": "mountain bike, all-terrain bike, off-roader", "672": "mountain tent", "673": "mouse, computer mouse", "674": "mousetrap", "675": "moving van", "676": "muzzle", "677": "nail", "678": "neck brace", "679": "necklace", "680": "nipple", "681": "notebook, notebook computer", "682": "obelisk", "683": "oboe, hautboy, hautbois", "684": "ocarina, sweet potato", "685": "odometer, hodometer, mileometer, milometer", "686": "oil filter", "687": "organ, pipe organ", "688": "oscilloscope, scope, cathode-ray oscilloscope, CRO", "689": "overskirt", "690": "oxcart", "691": "oxygen mask", "692": "packet", "693": "paddle, boat paddle", "694": "paddlewheel, paddle wheel", "695": "padlock", "696": "paintbrush", "697": "pajama, pyjama, pj's, jammies", "698": "palace", "699": "panpipe, pandean pipe, syrinx", "700": "paper towel", "701": "parachute, chute", "702": "parallel bars, bars", "703": "park bench", "704": "parking meter", "705": "passenger car, coach, carriage", "706": "patio, terrace", "707": "pay-phone, pay-station", "708": "pedestal, plinth, footstall", "709": "pencil box, pencil case", "710": "pencil sharpener", "711": "perfume, essence", "712": "Petri dish", "713": "photocopier", "714": "pick, plectrum, plectron", "715": 
"pickelhaube", "716": "picket fence, paling", "717": "pickup, pickup truck", "718": "pier", "719": "piggy bank, penny bank", "720": "pill bottle", "721": "pillow", "722": "ping-pong ball", "723": "pinwheel", "724": "pirate, pirate ship", "725": "pitcher, ewer", "726": "plane, carpenter's plane, woodworking plane", "727": "planetarium", "728": "plastic bag", "729": "plate rack", "730": "plow, plough", "731": "plunger, plumber's helper", "732": "Polaroid camera, Polaroid Land camera", "733": "pole", "734": "police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735": "poncho", "736": "pool table, billiard table, snooker table", "737": "pop bottle, soda bottle", "738": "pot, flowerpot", "739": "potter's wheel", "740": "power drill", "741": "prayer rug, prayer mat", "742": "printer", "743": "prison, prison house", "744": "projectile, missile", "745": "projector", "746": "puck, hockey puck", "747": "punching bag, punch bag, punching ball, punchball", "748": "purse", "749": "quill, quill pen", "750": "quilt, comforter, comfort, puff", "751": "racer, race car, racing car", "752": "racket, racquet", "753": "radiator", "754": "radio, wireless", "755": "radio telescope, radio reflector", "756": "rain barrel", "757": "recreational vehicle, RV, R.V.", "758": "reel", "759": "reflex camera", "760": "refrigerator, icebox", "761": "remote control, remote", "762": "restaurant, eating house, eating place, eatery", "763": "revolver, six-gun, six-shooter", "764": "rifle", "765": "rocking chair, rocker", "766": "rotisserie", "767": "rubber eraser, rubber, pencil eraser", "768": "rugby ball", "769": "rule, ruler", "770": "running shoe", "771": "safe", "772": "safety pin", "773": "saltshaker, salt shaker", "774": "sandal", "775": "sarong", "776": "sax, saxophone", "777": "scabbard", "778": "scale, weighing machine", "779": "school bus", "780": "schooner", "781": "scoreboard", "782": "screen, CRT screen", "783": "screw", "784": "screwdriver", "785": "seat belt, 
seatbelt", "786": "sewing machine", "787": "shield, buckler", "788": "shoe shop, shoe-shop, shoe store", "789": "shoji", "790": "shopping basket", "791": "shopping cart", "792": "shovel", "793": "shower cap", "794": "shower curtain", "795": "ski", "796": "ski mask", "797": "sleeping bag", "798": "slide rule, slipstick", "799": "sliding door", "800": "slot, one-armed bandit", "801": "snorkel", "802": "snowmobile", "803": "snowplow, snowplough", "804": "soap dispenser", "805": "soccer ball", "806": "sock", "807": "solar dish, solar collector, solar furnace", "808": "sombrero", "809": "soup bowl", "810": "space bar", "811": "space heater", "812": "space shuttle", "813": "spatula", "814": "speedboat", "815": "spider web, spider's web", "816": "spindle", "817": "sports car, sport car", "818": "spotlight, spot", "819": "stage", "820": "steam locomotive", "821": "steel arch bridge", "822": "steel drum", "823": "stethoscope", "824": "stole", "825": "stone wall", "826": "stopwatch, stop watch", "827": "stove", "828": "strainer", "829": "streetcar, tram, tramcar, trolley, trolley car", "830": "stretcher", "831": "studio couch, day bed", "832": "stupa, tope", "833": "submarine, pigboat, sub, U-boat", "834": "suit, suit of clothes", "835": "sundial", "836": "sunglass", "837": "sunglasses, dark glasses, shades", "838": "sunscreen, sunblock, sun blocker", "839": "suspension bridge", "840": "swab, swob, mop", "841": "sweatshirt", "842": "swimming trunks, bathing trunks", "843": "swing", "844": "switch, electric switch, electrical switch", "845": "syringe", "846": "table lamp", "847": "tank, army tank, armored combat vehicle, armoured combat vehicle", "848": "tape player", "849": "teapot", "850": "teddy, teddy bear", "851": "television, television system", "852": "tennis ball", "853": "thatch, thatched roof", "854": "theater curtain, theatre curtain", "855": "thimble", "856": "thresher, thrasher, threshing machine", "857": "throne", "858": "tile roof", "859": "toaster", "860": 
"tobacco shop, tobacconist shop, tobacconist", "861": "toilet seat", "862": "torch", "863": "totem pole", "864": "tow truck, tow car, wrecker", "865": "toyshop", "866": "tractor", "867": "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868": "tray", "869": "trench coat", "870": "tricycle, trike, velocipede", "871": "trimaran", "872": "tripod", "873": "triumphal arch", "874": "trolleybus, trolley coach, trackless trolley", "875": "trombone", "876": "tub, vat", "877": "turnstile", "878": "typewriter keyboard", "879": "umbrella", "880": "unicycle, monocycle", "881": "upright, upright piano", "882": "vacuum, vacuum cleaner", "883": "vase", "884": "vault", "885": "velvet", "886": "vending machine", "887": "vestment", "888": "viaduct", "889": "violin, fiddle", "890": "volleyball", "891": "waffle iron", "892": "wall clock", "893": "wallet, billfold, notecase, pocketbook", "894": "wardrobe, closet, press", "895": "warplane, military plane", "896": "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897": "washer, automatic washer, washing machine", "898": "water bottle", "899": "water jug", "900": "water tower", "901": "whiskey jug", "902": "whistle", "903": "wig", "904": "window screen", "905": "window shade", "906": "Windsor tie", "907": "wine bottle", "908": "wing", "909": "wok", "910": "wooden spoon", "911": "wool, woolen, woollen", "912": "worm fence, snake fence, snake-rail fence, Virginia fence", "913": "wreck", "914": "yawl", "915": "yurt", "916": "web site, website, internet site, site", "917": "comic book", "918": "crossword puzzle, crossword", "919": "street sign", "920": "traffic light, traffic signal, stoplight", "921": "book jacket, dust cover, dust jacket, dust wrapper", "922": "menu", "923": "plate", "924": "guacamole", "925": "consomme", "926": "hot pot, hotpot", "927": "trifle", "928": "ice cream, icecream", "929": "ice lolly, lolly, lollipop, popsicle", "930": "French loaf", "931": "bagel, beigel", "932": "pretzel", 
"933": "cheeseburger", "934": "hotdog, hot dog, red hot", "935": "mashed potato", "936": "head cabbage", "937": "broccoli", "938": "cauliflower", "939": "zucchini, courgette", "940": "spaghetti squash", "941": "acorn squash", "942": "butternut squash", "943": "cucumber, cuke", "944": "artichoke, globe artichoke", "945": "bell pepper", "946": "cardoon", "947": "mushroom", "948": "Granny Smith", "949": "strawberry", "950": "orange", "951": "lemon", "952": "fig", "953": "pineapple, ananas", "954": "banana", "955": "jackfruit, jak, jack", "956": "custard apple", "957": "pomegranate", "958": "hay", "959": "carbonara", "960": "chocolate sauce, chocolate syrup", "961": "dough", "962": "meat loaf, meatloaf", "963": "pizza, pizza pie", "964": "potpie", "965": "burrito", "966": "red wine", "967": "espresso", "968": "cup", "969": "eggnog", "970": "alp", "971": "bubble", "972": "cliff, drop, drop-off", "973": "coral reef", "974": "geyser", "975": "lakeside, lakeshore", "976": "promontory, headland, head, foreland", "977": "sandbar, sand bar", "978": "seashore, coast, seacoast, sea-coast", "979": "valley, vale", "980": "volcano", "981": "ballplayer, baseball player", "982": "groom, bridegroom", "983": "scuba diver", "984": "rapeseed", "985": "daisy", "986": "yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987": "corn", "988": "acorn", "989": "hip, rose hip, rosehip", "990": "buckeye, horse chestnut, conker", "991": "coral fungus", "992": "agaric", "993": "gyromitra", "994": "stinkhorn, carrion fungus", "995": "earthstar", "996": "hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997": "bolete", "998": "ear, spike, capitulum", "999": "toilet tissue, toilet paper, bathroom tissue"}}}}, {"name": "lexicon", "sequence": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 349126026.0, "num_examples": 3000}], "download_size": 340943693, "dataset_size": 349126026.0}}
|
2023-02-10T18:05:32+00:00
|
4fdd6d26a7b3f2466aa7e12dd9de3bfa4dd69a82
|
# Dataset Card for "Imagenet1k_sample_validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Multimodal-Fatima/Imagenet1k_sample_validation
|
[
"region:us"
] |
2023-02-10T18:05:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "tench, Tinca tinca", "1": "goldfish, Carassius auratus", "2": "great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias", "3": "tiger shark, Galeocerdo cuvieri", "4": "hammerhead, hammerhead shark", "5": "electric ray, crampfish, numbfish, torpedo", "6": "stingray", "7": "cock", "8": "hen", "9": "ostrich, Struthio camelus", "10": "brambling, Fringilla montifringilla", "11": "goldfinch, Carduelis carduelis", "12": "house finch, linnet, Carpodacus mexicanus", "13": "junco, snowbird", "14": "indigo bunting, indigo finch, indigo bird, Passerina cyanea", "15": "robin, American robin, Turdus migratorius", "16": "bulbul", "17": "jay", "18": "magpie", "19": "chickadee", "20": "water ouzel, dipper", "21": "kite", "22": "bald eagle, American eagle, Haliaeetus leucocephalus", "23": "vulture", "24": "great grey owl, great gray owl, Strix nebulosa", "25": "European fire salamander, Salamandra salamandra", "26": "common newt, Triturus vulgaris", "27": "eft", "28": "spotted salamander, Ambystoma maculatum", "29": "axolotl, mud puppy, Ambystoma mexicanum", "30": "bullfrog, Rana catesbeiana", "31": "tree frog, tree-frog", "32": "tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui", "33": "loggerhead, loggerhead turtle, Caretta caretta", "34": "leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea", "35": "mud turtle", "36": "terrapin", "37": "box turtle, box tortoise", "38": "banded gecko", "39": "common iguana, iguana, Iguana iguana", "40": "American chameleon, anole, Anolis carolinensis", "41": "whiptail, whiptail lizard", "42": "agama", "43": "frilled lizard, Chlamydosaurus kingi", "44": "alligator lizard", "45": "Gila monster, Heloderma suspectum", "46": "green lizard, Lacerta viridis", "47": "African chameleon, Chamaeleo chamaeleon", "48": "Komodo dragon, Komodo lizard, dragon lizard, giant 
lizard, Varanus komodoensis", "49": "African crocodile, Nile crocodile, Crocodylus niloticus", "50": "American alligator, Alligator mississipiensis", "51": "triceratops", "52": "thunder snake, worm snake, Carphophis amoenus", "53": "ringneck snake, ring-necked snake, ring snake", "54": "hognose snake, puff adder, sand viper", "55": "green snake, grass snake", "56": "king snake, kingsnake", "57": "garter snake, grass snake", "58": "water snake", "59": "vine snake", "60": "night snake, Hypsiglena torquata", "61": "boa constrictor, Constrictor constrictor", "62": "rock python, rock snake, Python sebae", "63": "Indian cobra, Naja naja", "64": "green mamba", "65": "sea snake", "66": "horned viper, cerastes, sand viper, horned asp, Cerastes cornutus", "67": "diamondback, diamondback rattlesnake, Crotalus adamanteus", "68": "sidewinder, horned rattlesnake, Crotalus cerastes", "69": "trilobite", "70": "harvestman, daddy longlegs, Phalangium opilio", "71": "scorpion", "72": "black and gold garden spider, Argiope aurantia", "73": "barn spider, Araneus cavaticus", "74": "garden spider, Aranea diademata", "75": "black widow, Latrodectus mactans", "76": "tarantula", "77": "wolf spider, hunting spider", "78": "tick", "79": "centipede", "80": "black grouse", "81": "ptarmigan", "82": "ruffed grouse, partridge, Bonasa umbellus", "83": "prairie chicken, prairie grouse, prairie fowl", "84": "peacock", "85": "quail", "86": "partridge", "87": "African grey, African gray, Psittacus erithacus", "88": "macaw", "89": "sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita", "90": "lorikeet", "91": "coucal", "92": "bee eater", "93": "hornbill", "94": "hummingbird", "95": "jacamar", "96": "toucan", "97": "drake", "98": "red-breasted merganser, Mergus serrator", "99": "goose", "100": "black swan, Cygnus atratus", "101": "tusker", "102": "echidna, spiny anteater, anteater", "103": "platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus", "104": "wallaby, 
brush kangaroo", "105": "koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus", "106": "wombat", "107": "jellyfish", "108": "sea anemone, anemone", "109": "brain coral", "110": "flatworm, platyhelminth", "111": "nematode, nematode worm, roundworm", "112": "conch", "113": "snail", "114": "slug", "115": "sea slug, nudibranch", "116": "chiton, coat-of-mail shell, sea cradle, polyplacophore", "117": "chambered nautilus, pearly nautilus, nautilus", "118": "Dungeness crab, Cancer magister", "119": "rock crab, Cancer irroratus", "120": "fiddler crab", "121": "king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica", "122": "American lobster, Northern lobster, Maine lobster, Homarus americanus", "123": "spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish", "124": "crayfish, crawfish, crawdad, crawdaddy", "125": "hermit crab", "126": "isopod", "127": "white stork, Ciconia ciconia", "128": "black stork, Ciconia nigra", "129": "spoonbill", "130": "flamingo", "131": "little blue heron, Egretta caerulea", "132": "American egret, great white heron, Egretta albus", "133": "bittern", "134": "crane", "135": "limpkin, Aramus pictus", "136": "European gallinule, Porphyrio porphyrio", "137": "American coot, marsh hen, mud hen, water hen, Fulica americana", "138": "bustard", "139": "ruddy turnstone, Arenaria interpres", "140": "red-backed sandpiper, dunlin, Erolia alpina", "141": "redshank, Tringa totanus", "142": "dowitcher", "143": "oystercatcher, oyster catcher", "144": "pelican", "145": "king penguin, Aptenodytes patagonica", "146": "albatross, mollymawk", "147": "grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus", "148": "killer whale, killer, orca, grampus, sea wolf, Orcinus orca", "149": "dugong, Dugong dugon", "150": "sea lion", "151": "Chihuahua", "152": "Japanese spaniel", "153": "Maltese dog, Maltese terrier, Maltese", "154": "Pekinese, Pekingese, Peke", "155": "Shih-Tzu", 
"156": "Blenheim spaniel", "157": "papillon", "158": "toy terrier", "159": "Rhodesian ridgeback", "160": "Afghan hound, Afghan", "161": "basset, basset hound", "162": "beagle", "163": "bloodhound, sleuthhound", "164": "bluetick", "165": "black-and-tan coonhound", "166": "Walker hound, Walker foxhound", "167": "English foxhound", "168": "redbone", "169": "borzoi, Russian wolfhound", "170": "Irish wolfhound", "171": "Italian greyhound", "172": "whippet", "173": "Ibizan hound, Ibizan Podenco", "174": "Norwegian elkhound, elkhound", "175": "otterhound, otter hound", "176": "Saluki, gazelle hound", "177": "Scottish deerhound, deerhound", "178": "Weimaraner", "179": "Staffordshire bullterrier, Staffordshire bull terrier", "180": "American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "181": "Bedlington terrier", "182": "Border terrier", "183": "Kerry blue terrier", "184": "Irish terrier", "185": "Norfolk terrier", "186": "Norwich terrier", "187": "Yorkshire terrier", "188": "wire-haired fox terrier", "189": "Lakeland terrier", "190": "Sealyham terrier, Sealyham", "191": "Airedale, Airedale terrier", "192": "cairn, cairn terrier", "193": "Australian terrier", "194": "Dandie Dinmont, Dandie Dinmont terrier", "195": "Boston bull, Boston terrier", "196": "miniature schnauzer", "197": "giant schnauzer", "198": "standard schnauzer", "199": "Scotch terrier, Scottish terrier, Scottie", "200": "Tibetan terrier, chrysanthemum dog", "201": "silky terrier, Sydney silky", "202": "soft-coated wheaten terrier", "203": "West Highland white terrier", "204": "Lhasa, Lhasa apso", "205": "flat-coated retriever", "206": "curly-coated retriever", "207": "golden retriever", "208": "Labrador retriever", "209": "Chesapeake Bay retriever", "210": "German short-haired pointer", "211": "vizsla, Hungarian pointer", "212": "English setter", "213": "Irish setter, red setter", "214": "Gordon setter", "215": "Brittany spaniel", "216": "clumber, clumber 
spaniel", "217": "English springer, English springer spaniel", "218": "Welsh springer spaniel", "219": "cocker spaniel, English cocker spaniel, cocker", "220": "Sussex spaniel", "221": "Irish water spaniel", "222": "kuvasz", "223": "schipperke", "224": "groenendael", "225": "malinois", "226": "briard", "227": "kelpie", "228": "komondor", "229": "Old English sheepdog, bobtail", "230": "Shetland sheepdog, Shetland sheep dog, Shetland", "231": "collie", "232": "Border collie", "233": "Bouvier des Flandres, Bouviers des Flandres", "234": "Rottweiler", "235": "German shepherd, German shepherd dog, German police dog, alsatian", "236": "Doberman, Doberman pinscher", "237": "miniature pinscher", "238": "Greater Swiss Mountain dog", "239": "Bernese mountain dog", "240": "Appenzeller", "241": "EntleBucher", "242": "boxer", "243": "bull mastiff", "244": "Tibetan mastiff", "245": "French bulldog", "246": "Great Dane", "247": "Saint Bernard, St Bernard", "248": "Eskimo dog, husky", "249": "malamute, malemute, Alaskan malamute", "250": "Siberian husky", "251": "dalmatian, coach dog, carriage dog", "252": "affenpinscher, monkey pinscher, monkey dog", "253": "basenji", "254": "pug, pug-dog", "255": "Leonberg", "256": "Newfoundland, Newfoundland dog", "257": "Great Pyrenees", "258": "Samoyed, Samoyede", "259": "Pomeranian", "260": "chow, chow chow", "261": "keeshond", "262": "Brabancon griffon", "263": "Pembroke, Pembroke Welsh corgi", "264": "Cardigan, Cardigan Welsh corgi", "265": "toy poodle", "266": "miniature poodle", "267": "standard poodle", "268": "Mexican hairless", "269": "timber wolf, grey wolf, gray wolf, Canis lupus", "270": "white wolf, Arctic wolf, Canis lupus tundrarum", "271": "red wolf, maned wolf, Canis rufus, Canis niger", "272": "coyote, prairie wolf, brush wolf, Canis latrans", "273": "dingo, warrigal, warragal, Canis dingo", "274": "dhole, Cuon alpinus", "275": "African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "276": "hyena, hyaena", "277": 
"red fox, Vulpes vulpes", "278": "kit fox, Vulpes macrotis", "279": "Arctic fox, white fox, Alopex lagopus", "280": "grey fox, gray fox, Urocyon cinereoargenteus", "281": "tabby, tabby cat", "282": "tiger cat", "283": "Persian cat", "284": "Siamese cat, Siamese", "285": "Egyptian cat", "286": "cougar, puma, catamount, mountain lion, painter, panther, Felis concolor", "287": "lynx, catamount", "288": "leopard, Panthera pardus", "289": "snow leopard, ounce, Panthera uncia", "290": "jaguar, panther, Panthera onca, Felis onca", "291": "lion, king of beasts, Panthera leo", "292": "tiger, Panthera tigris", "293": "cheetah, chetah, Acinonyx jubatus", "294": "brown bear, bruin, Ursus arctos", "295": "American black bear, black bear, Ursus americanus, Euarctos americanus", "296": "ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus", "297": "sloth bear, Melursus ursinus, Ursus ursinus", "298": "mongoose", "299": "meerkat, mierkat", "300": "tiger beetle", "301": "ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle", "302": "ground beetle, carabid beetle", "303": "long-horned beetle, longicorn, longicorn beetle", "304": "leaf beetle, chrysomelid", "305": "dung beetle", "306": "rhinoceros beetle", "307": "weevil", "308": "fly", "309": "bee", "310": "ant, emmet, pismire", "311": "grasshopper, hopper", "312": "cricket", "313": "walking stick, walkingstick, stick insect", "314": "cockroach, roach", "315": "mantis, mantid", "316": "cicada, cicala", "317": "leafhopper", "318": "lacewing, lacewing fly", "319": "dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk", "320": "damselfly", "321": "admiral", "322": "ringlet, ringlet butterfly", "323": "monarch, monarch butterfly, milkweed butterfly, Danaus plexippus", "324": "cabbage butterfly", "325": "sulphur butterfly, sulfur butterfly", "326": "lycaenid, lycaenid butterfly", "327": "starfish, sea star", "328": "sea urchin", "329": "sea cucumber, 
holothurian", "330": "wood rabbit, cottontail, cottontail rabbit", "331": "hare", "332": "Angora, Angora rabbit", "333": "hamster", "334": "porcupine, hedgehog", "335": "fox squirrel, eastern fox squirrel, Sciurus niger", "336": "marmot", "337": "beaver", "338": "guinea pig, Cavia cobaya", "339": "sorrel", "340": "zebra", "341": "hog, pig, grunter, squealer, Sus scrofa", "342": "wild boar, boar, Sus scrofa", "343": "warthog", "344": "hippopotamus, hippo, river horse, Hippopotamus amphibius", "345": "ox", "346": "water buffalo, water ox, Asiatic buffalo, Bubalus bubalis", "347": "bison", "348": "ram, tup", "349": "bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis", "350": "ibex, Capra ibex", "351": "hartebeest", "352": "impala, Aepyceros melampus", "353": "gazelle", "354": "Arabian camel, dromedary, Camelus dromedarius", "355": "llama", "356": "weasel", "357": "mink", "358": "polecat, fitch, foulmart, foumart, Mustela putorius", "359": "black-footed ferret, ferret, Mustela nigripes", "360": "otter", "361": "skunk, polecat, wood pussy", "362": "badger", "363": "armadillo", "364": "three-toed sloth, ai, Bradypus tridactylus", "365": "orangutan, orang, orangutang, Pongo pygmaeus", "366": "gorilla, Gorilla gorilla", "367": "chimpanzee, chimp, Pan troglodytes", "368": "gibbon, Hylobates lar", "369": "siamang, Hylobates syndactylus, Symphalangus syndactylus", "370": "guenon, guenon monkey", "371": "patas, hussar monkey, Erythrocebus patas", "372": "baboon", "373": "macaque", "374": "langur", "375": "colobus, colobus monkey", "376": "proboscis monkey, Nasalis larvatus", "377": "marmoset", "378": "capuchin, ringtail, Cebus capucinus", "379": "howler monkey, howler", "380": "titi, titi monkey", "381": "spider monkey, Ateles geoffroyi", "382": "squirrel monkey, Saimiri sciureus", "383": "Madagascar cat, ring-tailed lemur, Lemur catta", "384": "indri, indris, Indri indri, Indri brevicaudatus", "385": "Indian elephant, Elephas 
maximus", "386": "African elephant, Loxodonta africana", "387": "lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens", "388": "giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", "389": "barracouta, snoek", "390": "eel", "391": "coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch", "392": "rock beauty, Holocanthus tricolor", "393": "anemone fish", "394": "sturgeon", "395": "gar, garfish, garpike, billfish, Lepisosteus osseus", "396": "lionfish", "397": "puffer, pufferfish, blowfish, globefish", "398": "abacus", "399": "abaya", "400": "academic gown, academic robe, judge's robe", "401": "accordion, piano accordion, squeeze box", "402": "acoustic guitar", "403": "aircraft carrier, carrier, flattop, attack aircraft carrier", "404": "airliner", "405": "airship, dirigible", "406": "altar", "407": "ambulance", "408": "amphibian, amphibious vehicle", "409": "analog clock", "410": "apiary, bee house", "411": "apron", "412": "ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin", "413": "assault rifle, assault gun", "414": "backpack, back pack, knapsack, packsack, rucksack, haversack", "415": "bakery, bakeshop, bakehouse", "416": "balance beam, beam", "417": "balloon", "418": "ballpoint, ballpoint pen, ballpen, Biro", "419": "Band Aid", "420": "banjo", "421": "bannister, banister, balustrade, balusters, handrail", "422": "barbell", "423": "barber chair", "424": "barbershop", "425": "barn", "426": "barometer", "427": "barrel, cask", "428": "barrow, garden cart, lawn cart, wheelbarrow", "429": "baseball", "430": "basketball", "431": "bassinet", "432": "bassoon", "433": "bathing cap, swimming cap", "434": "bath towel", "435": "bathtub, bathing tub, bath, tub", "436": "beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon", "437": "beacon, lighthouse, beacon light, pharos", "438": "beaker", "439": "bearskin, busby, shako", "440": "beer bottle", 
"441": "beer glass", "442": "bell cote, bell cot", "443": "bib", "444": "bicycle-built-for-two, tandem bicycle, tandem", "445": "bikini, two-piece", "446": "binder, ring-binder", "447": "binoculars, field glasses, opera glasses", "448": "birdhouse", "449": "boathouse", "450": "bobsled, bobsleigh, bob", "451": "bolo tie, bolo, bola tie, bola", "452": "bonnet, poke bonnet", "453": "bookcase", "454": "bookshop, bookstore, bookstall", "455": "bottlecap", "456": "bow", "457": "bow tie, bow-tie, bowtie", "458": "brass, memorial tablet, plaque", "459": "brassiere, bra, bandeau", "460": "breakwater, groin, groyne, mole, bulwark, seawall, jetty", "461": "breastplate, aegis, egis", "462": "broom", "463": "bucket, pail", "464": "buckle", "465": "bulletproof vest", "466": "bullet train, bullet", "467": "butcher shop, meat market", "468": "cab, hack, taxi, taxicab", "469": "caldron, cauldron", "470": "candle, taper, wax light", "471": "cannon", "472": "canoe", "473": "can opener, tin opener", "474": "cardigan", "475": "car mirror", "476": "carousel, carrousel, merry-go-round, roundabout, whirligig", "477": "carpenter's kit, tool kit", "478": "carton", "479": "car wheel", "480": "cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM", "481": "cassette", "482": "cassette player", "483": "castle", "484": "catamaran", "485": "CD player", "486": "cello, violoncello", "487": "cellular telephone, cellular phone, cellphone, cell, mobile phone", "488": "chain", "489": "chainlink fence", "490": "chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour", "491": "chain saw, chainsaw", "492": "chest", "493": "chiffonier, commode", "494": "chime, bell, gong", "495": "china cabinet, china closet", "496": "Christmas stocking", "497": "church, church building", "498": "cinema, movie theater, movie theatre, movie house, picture palace", "499": "cleaver, meat cleaver, chopper", "500": "cliff dwelling", 
"501": "cloak", "502": "clog, geta, patten, sabot", "503": "cocktail shaker", "504": "coffee mug", "505": "coffeepot", "506": "coil, spiral, volute, whorl, helix", "507": "combination lock", "508": "computer keyboard, keypad", "509": "confectionery, confectionary, candy store", "510": "container ship, containership, container vessel", "511": "convertible", "512": "corkscrew, bottle screw", "513": "cornet, horn, trumpet, trump", "514": "cowboy boot", "515": "cowboy hat, ten-gallon hat", "516": "cradle", "517": "crane2", "518": "crash helmet", "519": "crate", "520": "crib, cot", "521": "Crock Pot", "522": "croquet ball", "523": "crutch", "524": "cuirass", "525": "dam, dike, dyke", "526": "desk", "527": "desktop computer", "528": "dial telephone, dial phone", "529": "diaper, nappy, napkin", "530": "digital clock", "531": "digital watch", "532": "dining table, board", "533": "dishrag, dishcloth", "534": "dishwasher, dish washer, dishwashing machine", "535": "disk brake, disc brake", "536": "dock, dockage, docking facility", "537": "dogsled, dog sled, dog sleigh", "538": "dome", "539": "doormat, welcome mat", "540": "drilling platform, offshore rig", "541": "drum, membranophone, tympan", "542": "drumstick", "543": "dumbbell", "544": "Dutch oven", "545": "electric fan, blower", "546": "electric guitar", "547": "electric locomotive", "548": "entertainment center", "549": "envelope", "550": "espresso maker", "551": "face powder", "552": "feather boa, boa", "553": "file, file cabinet, filing cabinet", "554": "fireboat", "555": "fire engine, fire truck", "556": "fire screen, fireguard", "557": "flagpole, flagstaff", "558": "flute, transverse flute", "559": "folding chair", "560": "football helmet", "561": "forklift", "562": "fountain", "563": "fountain pen", "564": "four-poster", "565": "freight car", "566": "French horn, horn", "567": "frying pan, frypan, skillet", "568": "fur coat", "569": "garbage truck, dustcart", "570": "gasmask, respirator, gas helmet", "571": "gas 
pump, gasoline pump, petrol pump, island dispenser", "572": "goblet", "573": "go-kart", "574": "golf ball", "575": "golfcart, golf cart", "576": "gondola", "577": "gong, tam-tam", "578": "gown", "579": "grand piano, grand", "580": "greenhouse, nursery, glasshouse", "581": "grille, radiator grille", "582": "grocery store, grocery, food market, market", "583": "guillotine", "584": "hair slide", "585": "hair spray", "586": "half track", "587": "hammer", "588": "hamper", "589": "hand blower, blow dryer, blow drier, hair dryer, hair drier", "590": "hand-held computer, hand-held microcomputer", "591": "handkerchief, hankie, hanky, hankey", "592": "hard disc, hard disk, fixed disk", "593": "harmonica, mouth organ, harp, mouth harp", "594": "harp", "595": "harvester, reaper", "596": "hatchet", "597": "holster", "598": "home theater, home theatre", "599": "honeycomb", "600": "hook, claw", "601": "hoopskirt, crinoline", "602": "horizontal bar, high bar", "603": "horse cart, horse-cart", "604": "hourglass", "605": "iPod", "606": "iron, smoothing iron", "607": "jack-o'-lantern", "608": "jean, blue jean, denim", "609": "jeep, landrover", "610": "jersey, T-shirt, tee shirt", "611": "jigsaw puzzle", "612": "jinrikisha, ricksha, rickshaw", "613": "joystick", "614": "kimono", "615": "knee pad", "616": "knot", "617": "lab coat, laboratory coat", "618": "ladle", "619": "lampshade, lamp shade", "620": "laptop, laptop computer", "621": "lawn mower, mower", "622": "lens cap, lens cover", "623": "letter opener, paper knife, paperknife", "624": "library", "625": "lifeboat", "626": "lighter, light, igniter, ignitor", "627": "limousine, limo", "628": "liner, ocean liner", "629": "lipstick, lip rouge", "630": "Loafer", "631": "lotion", "632": "loudspeaker, speaker, speaker unit, loudspeaker system, speaker system", "633": "loupe, jeweler's loupe", "634": "lumbermill, sawmill", "635": "magnetic compass", "636": "mailbag, postbag", "637": "mailbox, letter box", "638": "maillot", "639": 
"maillot, tank suit", "640": "manhole cover", "641": "maraca", "642": "marimba, xylophone", "643": "mask", "644": "matchstick", "645": "maypole", "646": "maze, labyrinth", "647": "measuring cup", "648": "medicine chest, medicine cabinet", "649": "megalith, megalithic structure", "650": "microphone, mike", "651": "microwave, microwave oven", "652": "military uniform", "653": "milk can", "654": "minibus", "655": "miniskirt, mini", "656": "minivan", "657": "missile", "658": "mitten", "659": "mixing bowl", "660": "mobile home, manufactured home", "661": "Model T", "662": "modem", "663": "monastery", "664": "monitor", "665": "moped", "666": "mortar", "667": "mortarboard", "668": "mosque", "669": "mosquito net", "670": "motor scooter, scooter", "671": "mountain bike, all-terrain bike, off-roader", "672": "mountain tent", "673": "mouse, computer mouse", "674": "mousetrap", "675": "moving van", "676": "muzzle", "677": "nail", "678": "neck brace", "679": "necklace", "680": "nipple", "681": "notebook, notebook computer", "682": "obelisk", "683": "oboe, hautboy, hautbois", "684": "ocarina, sweet potato", "685": "odometer, hodometer, mileometer, milometer", "686": "oil filter", "687": "organ, pipe organ", "688": "oscilloscope, scope, cathode-ray oscilloscope, CRO", "689": "overskirt", "690": "oxcart", "691": "oxygen mask", "692": "packet", "693": "paddle, boat paddle", "694": "paddlewheel, paddle wheel", "695": "padlock", "696": "paintbrush", "697": "pajama, pyjama, pj's, jammies", "698": "palace", "699": "panpipe, pandean pipe, syrinx", "700": "paper towel", "701": "parachute, chute", "702": "parallel bars, bars", "703": "park bench", "704": "parking meter", "705": "passenger car, coach, carriage", "706": "patio, terrace", "707": "pay-phone, pay-station", "708": "pedestal, plinth, footstall", "709": "pencil box, pencil case", "710": "pencil sharpener", "711": "perfume, essence", "712": "Petri dish", "713": "photocopier", "714": "pick, plectrum, plectron", "715": 
"pickelhaube", "716": "picket fence, paling", "717": "pickup, pickup truck", "718": "pier", "719": "piggy bank, penny bank", "720": "pill bottle", "721": "pillow", "722": "ping-pong ball", "723": "pinwheel", "724": "pirate, pirate ship", "725": "pitcher, ewer", "726": "plane, carpenter's plane, woodworking plane", "727": "planetarium", "728": "plastic bag", "729": "plate rack", "730": "plow, plough", "731": "plunger, plumber's helper", "732": "Polaroid camera, Polaroid Land camera", "733": "pole", "734": "police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria", "735": "poncho", "736": "pool table, billiard table, snooker table", "737": "pop bottle, soda bottle", "738": "pot, flowerpot", "739": "potter's wheel", "740": "power drill", "741": "prayer rug, prayer mat", "742": "printer", "743": "prison, prison house", "744": "projectile, missile", "745": "projector", "746": "puck, hockey puck", "747": "punching bag, punch bag, punching ball, punchball", "748": "purse", "749": "quill, quill pen", "750": "quilt, comforter, comfort, puff", "751": "racer, race car, racing car", "752": "racket, racquet", "753": "radiator", "754": "radio, wireless", "755": "radio telescope, radio reflector", "756": "rain barrel", "757": "recreational vehicle, RV, R.V.", "758": "reel", "759": "reflex camera", "760": "refrigerator, icebox", "761": "remote control, remote", "762": "restaurant, eating house, eating place, eatery", "763": "revolver, six-gun, six-shooter", "764": "rifle", "765": "rocking chair, rocker", "766": "rotisserie", "767": "rubber eraser, rubber, pencil eraser", "768": "rugby ball", "769": "rule, ruler", "770": "running shoe", "771": "safe", "772": "safety pin", "773": "saltshaker, salt shaker", "774": "sandal", "775": "sarong", "776": "sax, saxophone", "777": "scabbard", "778": "scale, weighing machine", "779": "school bus", "780": "schooner", "781": "scoreboard", "782": "screen, CRT screen", "783": "screw", "784": "screwdriver", "785": "seat belt, 
seatbelt", "786": "sewing machine", "787": "shield, buckler", "788": "shoe shop, shoe-shop, shoe store", "789": "shoji", "790": "shopping basket", "791": "shopping cart", "792": "shovel", "793": "shower cap", "794": "shower curtain", "795": "ski", "796": "ski mask", "797": "sleeping bag", "798": "slide rule, slipstick", "799": "sliding door", "800": "slot, one-armed bandit", "801": "snorkel", "802": "snowmobile", "803": "snowplow, snowplough", "804": "soap dispenser", "805": "soccer ball", "806": "sock", "807": "solar dish, solar collector, solar furnace", "808": "sombrero", "809": "soup bowl", "810": "space bar", "811": "space heater", "812": "space shuttle", "813": "spatula", "814": "speedboat", "815": "spider web, spider's web", "816": "spindle", "817": "sports car, sport car", "818": "spotlight, spot", "819": "stage", "820": "steam locomotive", "821": "steel arch bridge", "822": "steel drum", "823": "stethoscope", "824": "stole", "825": "stone wall", "826": "stopwatch, stop watch", "827": "stove", "828": "strainer", "829": "streetcar, tram, tramcar, trolley, trolley car", "830": "stretcher", "831": "studio couch, day bed", "832": "stupa, tope", "833": "submarine, pigboat, sub, U-boat", "834": "suit, suit of clothes", "835": "sundial", "836": "sunglass", "837": "sunglasses, dark glasses, shades", "838": "sunscreen, sunblock, sun blocker", "839": "suspension bridge", "840": "swab, swob, mop", "841": "sweatshirt", "842": "swimming trunks, bathing trunks", "843": "swing", "844": "switch, electric switch, electrical switch", "845": "syringe", "846": "table lamp", "847": "tank, army tank, armored combat vehicle, armoured combat vehicle", "848": "tape player", "849": "teapot", "850": "teddy, teddy bear", "851": "television, television system", "852": "tennis ball", "853": "thatch, thatched roof", "854": "theater curtain, theatre curtain", "855": "thimble", "856": "thresher, thrasher, threshing machine", "857": "throne", "858": "tile roof", "859": "toaster", "860": 
"tobacco shop, tobacconist shop, tobacconist", "861": "toilet seat", "862": "torch", "863": "totem pole", "864": "tow truck, tow car, wrecker", "865": "toyshop", "866": "tractor", "867": "trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi", "868": "tray", "869": "trench coat", "870": "tricycle, trike, velocipede", "871": "trimaran", "872": "tripod", "873": "triumphal arch", "874": "trolleybus, trolley coach, trackless trolley", "875": "trombone", "876": "tub, vat", "877": "turnstile", "878": "typewriter keyboard", "879": "umbrella", "880": "unicycle, monocycle", "881": "upright, upright piano", "882": "vacuum, vacuum cleaner", "883": "vase", "884": "vault", "885": "velvet", "886": "vending machine", "887": "vestment", "888": "viaduct", "889": "violin, fiddle", "890": "volleyball", "891": "waffle iron", "892": "wall clock", "893": "wallet, billfold, notecase, pocketbook", "894": "wardrobe, closet, press", "895": "warplane, military plane", "896": "washbasin, handbasin, washbowl, lavabo, wash-hand basin", "897": "washer, automatic washer, washing machine", "898": "water bottle", "899": "water jug", "900": "water tower", "901": "whiskey jug", "902": "whistle", "903": "wig", "904": "window screen", "905": "window shade", "906": "Windsor tie", "907": "wine bottle", "908": "wing", "909": "wok", "910": "wooden spoon", "911": "wool, woolen, woollen", "912": "worm fence, snake fence, snake-rail fence, Virginia fence", "913": "wreck", "914": "yawl", "915": "yurt", "916": "web site, website, internet site, site", "917": "comic book", "918": "crossword puzzle, crossword", "919": "street sign", "920": "traffic light, traffic signal, stoplight", "921": "book jacket, dust cover, dust jacket, dust wrapper", "922": "menu", "923": "plate", "924": "guacamole", "925": "consomme", "926": "hot pot, hotpot", "927": "trifle", "928": "ice cream, icecream", "929": "ice lolly, lolly, lollipop, popsicle", "930": "French loaf", "931": "bagel, beigel", "932": "pretzel", 
"933": "cheeseburger", "934": "hotdog, hot dog, red hot", "935": "mashed potato", "936": "head cabbage", "937": "broccoli", "938": "cauliflower", "939": "zucchini, courgette", "940": "spaghetti squash", "941": "acorn squash", "942": "butternut squash", "943": "cucumber, cuke", "944": "artichoke, globe artichoke", "945": "bell pepper", "946": "cardoon", "947": "mushroom", "948": "Granny Smith", "949": "strawberry", "950": "orange", "951": "lemon", "952": "fig", "953": "pineapple, ananas", "954": "banana", "955": "jackfruit, jak, jack", "956": "custard apple", "957": "pomegranate", "958": "hay", "959": "carbonara", "960": "chocolate sauce, chocolate syrup", "961": "dough", "962": "meat loaf, meatloaf", "963": "pizza, pizza pie", "964": "potpie", "965": "burrito", "966": "red wine", "967": "espresso", "968": "cup", "969": "eggnog", "970": "alp", "971": "bubble", "972": "cliff, drop, drop-off", "973": "coral reef", "974": "geyser", "975": "lakeside, lakeshore", "976": "promontory, headland, head, foreland", "977": "sandbar, sand bar", "978": "seashore, coast, seacoast, sea-coast", "979": "valley, vale", "980": "volcano", "981": "ballplayer, baseball player", "982": "groom, bridegroom", "983": "scuba diver", "984": "rapeseed", "985": "daisy", "986": "yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum", "987": "corn", "988": "acorn", "989": "hip, rose hip, rosehip", "990": "buckeye, horse chestnut, conker", "991": "coral fungus", "992": "agaric", "993": "gyromitra", "994": "stinkhorn, carrion fungus", "995": "earthstar", "996": "hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa", "997": "bolete", "998": "ear, spike, capitulum", "999": "toilet tissue, toilet paper, bathroom tissue"}}}}, {"name": "lexicon", "sequence": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 406246742.0, "num_examples": 3000}], "download_size": 398667087, "dataset_size": 406246742.0}}
|
2023-02-10T18:05:59+00:00
|
42c9798b0e5d580170c11b3b9c615195bafc1c09
|
# Dataset Card for "Europarl-ST-processed-mt-ro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-ro
|
[
"region:us"
] |
2023-02-10T18:08:28+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 139150159, "num_examples": 384704}, {"name": "valid", "num_bytes": 18067165, "num_examples": 48280}, {"name": "test", "num_bytes": 19720811, "num_examples": 53360}], "download_size": 66531208, "dataset_size": 176938135}}
|
2023-02-10T18:08:43+00:00
|
3b1cd700203f4a613a4f306270220f32514be2cd
|
# Dataset Card for MCoNaLa
## Dataset Description
- **Homepage:** https://github.com/zorazrw/multilingual-conala
- **Repository:** https://github.com/zorazrw/multilingual-conala
- **Paper:** https://arxiv.org/pdf/2203.08388.pdf
- **Leaderboard:** https://explainaboard.inspiredco.ai/leaderboards?show_mine=false&sort_dir=desc&sort_field=created_at&dataset=mconala
### Dataset Summary
MCoNaLa is a Multilingual Code/Natural Language Challenge dataset with 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian.
### Languages
Spanish, Japanese, Russian; Python
## Dataset Structure
### How to Use
```python
from datasets import load_dataset
# Spanish subset
load_dataset("neulab/mconala", "es")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 341
})
})
# Japanese subset
load_dataset("neulab/mconala", "ja")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 210
})
})
# Russian subset
load_dataset("neulab/mconala", "ru")
DatasetDict({
test: Dataset({
features: ['question_id', 'intent', 'rewritten_intent', 'snippet'],
num_rows: 345
})
})
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|question_id|int|Stack Overflow post ID of the sample|
|intent|string|Title of the Stack Overflow post, used as the initial NL intent|
|rewritten_intent|string|NL intent rewritten by human annotators|
|snippet|string|Python code solution to the NL intent|
### Data Splits
The dataset contains 341, 210, and 345 samples in Spanish, Japanese, and Russian, respectively.
### Citation Information
```
@article{wang2022mconala,
title={MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages},
  author={Zhiruo Wang and Grace Cuenca and Shuyan Zhou and Frank F. Xu and Graham Neubig},
journal={arXiv preprint arXiv:2203.08388},
year={2022}
}
```
|
neulab/mconala
|
[
"task_categories:text-generation",
"task_categories:translation",
"size_categories:n<1K",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"code generation",
"arxiv:2203.08388",
"region:us"
] |
2023-02-10T18:08:54+00:00
|
{"language": ["es", "ja", "ru"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text-generation", "translation"], "pretty_name": "mconala", "tags": ["code generation"]}
|
2023-02-10T19:01:31+00:00
|
7a19a42b0594f4359985347355253fc8c675443b
|
# Dataset Card for "Europarl-ST-processed-mt-pl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tj-solergibert/Europarl-ST-processed-mt-pl
|
[
"region:us"
] |
2023-02-10T18:17:56+00:00
|
{"dataset_info": {"features": [{"name": "source_text", "dtype": "string"}, {"name": "dest_text", "dtype": "string"}, {"name": "dest_lang", "dtype": {"class_label": {"names": {"0": "de", "1": "en", "2": "es", "3": "fr", "4": "it", "5": "nl", "6": "pl", "7": "pt", "8": "ro"}}}}], "splits": [{"name": "train", "num_bytes": 137540859, "num_examples": 384704}, {"name": "valid", "num_bytes": 17934207, "num_examples": 48280}, {"name": "test", "num_bytes": 19550172, "num_examples": 53360}], "download_size": 65800571, "dataset_size": 175025238}}
|
2023-02-10T18:18:09+00:00
|
5f2872187573d4b52ddec4e1ac04ad1714200f96
|
Use the 25% suffix array to deduplicate the full Oscar, i.e. remove any document that has a span of at least 100 characters overlapping with the 25% chunk we selected in the previous bullet. This is more permissive and leaves us with 136 million documents, or 31% of the original dataset. Also, for reasons whose explanation would probably involve terms like power laws, we still remove most of the most pervasive duplicates, so I'm pretty optimistic about this being useful.
|
datablations/oscar-dedup-expanded
|
[
"region:us"
] |
2023-02-10T18:42:08+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "warc_headers", "struct": [{"name": "warc-record-id", "dtype": "string"}, {"name": "warc-date", "dtype": "string"}, {"name": "content-type", "dtype": "string"}, {"name": "content-length", "dtype": "int32"}, {"name": "warc-type", "dtype": "string"}, {"name": "warc-identified-content-language", "dtype": "string"}, {"name": "warc-refers-to", "dtype": "string"}, {"name": "warc-target-uri", "dtype": "string"}, {"name": "warc-block-digest", "dtype": "string"}]}, {"name": "identification", "struct": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}, {"name": "annotations", "sequence": "string"}, {"name": "line_identifications", "list": [{"name": "label", "dtype": "string"}, {"name": "prob", "dtype": "float32"}]}]}, {"name": "perplexity_score", "dtype": "float64"}, {"name": "text_length", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "included_in_dedup", "dtype": "bool"}, {"name": "cluster", "sequence": "int64"}, {"name": "has_dup_25", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3188540880787, "num_examples": 431992659}], "download_size": 1732364041898, "dataset_size": 3188540880787}}
|
2023-05-10T05:57:52+00:00
|
f485ff8709653f6d8f8fbbe02345d9babc7d9563
|
miladfa7/amazon
|
[
"license:other",
"region:us"
] |
2023-02-10T18:44:50+00:00
|
{"license": "other"}
|
2023-02-10T18:44:50+00:00
|
|
f121903e5515e633045fdde4484c466f1a51d96a
|
cryptonation/nbahistory
|
[
"license:openrail",
"region:us"
] |
2023-02-10T18:49:21+00:00
|
{"license": "openrail"}
|
2023-02-10T18:55:04+00:00
|
|
a550782239f9dc2a915c11d187d65887029f0e2a
|
# Dataset Card for "squad-pt-v1.1"
The squad-v1.1 dataset translated by the [(www.deeplearningbrasil.com.br)](www.deeplearningbrasil.com.br) group. All credits go to the group for the translation and to the [original authors](https://rajpurkar.github.io/SQuAD-explorer/).
|
tgsc/squad-pt-v1.1
|
[
"language:pt",
"region:us"
] |
2023-02-10T19:33:56+00:00
|
{"language": "pt", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 84838259, "num_examples": 87510}, {"name": "validation", "num_bytes": 11150628, "num_examples": 10570}], "download_size": 22898021, "dataset_size": 95988887}}
|
2023-07-13T11:02:03+00:00
|
0f67ba79e08f00187ecad10f049829dfda19ffbd
|
# Dataset Card for "test-model-outputs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
lewtun/test-model-outputs
|
[
"region:us"
] |
2023-02-10T19:42:02+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "outputs", "list": [{"name": "model", "dtype": "string"}, {"name": "outputs", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 982, "num_examples": 1}], "download_size": 5435, "dataset_size": 982}}
|
2023-02-10T19:43:22+00:00
|
419b33d702956b3a5ee8520ce96947673c049652
|
# Dataset Card for "patched_test_p_10_m1_predictions_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
roa7n/patched_test_p_10_m1_predictions_v2
|
[
"region:us"
] |
2023-02-10T19:53:53+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sequence_str", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "m1_preds", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1566287530, "num_examples": 2843834}], "download_size": 138365947, "dataset_size": 1566287530}}
|
2023-02-10T19:54:22+00:00
|
3e67cc276131c4d34aa5c8d1892f8133a0739ffb
|
# Habr dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of posts and comments from [habr.com](https://habr.com/ru/all/), a Russian collaborative blog about IT, computer science and anything related to the Internet.
**Script:** [create_habr.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py)
**Point of Contact:** [Ilya Gusev]([email protected])
**Languages:** Russian, English, some programming code.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/habr', split="train", streaming=True)
for example in dataset:
print(example["text_markdown"])
```
## Data Instances
```
{
"id": 12730,
"language": "ru",
"url": "https://habr.com/ru/post/12730/",
"text_markdown": "...",
"text_html": "...",
"lead_markdown": "...",
"lead_html": "...",
"type": "article",
"labels": [],
"original_author": null,
"original_url": null,
"time_published": 1185962380,
"author": "...",
"title": "Хочешь в университет — сделай презентацию",
"statistics": {
"commentsCount": 23,
"favoritesCount": 1,
"readingCount": 1542,
"score": 7,
"votesCount": 15,
"votesCountPlus": 11,
"votesCountMinus": 4
},
"hubs": [
"itcompanies"
],
"flows": [
"popsci"
],
"tags": [
"PowerPoint",
"презентация",
"абитуриенты",
],
"reading_time": 1,
"format": null,
"complexity": null,
"comments": {
"id": [11653537, 11653541],
"parent_id": [null, 11653537],
"level": [0, 1],
"time_published": [1185963192, 1185967886],
"score": [-1, 0],
"votes": [1, 0],
"message_html": ["...", "..."],
"author": ["...", "..."],
"children": [[11653541], []]
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
The original JSONL is already unflattened.
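For instance, applying the helper to a flattened `comments` field like the one in the instance above (a minimal, self-contained sketch that repeats the helper definition):

```python
def revert_flattening(records):
    # Turn a dict of parallel lists into a list of per-item dicts.
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

comments = {
    "id": [11653537, 11653541],
    "parent_id": [None, 11653537],
    "level": [0, 1],
}
print(revert_flattening(comments))
# [{'id': 11653537, 'parent_id': None, 'level': 0},
#  {'id': 11653541, 'parent_id': 11653537, 'level': 1}]
```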
## Source Data
* The data source is the [Habr](https://habr.com/) website.
* API call example: [post 709430](https://habr.com/kek/v2/articles/709430).
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
|
IlyaGusev/habr
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"language:en",
"region:us"
] |
2023-02-10T20:36:09+00:00
|
{"language": ["ru", "en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "id", "dtype": "uint32"}, {"name": "language", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text_markdown", "dtype": "string"}, {"name": "text_html", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "original_author", "dtype": "string"}, {"name": "original_url", "dtype": "string"}, {"name": "lead_html", "dtype": "string"}, {"name": "lead_markdown", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "time_published", "dtype": "uint64"}, {"name": "statistics", "struct": [{"name": "commentsCount", "dtype": "uint32"}, {"name": "favoritesCount", "dtype": "uint32"}, {"name": "readingCount", "dtype": "uint32"}, {"name": "score", "dtype": "int32"}, {"name": "votesCount", "dtype": "int32"}, {"name": "votesCountPlus", "dtype": "int32"}, {"name": "votesCountMinus", "dtype": "int32"}]}, {"name": "labels", "sequence": "string"}, {"name": "hubs", "sequence": "string"}, {"name": "flows", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "reading_time", "dtype": "uint32"}, {"name": "format", "dtype": "string"}, {"name": "complexity", "dtype": "string"}, {"name": "comments", "sequence": [{"name": "id", "dtype": "uint64"}, {"name": "parent_id", "dtype": "uint64"}, {"name": "level", "dtype": "uint32"}, {"name": "time_published", "dtype": "uint64"}, {"name": "score", "dtype": "int32"}, {"name": "votes", "dtype": "uint32"}, {"name": "message_html", "dtype": "string"}, {"name": "message_markdown", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "children", "sequence": "uint64"}]}], "splits": [{"name": "train", "num_bytes": 19968161329, "num_examples": 302049}], "download_size": 3485570346, "dataset_size": 19968161329}}
|
2023-03-09T23:16:35+00:00
|
1f8e8fd492a1bca73d6999f7ec3fd8366363ba78
|
from https://domains-index.com/
|
breadlicker45/100k-websites
|
[
"region:us"
] |
2023-02-10T22:04:42+00:00
|
{}
|
2023-02-10T22:05:35+00:00
|
fd4ca9902476c8a62edbda901d09ba63eedb830e
|
# Dataset Card for "mediasum-summary-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
karukas/mediasum-summary-matching
|
[
"region:us"
] |
2023-02-11T00:04:28+00:00
|
{"dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4149687650, "num_examples": 443596}, {"name": "validation", "num_bytes": 92028438, "num_examples": 10000}, {"name": "test", "num_bytes": 94033599, "num_examples": 10000}], "download_size": 2438334598, "dataset_size": 4335749687}}
|
2023-02-11T00:05:53+00:00
|
a73c22348d3238d79eb28e8a9769d8bd25ca2f0e
|
# Dataset Card for "VALUE_wikitext2_lexical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_lexical
|
[
"region:us"
] |
2023-02-11T00:08:14+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1220298, "num_examples": 1796}, {"name": "train", "num_bytes": 10702271, "num_examples": 15501}, {"name": "validation", "num_bytes": 1097319, "num_examples": 1604}], "download_size": 7724483, "dataset_size": 13019888}}
|
2023-02-11T00:08:19+00:00
|
6046ed39f5e7d14102388ff5dc4cc94c8826544b
|
# Dataset Card for grade-school-math-instructions
OpenAI's [grade-school-math](https://github.com/openai/grade-school-math) dataset converted into instructions.
## Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
```
|
qwedsacf/grade-school-math-instructions
|
[
"region:us"
] |
2023-02-11T01:32:53+00:00
|
{"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4804916, "num_examples": 8792}], "download_size": 2554896, "dataset_size": 4804916}}
|
2023-02-11T01:59:26+00:00
|
fd6171cef49e0bae2af2b137e996766bcc0a271e
|
# Dataset Card for "VALUE_wikitext2_negative_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liuyanchen1015/VALUE_wikitext2_negative_inversion
|
[
"region:us"
] |
2023-02-11T02:06:52+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 647, "num_examples": 1}, {"name": "train", "num_bytes": 1026, "num_examples": 1}, {"name": "validation", "num_bytes": 631, "num_examples": 1}], "download_size": 18138, "dataset_size": 2304}}
|
2023-02-11T02:06:57+00:00
|
c7bda74048748f55749cd663c3d8d1025a841fd9
|
# Dataset Card for H4 Stack Exchange Preferences Dataset
## Dataset Description
- **Homepage:** https://archive.org/details/stackexchange
- **Repository:** (private for now) https://github.com/huggingface/h4
- **Point of Contact:** Nathan Lambert, [email protected]
- **Size of downloaded dataset:** 22.13 GB
- **Number of instructions:** 10,741,532
### Dataset Summary
This dataset contains questions and answers from the [Stack Exchange Data Dump](https://archive.org/details/stackexchange) for the purpose of **preference model training**.
Importantly, the questions have been filtered to fit the following criteria for preference models (following closely from [Askell et al. 2021](https://arxiv.org/abs/2112.00861)): *have >=2 answers*.
This data could also be used for instruction fine-tuning and language model training.
The questions are grouped with answers that are assigned a score corresponding to the Anthropic paper:
```
score = log2 (1 + upvotes) rounded to the nearest integer, plus 1 if the answer was accepted by the questioner (we assign a score of −1 if the number of upvotes is negative).
```
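That scoring rule can be sketched as a small Python function (our own illustration of the formula above, not code from the processing pipeline):

```python
import math

def pm_score(upvotes: int, accepted: bool) -> int:
    """log2(1 + upvotes) rounded to the nearest integer, plus 1 if the
    answer was accepted; -1 if the number of upvotes is negative."""
    if upvotes < 0:
        return -1
    score = round(math.log2(1 + upvotes))
    return score + (1 if accepted else 0)

print(pm_score(7, True))    # log2(8) = 3, plus 1 for acceptance -> 4
print(pm_score(0, False))   # log2(1) = 0 -> 0
print(pm_score(-2, False))  # negative upvotes -> -1
```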
Some important notes when using this dataset for preference model pretraining (PMP), which can be ignored for other uses:
* the data will likely need to be filtered more due to matching scores.
* see section 4.1 of Askell et al. 2021 for instructions on using each pair of samples twice via the following `binarization` (for better pre-training initialization):
```
Subsequently, we created a binary dataset by applying a ‘binarization’ procedure to the ranked dataset. That
is, for every ranked pair A > B, we transform it into two independent binary comparisons:
GOOD:A > BAD:A
BAD:B > GOOD:B
```
To see all the stackexchanges used in this data, please see [this file](https://huggingface.co/datasets/HuggingFaceH4/pmp-stack-exchange/blob/main/stack_exchanges.json).
Unfortunately, sharing the binarized data directly without metadata violates the license, so we have shared a script for binarization.
### Using the data
Here is a script from our internal tooling used to create a binarized dataset:
```python
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
from argparse import ArgumentParser
from pathlib import Path
import numpy as np
from datasets import Dataset, concatenate_datasets, load_dataset
from h4.data.utils import save_dataset_shards
H4_DIR = Path(__file__).resolve().parents[3]
DATA_DIR = H4_DIR / "data"
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--debug", action="store_true", help="Added print statements / limit data size for debugging")
parser.add_argument(
"--output_dir",
default=f"{DATA_DIR}/pmp-binarized",
type=str,
help="Where to save the processed dataset",
)
parser.add_argument(
"--exchange_name",
type=str,
default=None,
help="Optional argument to specify a specific subsection of the dataset",
)
parser.add_argument(
"--binary_score", type=int, default=8, help="Score assigned to binarized pairs for preference data."
)
parser.add_argument(
"--stream_data", action="store_true", help="Optionally stream data, which can be useful with weaker computers"
)
parser.set_defaults(debug=False, stream_data=False) # default will process full dataset
args = parser.parse_args()
specific_exchange = args.exchange_name
stream_dataset = args.stream_data
binary_score = args.binary_score
if specific_exchange:
data_dir = "data/" + args.exchange_name
else:
data_dir = None
if args.debug:
data_len_limit = 10000
else:
data_len_limit = np.inf
dataset = load_dataset(
"HuggingFaceH4/pmp-stack-exchange",
data_dir=data_dir,
split="train",
streaming=stream_dataset,
)
pmp_data = []
for i, d in enumerate(iter(dataset)):
# check debug limit, quit if in debug mode (don't save)
if i > data_len_limit:
print("Early exit for debug mode!")
print(pmp_data)
break
question = d["question"]
answers = d["answers"]
num_answers = len(answers)
answer_scores = [a["pm_score"] for a in answers]
if len(np.unique(answer_scores)) < 2:
print(f"PM Scores are {answer_scores}, skipping this question {i}")
else:
# Sample 2 unique scores for binarization
dif_scores = False
while not dif_scores:
# print("infinite loop...?")
two_answers = random.sample(answers, 2)
if two_answers[0]["pm_score"] != two_answers[1]["pm_score"]:
dif_scores = True
answer_0 = two_answers[0]
answer_1 = two_answers[1]
text_0 = "Question: " + question + "\n" + "Answer: " + answer_0["text"]
text_1 = "Question: " + question + "\n" + "Answer: " + answer_1["text"]
score_0 = binary_score
score_1 = binary_score
pmp_data.append({"context": text_0, "score": score_0})
pmp_data.append({"context": text_1, "score": score_1})
# Save binarized data
sublist_len = 100000
print(f"Dataset length is {len(pmp_data)}")
# bypass known issue in arrow https://issues.apache.org/jira/browse/ARROW-17137
print(f"Processed dataset length > {sublist_len}, processing to HF dataset in chunks")
chunks = [pmp_data[x : x + sublist_len] for x in range(0, len(pmp_data), sublist_len)]
ds_chunks = [Dataset.from_list(ch) for ch in chunks]
ds = concatenate_datasets(ds_chunks)
save_dataset_shards(ds, args.output_dir, subset="stackexchange", shard_size="100MB")
```
### Languages
This is intended to be English only, though other languages may be present. Some Stack Exchanges that are omitted include:
```
spanish: es.meta.stackoverflow.com, es.stackoverflow.com
japanese: ja.meta.stackoverflow.com, ja.stackoverflow.com
portuguese: pt.stackoverflow.com, pt.meta.stackoverflow.com
russian: ru.stackoverflow, ru.meta.stackoverflow
```
### Licensing Information
License: https://creativecommons.org/licenses/by-sa/4.0/
The cc-by-sa 4.0 licensing, while intentionally permissive, does require attribution:
Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
Specifically the attribution requirements are as follows:
1. Visually display or otherwise indicate the source of the content as coming from the Stack Exchange Network. This requirement is satisfied with a discreet text blurb, or some other unobtrusive but clear visual indication.
2. Ensure that any Internet use of the content includes a hyperlink directly to the original question on the source site on the Network (e.g., http://stackoverflow.com/questions/12345)
3. Visually display or otherwise clearly indicate the author names for every question and answer used
4. Ensure that any Internet use of the content includes a hyperlink for each author name directly back to his or her user profile page on the source site on the Network (e.g., http://stackoverflow.com/users/12345/username), directly to the Stack Exchange domain, in standard HTML (i.e. not through a Tinyurl or other such indirect hyperlink, form of obfuscation or redirection), without any “nofollow” command or any other such means of avoiding detection by search engines, and visible even with JavaScript disabled.
For more information, see the Stack Exchange Terms of Service.
### Citation Information
```
@online{h4stackexchange,
author = {Lambert, Nathan and Tunstall, Lewis and Rajani, Nazneen and Thrush, Tristan},
title = {HuggingFace H4 Stack Exchange Preference Dataset},
year = 2023,
url = {https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences},
}
```
|
HuggingFaceH4/stack-exchange-preferences
|
[
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-sa-4.0",
"RLHF",
"preferences",
"human-feedback",
"Stack Exchange",
"arxiv:2112.00861",
"region:us"
] |
2023-02-11T03:24:28+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["question-answering"], "pretty_name": "H4 Stack Exchange Preferences Dataset", "tags": ["RLHF", "preferences", "human-feedback", "Stack Exchange"], "download_size": 22132072448}
|
2023-03-08T03:37:53+00:00
|