sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
799c189759a6c6eff6cf0840a002181fc54aaa47 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alikanakar/dreambooth-hackathon-images | [
"region:us"
]
| 2023-01-16T20:00:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 13484975.0, "num_examples": 20}], "download_size": 0, "dataset_size": 13484975.0}} | 2023-01-16T20:17:49+00:00 |
834272d7214d21ede9d22f0604e8e2a39e00f31d | MatthewWaller/cifar_stable_diffusion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"region:us"
]
| 2023-01-16T20:38:30+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 136240130, "num_examples": 60000}], "download_size": 137069319, "dataset_size": 136240130}} | 2023-01-16T20:51:00+00:00 |
|
9bffb5fd906d12ed785e463b7aa72ed9fd5ef68b | khazen2/SAR_1st | [
"license:cc0-1.0",
"region:us"
]
| 2023-01-16T20:54:13+00:00 | {"license": "cc0-1.0"} | 2023-01-16T20:56:09+00:00 |
|
030dcb9ec61c436299b1df10d90ae1cbe1d1b401 |
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
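To sanity-check what a record looks like, here is a minimal sketch that inspects the first sample; the `image`/`labels` field names follow the usual image-classification export layout and are assumptions here, so verify them against `ds['train'].features`:
```python
from datasets import load_dataset

ds = load_dataset("keremberke/indoor-scene-classification", name="full")

# Assumed schema for an image-classification export: "image" (a PIL image)
# and "labels" (an integer class id); check ds["train"].features to confirm.
example = ds["train"][0]
print(ds["train"].features)
print(example["image"].size)  # PIL image size as (width, height)
print(ds["train"].features["labels"].int2str(example["labels"]))  # class name
```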
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| keremberke/indoor-scene-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Retail",
"Pest Control",
"Benchmark",
"region:us"
]
| 2023-01-16T20:56:17+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Retail", "Pest Control", "Benchmark"]} | 2023-01-16T21:04:18+00:00 |
a549a284a1fefdc761ad459ee85f50c5ad8138ef |
<div align="center">
<img width="640" alt="keremberke/german-traffic-sign-detection" src="https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['animals', 'construction', 'cycles crossing', 'danger', 'no entry', 'pedestrian crossing', 'school crossing', 'snow', 'stop', 'bend', 'bend left', 'bend right', 'give way', 'go left', 'go left or straight', 'go right', 'go right or straight', 'go straight', 'keep left', 'keep right', 'no overtaking', 'no overtaking -trucks-', 'no traffic both ways', 'no trucks', 'priority at next intersection', 'priority road', 'restriction ends', 'restriction ends -overtaking -trucks--', 'restriction ends -overtaking-', 'restriction ends 80', 'road narrows', 'roundabout', 'slippery road', 'speed limit 100', 'speed limit 120', 'speed limit 20', 'speed limit 30', 'speed limit 50', 'speed limit 60', 'speed limit 70', 'speed limit 80', 'traffic signal', 'uneven road']
```
### Number of Images
```json
{'test': 54, 'valid': 108, 'train': 383}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
example = ds['train'][0]
```
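For a detection dataset the annotations are the interesting part; a hedged sketch below, assuming the roboflow2huggingface export exposes an `objects` field holding parallel `bbox`/`category` lists (verify against `ds['train'].features` first):
```python
from datasets import load_dataset

ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")

# Assumed schema for a detection export: "image" plus an "objects" dict
# of parallel lists such as "bbox" and "category".
example = ds["train"][0]
print(ds["train"].features)
for bbox, category in zip(example["objects"]["bbox"], example["objects"]["category"]):
    print(category, bbox)  # COCO-style [x, y, width, height] boxes
```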
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1](https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:04 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 545 images.
Signs are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/german-traffic-sign-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Self Driving",
"Transportation",
"region:us"
]
| 2023-01-16T21:04:50+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Self Driving", "Transportation"]} | 2023-01-16T21:06:06+00:00 |
9d6cd89e55db7fbc129449387b3da7debcf7b6c4 |
<div align="center">
<img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['building']
```
### Number of Images
```json
{'train': 6764, 'valid': 1934, 'test': 967}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:09 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 9665 images.
Buildings are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/satellite-building-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Aerial",
"Logistics",
"Construction",
"Damage Risk",
"Other",
"region:us"
]
| 2023-01-16T21:09:30+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface", "Aerial", "Logistics", "Construction", "Damage Risk", "Other"]} | 2023-01-18T09:41:34+00:00 |
694c61350faf9a6622586d6cf50f45e1631862dc |
<div align="center">
<img width="640" alt="keremberke/hard-hat-detection" src="https://huggingface.co/datasets/keremberke/hard-hat-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['hardhat', 'no-hardhat']
```
### Number of Images
```json
{'test': 2001, 'train': 13782, 'valid': 3962}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/hard-hat-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2](https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5/dataset/2?ref=roboflow2huggingface)
### Citation
```
@misc{ hard-hats-fhbh5_dataset,
title = { Hard Hats Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \url{ https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 } },
url = { https://universe.roboflow.com/roboflow-universe-projects/hard-hats-fhbh5 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:17 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 19745 images.
Hardhat-ppe are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/hard-hat-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Construction",
"Utilities",
"Manufacturing",
"Logistics",
"Ppe",
"Assembly Line",
"Warehouse",
"Factory",
"Damage Risk",
"region:us"
]
| 2023-01-16T21:22:25+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Construction", "Utilities", "Manufacturing", "Logistics", "Ppe", "Assembly Line", "Warehouse", "Factory", "Construction", "Logistics", "Utilities", "Damage Risk", "Ppe"]} | 2023-01-16T21:39:24+00:00 |
08e0f818471ccb445da08d847a20d3a654e0d50e |
<div align="center">
<img width="640" alt="keremberke/excavator-detector" src="https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['excavators', 'dump truck', 'wheel loader']
```
### Number of Images
```json
{'test': 144, 'train': 2245, 'valid': 267}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/excavator-detector", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3](https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ excavators-cwlh0_dataset,
title = { Excavators Dataset },
type = { Open Source Dataset },
author = { Mohamed Sabek },
howpublished = { \url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT
It includes 2656 images.
Excavators are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/excavator-detector | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"Construction",
"Machinery",
"region:us"
]
| 2023-01-16T21:40:15+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Manufacturing", "Construction", "Machinery"]} | 2023-01-16T21:43:21+00:00 |
d2937ad1fa0bacc0c1e48d6abf1b021a442c256c | BoodBooed/Hitl | [
"license:afl-3.0",
"region:us"
]
| 2023-01-16T22:27:43+00:00 | {"license": "afl-3.0"} | 2023-01-16T22:33:56+00:00 |
|
5c714d8eb8a75d11a4c984ced60c3aa10cc89cb8 | # Dataset Card for "nilc-masked-punctuation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/nilc-masked-punctuation | [
"region:us"
]
| 2023-01-17T00:09:50+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "reference", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 376331, "num_examples": 1236}], "download_size": 228368, "dataset_size": 376331}} | 2023-01-17T00:11:33+00:00 |
9a8114051c0c4015bc8fe02801a047ea7d461fc3 | # Dataset Card for "pmcoa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gabrielaltay/pmcoa | [
"region:us"
]
| 2023-01-17T00:15:57+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "pmid", "dtype": "string"}, {"name": "accession_id", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "last_updated", "dtype": "string"}, {"name": "retracted", "dtype": "string"}, {"name": "citation", "dtype": "string"}, {"name": "decoded_as", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "year", "dtype": "int32"}, {"name": "doi", "dtype": "string"}, {"name": "oa_subset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206274456770, "num_examples": 4935779}, {"name": "validation", "num_bytes": 4046140044, "num_examples": 87794}], "download_size": 111297924087, "dataset_size": 210320596814}} | 2023-01-17T01:13:20+00:00 |
9f634125576e9cc9698d0bfc66dbcacaabc1abe2 | bongsoo/news_talk_ko_en | [
"license:apache-2.0",
"region:us"
]
| 2023-01-17T01:26:02+00:00 | {"license": "apache-2.0"} | 2023-01-17T01:31:55+00:00 |
|
960448f73503112d4226baeb8eb41d3fb5ae2506 |
## Dataset Description
- **Repository:** https://reasonwithpal.com/
- **Paper:** [PaL: Program-Aided Language Model](https://arxiv.org/abs/2211.10435)
### Dataset Summary
This is a harder version of the GSM8K math reasoning dataset (https://huggingface.co/datasets/gsm8k).
We construct this dataset by replacing the numbers in the questions of GSM8K with larger numbers that are less common.
### Supported Tasks and Leaderboards
This dataset is used to evaluate math reasoning.
### Languages
English - Numbers
## Dataset Structure
```python
from datasets import load_dataset

dataset = load_dataset("reasoning-machines/gsm-hard")
# The resulting structure:
# DatasetDict({
#     train: Dataset({
#         features: ['input', 'code', 'target'],
#         num_rows: 1319
#     })
# })
```
### Data Fields
Each example contains:
- input: The question
- code: The corresponding code solution to the question
- target: The answer
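Because each `code` entry is a runnable program (the PaL setup), one can execute it and compare the result with `target`; a minimal sketch, assuming each program defines a `solution()` entry point:
```python
from datasets import load_dataset

ds = load_dataset("reasoning-machines/gsm-hard", split="train")
sample = ds[0]

# Assumption: each `code` entry defines a `solution()` function (PaL-style).
# exec() runs arbitrary code, so only do this with data you trust.
namespace = {}
exec(sample["code"], namespace)
print("predicted:", namespace["solution"](), "| target:", sample["target"])
```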
### Citation Information
```
@article{gao2022pal,
title={PAL: Program-aided Language Models},
author={Gao, Luyu and Madaan, Aman and Zhou, Shuyan and Alon, Uri and Liu, Pengfei and Yang, Yiming and Callan, Jamie and Neubig, Graham},
journal={arXiv preprint arXiv:2211.10435},
year={2022}
}
``` | reasoning-machines/gsm-hard | [
"task_categories:text2text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:gsm8k (https://huggingface.co/datasets/gsm8k)",
"language:code",
"license:mit",
"math_reasoning",
"symbolic_reasoning",
"arxiv:2211.10435",
"region:us"
]
| 2023-01-17T03:05:50+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["gsm8k (https://huggingface.co/datasets/gsm8k)"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "gsm-hard", "tags": ["math_reasoning", "symbolic_reasoning"]} | 2023-01-17T03:21:10+00:00 |
ac99c145f51aa5dc96e72a8a0e028065b73e5088 | arefm/second_experiment_data | [
"license:apache-2.0",
"region:us"
]
| 2023-01-17T03:44:21+00:00 | {"license": "apache-2.0"} | 2023-01-17T03:50:23+00:00 |
|
4c6c8c51d5b175257930879e1354d7c1f88c3a53 |
# Quakeflow_NC
## Introduction
This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/).
Cite the NCEDC and PhaseNet:
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
Acknowledge the NCEDC:
Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
```
Group: / len:16227
|- Group: /nc71111584 len:2
| |-* begin_time = 2020-01-02T07:01:19.620
| |-* depth_km = 3.69
| |-* end_time = 2020-01-02T07:03:19.620
| |-* event_id = nc71111584
| |-* event_time = 2020-01-02T07:01:48.240
| |-* event_time_index = 2862
| |-* latitude = 37.6545
| |-* longitude = -118.8798
| |-* magnitude = -0.15
| |-* magnitude_type = D
| |-* num_stations = 2
| |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
| | | |-* distance_km = 1.9
| | | |-* dt_s = 0.01
| | | |-* elevation_m = 2391.0
| | | |-* emergence_angle = 159.0
| | | |-* event_id = ['nc71111584' 'nc71111584']
| | | |-* latitude = 37.6444
| | | |-* location =
| | | |-* longitude = -118.8968
| | | |-* network = NC
| | | |-* phase_index = [3000 3101]
| | | |-* phase_polarity = ['U' 'N']
| | | |-* phase_remark = ['IP' 'ES']
| | | |-* phase_score = [1 2]
| | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
| | | |-* phase_type = ['P' 'S']
| | | |-* snr = [2.82143 3.055604 1.8412642]
| | | |-* station = MCB
| | | |-* unit = 1e-6m/s
| |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
| | |- (dtype=float32)
| | | |-* azimuth = 233.0
| | | |-* component = ['E' 'N' 'Z']
......
```
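To explore this layout directly, a minimal h5py sketch; the `2020.h5` filename is illustrative, use whichever HDF5 file you downloaded from the dataset repository:
```python
import h5py

# Illustrative filename; point this at any HDF5 file from the dataset.
with h5py.File("2020.h5", "r") as fp:
    event_id = next(iter(fp.keys()))      # e.g. "nc71111584"
    event = fp[event_id]
    print(dict(event.attrs))              # event attributes (depth_km, magnitude, ...)
    for station_name, trace in event.items():
        print(station_name, trace.shape)  # waveform array, e.g. (3, 12000)
        print(dict(trace.attrs))          # station and phase attributes
        break
```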
## How to use
### Requirements
- datasets
- h5py
- fsspec
- torch (for PyTorch)
### Usage
Import the necessary packages:
```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader
from datasets import load_dataset
```
We have 6 configurations for the dataset:
- "station"
- "event"
- "station_train"
- "event_train"
- "station_test"
- "event_test"
"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.
The sample of `station` is a dictionary with the following keys:
- `data`: the waveform with shape `(3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
- `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
- `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
The sample of `event` is a dictionary with the following keys:
- `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
- `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, default feature time length is 512
- `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
- `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
- `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
The default configuration is `station_test`. You can specify the configuration by argument `name`. For example:
```python
# load dataset
# NOTE: streaming (IterableDataset) is difficult to support because of how HDF5 files are read,
# so we recommend loading the dataset directly and converting it to an iterable afterwards.
# The dataset is very large, so the first load may take some time
# to load "station_test" with test split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
# or
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# to load "event" with train split
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
```
#### Usage for `station`
Then you can convert the dataset into a PyTorch-style iterable dataset and view the first sample:
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
# Formatting examples as tensors with the "torch" format is not implemented
# for iterable datasets yet, so we add the conversion manually via map().
# If you use the (non-iterable) dataset directly, quakeflow_nc.with_format("torch") works.
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"
# print the first sample of the iterable dataset
for example in quakeflow_nc:
print("\nIterable test\n")
print(example.keys())
for key in example.keys():
print(key, example[key].shape, example[key].dtype)
break
dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)
for batch in dataloader:
print("\nDataloader test\n")
print(batch.keys())
for key in batch.keys():
print(key, batch[key].shape, batch[key].dtype)
break
```
#### Usage for `event`
Then you can convert the dataset into a PyTorch-style iterable dataset and view the first sample (don't forget to reorder the keys):
```python
quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test", name="event_test")
# for PyTorch DataLoader, we need to divide the dataset into several shards
num_workers=4
quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"
# print the first sample of the iterable dataset
for example in quakeflow_nc:
print("\nIterable test\n")
print(example.keys())
for key in example.keys():
print(key, example[key].shape, example[key].dtype)
break
dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)
for batch in dataloader:
print("\nDataloader test\n")
print(batch.keys())
for key in batch.keys():
print(key, batch[key].shape, batch[key].dtype)
break
``` | AI4EPS/quakeflow_nc | [
"license:mit",
"doi:10.57967/hf/0716",
"region:us"
]
| 2023-01-17T06:40:21+00:00 | {"license": "mit"} | 2024-01-06T21:20:05+00:00 |
72269a262d92a4461a3dc00cb2081783810a5def | # Dataset Card for "beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_Marilyn_Monroe_25000 | [
"region:us"
]
| 2023-01-17T07:34:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 148583825.0, "num_examples": 265}], "download_size": 148582108, "dataset_size": 148583825.0}} | 2023-01-17T07:34:34+00:00 |
5656dbae459cf15b3a112d46bb6b5484cabcd2d2 |
# Dataset Card for DocLayNet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, making it possible to estimate annotation uncertainty and an upper bound on the prediction accuracy achievable with ML models
5. *Pre-defined train-, test-, and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
Each COCO image record is defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
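As a concrete illustration, a short sketch that filters image records by the custom `doc_category` field; the `coco_train.json` filename is an assumption, substitute whichever COCO annotation file you downloaded:
```python
import json

# Illustrative filename; use the COCO annotation file shipped with the split.
with open("coco_train.json") as fp:
    coco = json.load(fp)

# Select pages from a single document category via the custom field.
financial_pages = [img for img in coco["images"]
                   if img["doc_category"] == "financial_reports"]
print(len(financial_pages), "financial-report pages")
print(financial_pages[0]["file_name"], financial_pages[0]["page_no"])
```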
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guideline used for training of the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [[email protected]](mailto:[email protected]).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
| ds4sd/DocLayNet | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
]
| 2023-01-17T07:51:59+00:00 | {"annotations_creators": ["crowdsourced"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection", "image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet", "tags": ["layout-segmentation", "COCO", "document-understanding", "PDF"]} | 2023-01-25T17:01:19+00:00 |
56bc65c3c190cba99284fb1a4e04d4483c3ac7ba | pengGG/kanqilaibucuo | [
"license:openrail",
"region:us"
]
| 2023-01-17T10:00:59+00:00 | {"license": "openrail"} | 2023-01-17T10:00:59+00:00 |
|
4f3d26f6e6fe500cc866c471056265d9c4a5ad5e |
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.66 MB
- **Size of the generated dataset:** 238.01 MB
- **Total amount of disk used:** 293.67 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.26 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 3.93 MB
- **Size of the generated dataset:** 9.92 MB
- **Total amount of disk used:** 13.85 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.19 MB
- **Total amount of disk used:** 0.27 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.16 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: an `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
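A quick way to check these schemas is to load a single configuration and print its features; a minimal sketch using the `boolq` config (newer `datasets` versions may additionally ask for `trust_remote_code=True`):
```python
from datasets import load_dataset

# Each SuperGLUE task is a separate configuration of the same dataset.
boolq = load_dataset("super_glue", "boolq")
print(boolq)                    # train / validation / test splits
print(boolq["train"].features)  # question, passage, idx, label
print(boolq["train"][0])
```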
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each SuperGLUE dataset has its own citation. Please see the source to
get the correct citation for each contained dataset.
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | Xieyiyiyi/ceshi0119 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:natural-language-inference",
"task_ids:word-sense-disambiguation",
"task_ids:coreference-resolution",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"superglue",
"NLU",
"natural language understanding",
"region:us"
]
| 2023-01-17T10:08:24+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification", "token-classification", "question-answering"], "task_ids": ["natural-language-inference", "word-sense-disambiguation", "coreference-resolution", "extractive-qa"], "pretty_name": "SuperGLUE", "tags": ["superglue", "NLU", "natural language understanding"], "dataset_info": [{"config_name": "boolq", "features": [{"name": "question", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 2107997, "num_examples": 3245}, {"name": "train", "num_bytes": 6179206, "num_examples": 9427}, {"name": "validation", "num_bytes": 2118505, "num_examples": 3270}], "download_size": 4118001, "dataset_size": 10405708}, {"config_name": "cb", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "contradiction", "2": "neutral"}}}}], "splits": [{"name": "test", "num_bytes": 93660, "num_examples": 250}, {"name": "train", "num_bytes": 87218, "num_examples": 250}, {"name": "validation", "num_bytes": 21894, "num_examples": 56}], "download_size": 75482, "dataset_size": 202772}, {"config_name": "copa", "features": [{"name": "premise", "dtype": "string"}, {"name": "choice1", "dtype": "string"}, {"name": "choice2", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "choice1", "1": "choice2"}}}}], "splits": [{"name": "test", "num_bytes": 60303, "num_examples": 500}, {"name": "train", "num_bytes": 49599, "num_examples": 400}, {"name": "validation", "num_bytes": 12586, "num_examples": 100}], "download_size": 43986, "dataset_size": 122488}, {"config_name": "multirc", "features": [{"name": "paragraph", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "idx", "struct": [{"name": "paragraph", "dtype": "int32"}, {"name": "question", "dtype": "int32"}, {"name": "answer", "dtype": "int32"}]}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 14996451, "num_examples": 9693}, {"name": "train", "num_bytes": 46213579, "num_examples": 27243}, {"name": "validation", "num_bytes": 7758918, "num_examples": 4848}], "download_size": 1116225, "dataset_size": 68968948}, {"config_name": "record", "features": [{"name": "passage", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "entities", "sequence": "string"}, {"name": "entity_spans", "sequence": [{"name": "text", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "end", "dtype": "int32"}]}, {"name": "answers", "sequence": "string"}, {"name": "idx", "struct": [{"name": "passage", "dtype": "int32"}, {"name": "query", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 179232052, "num_examples": 100730}, {"name": "validation", "num_bytes": 17479084, "num_examples": 10000}, {"name": "test", "num_bytes": 17200575, "num_examples": 10000}], "download_size": 51757880, "dataset_size": 213911711}, 
{"config_name": "rte", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 975799, "num_examples": 3000}, {"name": "train", "num_bytes": 848745, "num_examples": 2490}, {"name": "validation", "num_bytes": 90899, "num_examples": 277}], "download_size": 750920, "dataset_size": 1915443}, {"config_name": "wic", "features": [{"name": "word", "dtype": "string"}, {"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "start1", "dtype": "int32"}, {"name": "start2", "dtype": "int32"}, {"name": "end1", "dtype": "int32"}, {"name": "end2", "dtype": "int32"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 180593, "num_examples": 1400}, {"name": "train", "num_bytes": 665183, "num_examples": 5428}, {"name": "validation", "num_bytes": 82623, "num_examples": 638}], "download_size": 396213, "dataset_size": 928399}, {"config_name": "wsc", "features": [{"name": "text", "dtype": "string"}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}, {"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 31572, "num_examples": 146}, {"name": "train", "num_bytes": 89883, "num_examples": 554}, {"name": "validation", "num_bytes": 21637, "num_examples": 104}], "download_size": 32751, "dataset_size": 143092}, {"config_name": "wsc.fixed", "features": [{"name": "text", "dtype": "string"}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}, {"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "splits": [{"name": "test", "num_bytes": 31568, "num_examples": 146}, {"name": "train", "num_bytes": 89883, "num_examples": 554}, {"name": "validation", "num_bytes": 21637, "num_examples": 104}], "download_size": 32751, "dataset_size": 143088}, {"config_name": "axb", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 238392, "num_examples": 1104}], "download_size": 33950, "dataset_size": 238392}, {"config_name": "axg", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "idx", "dtype": "int32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}], "splits": [{"name": "test", "num_bytes": 53581, "num_examples": 356}], "download_size": 10413, "dataset_size": 53581}]} | 2024-01-29T12:47:23+00:00 |
af6e95118fce8a71f8d7eebf279c403b1b9b8876 | # Dataset Card for "praang-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ihanif/praang-images | [
"region:us"
]
| 2023-01-17T11:27:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 7404618.0, "num_examples": 23}], "download_size": 5551951, "dataset_size": 7404618.0}} | 2023-01-17T11:27:22+00:00 |
7d02c47036a5eddb519c924eb937f3ccaceb5743 |
# Dataset Card for "football-dataset"
Dummy dataset of 6 football players with a caption that can be used to fine-tune any Image Captioning model. | ybelkada/football-dataset | [
"region:us"
]
| 2023-01-17T11:46:21+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2073622.0, "num_examples": 6}], "download_size": 2074835, "dataset_size": 2073622.0}} | 2023-01-17T11:47:41+00:00 |
81d5ce0c103d9fe05879b50949ed41c40b96de69 |

# Dataset Card for CommitPack
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigcode-project/octopack
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> CommitPack is a 4TB dataset of commits scraped from GitHub repositories that are permissively licensed.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigcode-project/octopack).
- **Languages:** 350
- **OctoPack🐙🎒:**
<table>
<tr>
<th>Data</t>
<td><a href=https://huggingface.co/datasets/bigcode/commitpack>CommitPack</a></td>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></t>
<td><a href=https://huggingface.co/datasets/bigcode/commitpackft>CommitPackFT</a></td>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</t>
<td><a href=https://huggingface.co/bigcode/octocoder>OctoCoder</a></td>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th></t>
<td><a href=https://huggingface.co/bigcode/octogeex>OctoGeeX</a></td>
<td>CodeGeeX2 (6B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</t>
<td><a href=https://huggingface.co/datasets/bigcode/humanevalpack>HumanEvalPack</a></td>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example looks as follows:
```json
{
'commit': '0c17311f7fd511f5dae8f8e4acc2dce1a2de3cf5',
'old_file': 'main.py',
'new_file': 'main.py',
'old_contents': "import numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-5, 5, 20)\ny_data = np.random.normal(0.0, 1.0, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n",
'new_contents': "import math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# generate sample data\nx_data = np.linspace(-math.pi, math.pi, 30)\ny_data = np.sin(x_data) + np.random.normal(0.0, 0.1, x_data.size)\n\nplt.plot(x_data, y_data, 'o')\nplt.show()\n\n",
'subject': 'Change to sin() function with noise',
'message': 'Change to sin() function with noise\n',
'lang': 'Python',
'license': 'mit',
'repos': 'MorganR/basic-gaussian-process',
'returncode': 0,
'stderr': ''
}
```
### Data Fields
The data fields are the same among all splits:
- `commit`: unique commit id
- `old_file`: name of the file before the commit
- `new_file`: name of the file after the commit
- `old_contents`: contents of the file before the commit
- `new_contents`: contents of the file after the commit
- `subject`: subject of the commit (this is used for all experiments in the paper)
- `message`: message of the commit (commonly the same as the subject)
- `lang`: programming language
- `license`: license of the repository the code stems from, one of `['mit', 'artistic-2.0', 'isc', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'unknown', 'apache-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-2.1', 'bsd-2-clause']`
- `repos`: name of the repository the code stems from (if multiple, they are comma-separated)
- `returncode`: if applicable, the error code during scraping (0 = no error)
- `stderr`: if applicable, the error that occurred during scraping (empty = no error)
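To see what a commit actually changed, one can diff the stored file contents; a sketch below, where the per-language `"python"` config name is an assumption based on how the dataset is organized, and streaming avoids downloading the full 4TB:
```python
import difflib

from datasets import load_dataset

# Assumption: configurations are named per programming language (e.g. "python").
# Streaming avoids materializing the multi-terabyte dataset locally.
ds = load_dataset("bigcode/commitpack", "python", split="train", streaming=True)
sample = next(iter(ds))

diff = difflib.unified_diff(
    sample["old_contents"].splitlines(),
    sample["new_contents"].splitlines(),
    fromfile=sample["old_file"],
    tofile=sample["new_file"],
    lineterm="",
)
print(sample["subject"])
print("\n".join(diff))
```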
### Data Splits
| Name | Megabytes | % of total | Samples | % of total |
| --- | --- | --- | --- | --- |
| total | 3709175.78 | 100.0% | 57700105 | 100.0% |
| json | 583293.816 | 15.7257% | 3495038 | 6.0572% |
| xml | 279208.676 | 7.5275% | 1923159 | 3.333% |
| text | 270662.596 | 7.2971% | 1389525 | 2.4082% |
| javascript | 262824.844 | 7.0858% | 5401937 | 9.3621% |
| objective-c++ | 239009.3 | 6.4437% | 32227 | 0.0559% |
| python | 234311.564 | 6.3171% | 6189601 | 10.7272% |
| c | 200876.804 | 5.4157% | 2779478 | 4.8171% |
| c++ | 186585.256 | 5.0304% | 2402294 | 4.1634% |
| markdown | 171849.952 | 4.6331% | 7645354 | 13.2502% |
| java | 127103.448 | 3.4267% | 3744377 | 6.4894% |
| html | 105305.284 | 2.839% | 2366841 | 4.102% |
| yaml | 100466.64 | 2.7086% | 2592787 | 4.4936% |
| go | 86444.624 | 2.3306% | 1183612 | 2.0513% |
| csv | 82946.192 | 2.2362% | 79268 | 0.1374% |
| php | 74961.64 | 2.021% | 2555419 | 4.4288% |
| jupyter-notebook | 66854.08 | 1.8024% | 94000 | 0.1629% |
| gettext-catalog | 62296.88 | 1.6795% | 168327 | 0.2917% |
| sql | 56802.764 | 1.5314% | 132772 | 0.2301% |
| unity3d-asset | 39535.008 | 1.0659% | 17867 | 0.031% |
| typescript | 39254.804 | 1.0583% | 572136 | 0.9916% |
| web-ontology-language | 36435.464 | 0.9823% | 7458 | 0.0129% |
| ruby | 35830.74 | 0.966% | 2928702 | 5.0757% |
| c# | 33669.652 | 0.9077% | 923157 | 1.5999% |
| nix | 33547.92 | 0.9045% | 221281 | 0.3835% |
| shell | 25109.952 | 0.677% | 1017977 | 1.7643% |
| perl | 21148.928 | 0.5702% | 374266 | 0.6486% |
| tex | 17471.108 | 0.471% | 89283 | 0.1547% |
| css | 16306.632 | 0.4396% | 548818 | 0.9512% |
| restructuredtext | 15613.888 | 0.421% | 494037 | 0.8562% |
| rust | 15011.296 | 0.4047% | 296214 | 0.5134% |
| groff | 12020.188 | 0.3241% | 32923 | 0.0571% |
| ini | 8375.164 | 0.2258% | 297100 | 0.5149% |
| scala | 8325.96 | 0.2245% | 316064 | 0.5478% |
| coffeescript | 6795.14 | 0.1832% | 292446 | 0.5068% |
| haskell | 6306.12 | 0.17% | 217325 | 0.3766% |
| swift | 5902.716 | 0.1591% | 319289 | 0.5534% |
| lua | 5763.12 | 0.1554% | 139091 | 0.2411% |
| svg | 5645.44 | 0.1522% | 27095 | 0.047% |
| gas | 5585.384 | 0.1506% | 15121 | 0.0262% |
| ocaml | 5355.4 | 0.1444% | 81360 | 0.141% |
| erlang | 5043.32 | 0.136% | 93685 | 0.1624% |
| makefile | 4238.512 | 0.1143% | 343379 | 0.5951% |
| asciidoc | 4138.588 | 0.1116% | 96671 | 0.1675% |
| emacs-lisp | 3988.652 | 0.1075% | 83228 | 0.1442% |
| scss | 3944.936 | 0.1064% | 288190 | 0.4995% |
| clojure | 3523.408 | 0.095% | 158674 | 0.275% |
| org | 3126.22 | 0.0843% | 30198 | 0.0523% |
| common-lisp | 2954.904 | 0.0797% | 74628 | 0.1293% |
| diff | 2586.048 | 0.0697% | 21021 | 0.0364% |
| groovy | 2569.14 | 0.0693% | 110057 | 0.1907% |
| html+erb | 2450.676 | 0.0661% | 225379 | 0.3906% |
| nesc | 2439.564 | 0.0658% | 473 | 0.0008% |
| dart | 2395.796 | 0.0646% | 56873 | 0.0986% |
| powershell | 2289.276 | 0.0617% | 55381 | 0.096% |
| f# | 2289.236 | 0.0617% | 66840 | 0.1158% |
| dm | 2223.144 | 0.0599% | 55584 | 0.0963% |
| kotlin | 2219.248 | 0.0598% | 124266 | 0.2154% |
| pascal | 2194.676 | 0.0592% | 42511 | 0.0737% |
| jsx | 2124.744 | 0.0573% | 139148 | 0.2412% |
| viml | 1948.208 | 0.0525% | 74062 | 0.1284% |
| actionscript | 1844.148 | 0.0497% | 28819 | 0.0499% |
| cython | 1736.588 | 0.0468% | 25927 | 0.0449% |
| turtle | 1698.948 | 0.0458% | 3882 | 0.0067% |
| less | 1616.564 | 0.0436% | 88634 | 0.1536% |
| mathematica | 1475.044 | 0.0398% | 925 | 0.0016% |
| xslt | 1441.456 | 0.0389% | 27956 | 0.0485% |
| scheme | 1249.244 | 0.0337% | 30546 | 0.0529% |
| perl6 | 1223.16 | 0.033% | 12167 | 0.0211% |
| edn | 1186.94 | 0.032% | 2289 | 0.004% |
| fortran | 1178.548 | 0.0318% | 13463 | 0.0233% |
| java-server-pages | 1173.072 | 0.0316% | 53574 | 0.0928% |
| standard-ml | 1133.476 | 0.0306% | 20097 | 0.0348% |
| cmake | 1132.068 | 0.0305% | 58446 | 0.1013% |
| json5 | 1108.2 | 0.0299% | 1827 | 0.0032% |
| vala | 1104.512 | 0.0298% | 14822 | 0.0257% |
| vue | 1093.8 | 0.0295% | 68967 | 0.1195% |
| freemarker | 1032.332 | 0.0278% | 36216 | 0.0628% |
| graphql | 1004.844 | 0.0271% | 2009 | 0.0035% |
| twig | 958.96 | 0.0259% | 39588 | 0.0686% |
| tcl | 869.832 | 0.0235% | 16407 | 0.0284% |
| pod | 859.016 | 0.0232% | 14922 | 0.0259% |
| dockerfile | 849.728 | 0.0229% | 259379 | 0.4495% |
| yacc | 845.704 | 0.0228% | 8230 | 0.0143% |
| postscript | 800.728 | 0.0216% | 903 | 0.0016% |
| racket | 796.64 | 0.0215% | 16615 | 0.0288% |
| eagle | 785.684 | 0.0212% | 2237 | 0.0039% |
| haxe | 772.896 | 0.0208% | 28447 | 0.0493% |
| julia | 752.068 | 0.0203% | 22695 | 0.0393% |
| handlebars | 740.816 | 0.02% | 49842 | 0.0864% |
| smarty | 720.944 | 0.0194% | 41065 | 0.0712% |
| visual-basic | 681.516 | 0.0184% | 10511 | 0.0182% |
| literate-haskell | 673.74 | 0.0182% | 10729 | 0.0186% |
| smalltalk | 665.892 | 0.018% | 11741 | 0.0203% |
| isabelle | 655.82 | 0.0177% | 8359 | 0.0145% |
| nimrod | 652.86 | 0.0176% | 12023 | 0.0208% |
| zig | 621.384 | 0.0168% | 4290 | 0.0074% |
| m4 | 603.584 | 0.0163% | 12465 | 0.0216% |
| max | 603.56 | 0.0163% | 2259 | 0.0039% |
| elixir | 558.116 | 0.015% | 35473 | 0.0615% |
| mako | 543.012 | 0.0146% | 8943 | 0.0155% |
| arduino | 534.176 | 0.0144% | 32350 | 0.0561% |
| jade | 531.4 | 0.0143% | 46993 | 0.0814% |
| haml | 502.012 | 0.0135% | 74792 | 0.1296% |
| elm | 481.968 | 0.013% | 18542 | 0.0321% |
| purebasic | 474.276 | 0.0128% | 36 | 0.0001% |
| coldfusion | 470.78 | 0.0127% | 9263 | 0.0161% |
| lean | 470.032 | 0.0127% | 7507 | 0.013% |
| r | 454.32 | 0.0122% | 12858 | 0.0223% |
| cuda | 437.668 | 0.0118% | 11450 | 0.0198% |
| textile | 425.116 | 0.0115% | 18491 | 0.032% |
| robotframework | 421.612 | 0.0114% | 9211 | 0.016% |
| abap | 409.62 | 0.011% | 1955 | 0.0034% |
| rdoc | 397.028 | 0.0107% | 38760 | 0.0672% |
| llvm | 382.2 | 0.0103% | 10727 | 0.0186% |
| ada | 380.7 | 0.0103% | 13258 | 0.023% |
| batchfile | 372.16 | 0.01% | 43674 | 0.0757% |
| qml | 361.452 | 0.0097% | 19360 | 0.0336% |
| jasmin | 359.82 | 0.0097% | 4782 | 0.0083% |
| assembly | 343.62 | 0.0093% | 8126 | 0.0141% |
| g-code | 334.964 | 0.009% | 3690 | 0.0064% |
| cucumber | 331.38 | 0.0089% | 26677 | 0.0462% |
| html+php | 323.348 | 0.0087% | 18381 | 0.0319% |
| kicad | 321.936 | 0.0087% | 759 | 0.0013% |
| api-blueprint | 317.852 | 0.0086% | 4765 | 0.0083% |
| eiffel | 311.48 | 0.0084% | 373 | 0.0006% |
| toml | 292.676 | 0.0079% | 63517 | 0.1101% |
| modelica | 284.616 | 0.0077% | 2611 | 0.0045% |
| bitbake | 277.576 | 0.0075% | 43239 | 0.0749% |
| lex | 275.96 | 0.0074% | 705 | 0.0012% |
| stylus | 273.056 | 0.0074% | 21967 | 0.0381% |
| protocol-buffer | 254.124 | 0.0069% | 9202 | 0.0159% |
| unknown | 252.228 | 0.0068% | 30570 | 0.053% |
| nit | 244.54 | 0.0066% | 4951 | 0.0086% |
| factor | 241.192 | 0.0065% | 15378 | 0.0267% |
| xs | 239.04 | 0.0064% | 3215 | 0.0056% |
| sass | 230.648 | 0.0062% | 23144 | 0.0401% |
| parrot-internal-representation | 230.196 | 0.0062% | 6231 | 0.0108% |
| html+django | 217.04 | 0.0059% | 10535 | 0.0183% |
| mediawiki | 214.324 | 0.0058% | 10188 | 0.0177% |
| logos | 212.296 | 0.0057% | 1733 | 0.003% |
| genshi | 209.3 | 0.0056% | 956 | 0.0017% |
| coldfusion-cfc | 208.164 | 0.0056% | 4410 | 0.0076% |
| xtend | 179.544 | 0.0048% | 7775 | 0.0135% |
| sqf | 168.656 | 0.0045% | 7778 | 0.0135% |
| vhdl | 155.948 | 0.0042% | 2185 | 0.0038% |
| antlr | 143.548 | 0.0039% | 3651 | 0.0063% |
| systemverilog | 140.192 | 0.0038% | 3944 | 0.0068% |
| hcl | 136.752 | 0.0037% | 13379 | 0.0232% |
| asp | 136.104 | 0.0037% | 4286 | 0.0074% |
| nsis | 129.124 | 0.0035% | 4048 | 0.007% |
| inform-7 | 120.188 | 0.0032% | 184 | 0.0003% |
| slim | 119.036 | 0.0032% | 18726 | 0.0325% |
| groovy-server-pages | 117.368 | 0.0032% | 6695 | 0.0116% |
| ceylon | 116.144 | 0.0031% | 7256 | 0.0126% |
| fish | 111.28 | 0.003% | 15351 | 0.0266% |
| processing | 108.58 | 0.0029% | 5912 | 0.0102% |
| component-pascal | 105.5 | 0.0028% | 43 | 0.0001% |
| lasso | 104.168 | 0.0028% | 67 | 0.0001% |
| glsl | 99.488 | 0.0027% | 9478 | 0.0164% |
| saltstack | 98.196 | 0.0026% | 12314 | 0.0213% |
| xbase | 94.424 | 0.0025% | 1670 | 0.0029% |
| autohotkey | 94.22 | 0.0025% | 1452 | 0.0025% |
| liquid | 93.792 | 0.0025% | 2651 | 0.0046% |
| purescript | 92.412 | 0.0025% | 5024 | 0.0087% |
| agda | 92.06 | 0.0025% | 4956 | 0.0086% |
| inno-setup | 91.36 | 0.0025% | 3014 | 0.0052% |
| oz | 90.476 | 0.0024% | 1551 | 0.0027% |
| chapel | 89.62 | 0.0024% | 26447 | 0.0458% |
| arc | 87.212 | 0.0024% | 758 | 0.0013% |
| opencl | 86.432 | 0.0023% | 2489 | 0.0043% |
| graphviz-dot | 85.804 | 0.0023% | 1525 | 0.0026% |
| pawn | 85.424 | 0.0023% | 580 | 0.001% |
| jsoniq | 75.152 | 0.002% | 1343 | 0.0023% |
| bluespec | 72.38 | 0.002% | 2500 | 0.0043% |
| smali | 71.38 | 0.0019% | 174 | 0.0003% |
| krl | 69.868 | 0.0019% | 1879 | 0.0033% |
| maple | 68.284 | 0.0018% | 1311 | 0.0023% |
| unrealscript | 67.668 | 0.0018% | 585 | 0.001% |
| ooc | 63.188 | 0.0017% | 3416 | 0.0059% |
| pure-data | 62.624 | 0.0017% | 603 | 0.001% |
| xquery | 61.956 | 0.0017% | 2237 | 0.0039% |
| digital-command-language | 59.644 | 0.0016% | 833 | 0.0014% |
| moonscript | 59.208 | 0.0016% | 1951 | 0.0034% |
| awk | 57.176 | 0.0015% | 2206 | 0.0038% |
| pike | 52.872 | 0.0014% | 1262 | 0.0022% |
| livescript | 51.228 | 0.0014% | 5194 | 0.009% |
| solidity | 50.856 | 0.0014% | 3689 | 0.0064% |
| monkey | 48.256 | 0.0013% | 1367 | 0.0024% |
| jsonld | 48.012 | 0.0013% | 462 | 0.0008% |
| zephir | 42.684 | 0.0012% | 1265 | 0.0022% |
| crystal | 41.924 | 0.0011% | 4217 | 0.0073% |
| rhtml | 41.02 | 0.0011% | 4551 | 0.0079% |
| stata | 40.684 | 0.0011% | 1344 | 0.0023% |
| idris | 39.896 | 0.0011% | 3025 | 0.0052% |
| raml | 39.388 | 0.0011% | 948 | 0.0016% |
| openscad | 37.732 | 0.001% | 2178 | 0.0038% |
| red | 35.26 | 0.001% | 1108 | 0.0019% |
| c2hs-haskell | 34.472 | 0.0009% | 1021 | 0.0018% |
| cycript | 33.96 | 0.0009% | 197 | 0.0003% |
| applescript | 33.512 | 0.0009% | 1304 | 0.0023% |
| mupad | 32.488 | 0.0009% | 178 | 0.0003% |
| literate-agda | 31.384 | 0.0008% | 567 | 0.001% |
| boo | 31.172 | 0.0008% | 26289 | 0.0456% |
| sourcepawn | 29.528 | 0.0008% | 717 | 0.0012% |
| qmake | 29.508 | 0.0008% | 3632 | 0.0063% |
| ragel-in-ruby-host | 28.296 | 0.0008% | 888 | 0.0015% |
| io | 27.952 | 0.0008% | 1247 | 0.0022% |
| desktop | 27.648 | 0.0007% | 5021 | 0.0087% |
| propeller-spin | 26.772 | 0.0007% | 625 | 0.0011% |
| thrift | 26.748 | 0.0007% | 1007 | 0.0017% |
| volt | 25.052 | 0.0007% | 1660 | 0.0029% |
| xproc | 24.212 | 0.0007% | 914 | 0.0016% |
| igor-pro | 23.748 | 0.0006% | 388 | 0.0007% |
| lolcode | 23.74 | 0.0006% | 24861 | 0.0431% |
| html+eex | 21.412 | 0.0006% | 2100 | 0.0036% |
| logtalk | 20.428 | 0.0006% | 1035 | 0.0018% |
| mirah | 20.104 | 0.0005% | 706 | 0.0012% |
| gnuplot | 19.676 | 0.0005% | 889 | 0.0015% |
| literate-coffeescript | 19.016 | 0.0005% | 1041 | 0.0018% |
| jflex | 18.608 | 0.0005% | 555 | 0.001% |
| emberscript | 18.392 | 0.0005% | 1024 | 0.0018% |
| cobol | 17.0 | 0.0005% | 24953 | 0.0432% |
| yang | 16.94 | 0.0005% | 597 | 0.001% |
| rebol | 16.468 | 0.0004% | 239 | 0.0004% |
| linker-script | 16.084 | 0.0004% | 1604 | 0.0028% |
| cartocss | 15.916 | 0.0004% | 555 | 0.001% |
| urweb | 13.068 | 0.0004% | 304 | 0.0005% |
| rmarkdown | 13.032 | 0.0004% | 750 | 0.0013% |
| darcs-patch | 13.008 | 0.0004% | 80 | 0.0001% |
| csound | 12.852 | 0.0003% | 229 | 0.0004% |
| squirrel | 12.844 | 0.0003% | 531 | 0.0009% |
| apl | 12.56 | 0.0003% | 586 | 0.001% |
| hlsl | 12.168 | 0.0003% | 1529 | 0.0026% |
| latte | 11.888 | 0.0003% | 1380 | 0.0024% |
| pony | 11.836 | 0.0003% | 624 | 0.0011% |
| ioke | 10.86 | 0.0003% | 373 | 0.0006% |
| hy | 10.512 | 0.0003% | 879 | 0.0015% |
| uno | 10.356 | 0.0003% | 628 | 0.0011% |
| pan | 10.336 | 0.0003% | 637 | 0.0011% |
| xojo | 10.308 | 0.0003% | 642 | 0.0011% |
| papyrus | 10.256 | 0.0003% | 130 | 0.0002% |
| stan | 10.252 | 0.0003% | 540 | 0.0009% |
| slash | 9.904 | 0.0003% | 640 | 0.0011% |
| supercollider | 9.796 | 0.0003% | 318 | 0.0006% |
| vcl | 9.456 | 0.0003% | 747 | 0.0013% |
| smt | 9.032 | 0.0002% | 117 | 0.0002% |
| glyph | 8.948 | 0.0002% | 7 | 0.0% |
| wisp | 8.736 | 0.0002% | 262 | 0.0005% |
| renpy | 8.3 | 0.0002% | 421 | 0.0007% |
| clips | 7.728 | 0.0002% | 450 | 0.0008% |
| dns-zone | 7.56 | 0.0002% | 54 | 0.0001% |
| sas | 7.536 | 0.0002% | 269 | 0.0005% |
| rouge | 7.196 | 0.0002% | 396 | 0.0007% |
| ec | 7.032 | 0.0002% | 94 | 0.0002% |
| dylan | 6.82 | 0.0002% | 280 | 0.0005% |
| tcsh | 6.524 | 0.0002% | 748 | 0.0013% |
| aspectj | 6.332 | 0.0002% | 451 | 0.0008% |
| netlogo | 6.304 | 0.0002% | 140 | 0.0002% |
| gap | 6.096 | 0.0002% | 46 | 0.0001% |
| fancy | 5.952 | 0.0002% | 675 | 0.0012% |
| coq | 5.744 | 0.0002% | 80 | 0.0001% |
| click | 5.74 | 0.0002% | 9 | 0.0% |
| capn-proto | 5.644 | 0.0002% | 330 | 0.0006% |
| flux | 5.572 | 0.0002% | 47 | 0.0001% |
| forth | 5.512 | 0.0001% | 265 | 0.0005% |
| ats | 5.424 | 0.0001% | 383 | 0.0007% |
| netlinx | 5.172 | 0.0001% | 144 | 0.0002% |
| clean | 5.068 | 0.0001% | 171 | 0.0003% |
| parrot-assembly | 4.664 | 0.0001% | 227 | 0.0004% |
| alloy | 4.644 | 0.0001% | 203 | 0.0004% |
| lfe | 4.576 | 0.0001% | 287 | 0.0005% |
| gdscript | 4.488 | 0.0001% | 460 | 0.0008% |
| augeas | 4.444 | 0.0001% | 395 | 0.0007% |
| sparql | 4.404 | 0.0001% | 1036 | 0.0018% |
| lilypond | 4.308 | 0.0001% | 265 | 0.0005% |
| scilab | 4.088 | 0.0001% | 375 | 0.0006% |
| autoit | 4.06 | 0.0001% | 279 | 0.0005% |
| myghty | 3.864 | 0.0001% | 105 | 0.0002% |
| blitzmax | 3.74 | 0.0001% | 220 | 0.0004% |
| creole | 3.416 | 0.0001% | 337 | 0.0006% |
| harbour | 3.336 | 0.0001% | 107 | 0.0002% |
| piglatin | 3.168 | 0.0001% | 513 | 0.0009% |
| opa | 3.164 | 0.0001% | 211 | 0.0004% |
| sage | 3.032 | 0.0001% | 414 | 0.0007% |
| ston | 2.848 | 0.0001% | 414 | 0.0007% |
| maxscript | 2.8 | 0.0001% | 47 | 0.0001% |
| lsl | 2.68 | 0.0001% | 74 | 0.0001% |
| gentoo-ebuild | 2.576 | 0.0001% | 601 | 0.001% |
| nu | 2.38 | 0.0001% | 170 | 0.0003% |
| bro | 2.34 | 0.0001% | 333 | 0.0006% |
| xc | 2.02 | 0.0001% | 88 | 0.0002% |
| j | 1.808 | 0.0% | 142 | 0.0002% |
| metal | 1.724 | 0.0% | 151 | 0.0003% |
| module-management-system | 1.544 | 0.0% | 91 | 0.0002% |
| webidl | 1.508 | 0.0% | 96 | 0.0002% |
| tea | 1.468 | 0.0% | 29 | 0.0001% |
| redcode | 1.272 | 0.0% | 149 | 0.0003% |
| shen | 1.2 | 0.0% | 71 | 0.0001% |
| pov-ray-sdl | 1.136 | 0.0% | 104 | 0.0002% |
| x10 | 1.008 | 0.0% | 33 | 0.0001% |
| brainfuck | 0.964 | 0.0% | 167 | 0.0003% |
| ninja | 0.952 | 0.0% | 187 | 0.0003% |
| golo | 0.896 | 0.0% | 115 | 0.0002% |
| webassembly | 0.86 | 0.0% | 83 | 0.0001% |
| self | 0.824 | 0.0% | 15 | 0.0% |
| labview | 0.808 | 0.0% | 61 | 0.0001% |
| octave | 0.804 | 0.0% | 12 | 0.0% |
| pogoscript | 0.804 | 0.0% | 74 | 0.0001% |
| d | 0.796 | 0.0% | 20 | 0.0% |
| http | 0.736 | 0.0% | 140 | 0.0002% |
| ecl | 0.664 | 0.0% | 48 | 0.0001% |
| chuck | 0.584 | 0.0% | 99 | 0.0002% |
| gosu | 0.524 | 0.0% | 60 | 0.0001% |
| parrot | 0.52 | 0.0% | 17 | 0.0% |
| opal | 0.472 | 0.0% | 69 | 0.0001% |
| objective-j | 0.456 | 0.0% | 37 | 0.0001% |
| kit | 0.412 | 0.0% | 48 | 0.0001% |
| gams | 0.376 | 0.0% | 18 | 0.0% |
| prolog | 0.276 | 0.0% | 35 | 0.0001% |
| clarion | 0.268 | 0.0% | 13 | 0.0% |
| mask | 0.252 | 0.0% | 37 | 0.0001% |
| brightscript | 0.244 | 0.0% | 28 | 0.0% |
| scaml | 0.184 | 0.0% | 31 | 0.0001% |
| matlab | 0.164 | 0.0% | 29 | 0.0001% |
| idl | 0.148 | 0.0% | 1 | 0.0% |
| ags-script | 0.124 | 0.0% | 31 | 0.0001% |
| lookml | 0.12 | 0.0% | 10 | 0.0% |
| apacheconf | 0.108 | 0.0% | 59 | 0.0001% |
| oxygene | 0.104 | 0.0% | 9 | 0.0% |
| txl | 0.096 | 0.0% | 3 | 0.0% |
| grammatical-framework | 0.088 | 0.0% | 39 | 0.0001% |
| renderscript | 0.064 | 0.0% | 54 | 0.0001% |
| mtml | 0.052 | 0.0% | 13 | 0.0% |
| unified-parallel-c | 0.052 | 0.0% | 6 | 0.0% |
| dogescript | 0.04 | 0.0% | 10 | 0.0% |
| gentoo-eclass | 0.04 | 0.0% | 6 | 0.0% |
| zimpl | 0.04 | 0.0% | 7 | 0.0% |
| irc-log | 0.036 | 0.0% | 9 | 0.0% |
| fantom | 0.028 | 0.0% | 11 | 0.0% |
| numpy | 0.028 | 0.0% | 1 | 0.0% |
| cirru | 0.024 | 0.0% | 4 | 0.0% |
| xpages | 0.024 | 0.0% | 7 | 0.0% |
| nginx | 0.02 | 0.0% | 6 | 0.0% |
| objdump | 0.02 | 0.0% | 1 | 0.0% |
| python-traceback | 0.02 | 0.0% | 10 | 0.0% |
| realbasic | 0.012 | 0.0% | 1 | 0.0% |
| befunge | 0.008 | 0.0% | 2 | 0.0% |
| bison | 0.008 | 0.0% | 1 | 0.0% |
| m | 0.008 | 0.0% | 1 | 0.0% |
| omgrofl | 0.008 | 0.0% | 1 | 0.0% |
## Additional Information
### Licensing Information
Each sample comes from a code repository with a permissive license. The license is provided by the `license` field for each sample.
### Citation Information
```bibtex
@article{muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre},
journal={arXiv preprint arXiv:2308.07124},
year={2023}
}
```
| bigcode/commitpack | [
"language:code",
"license:mit",
"arxiv:2308.07124",
"region:us"
]
| 2023-01-17T11:53:28+00:00 | {"language": ["code"], "license": "mit", "pretty_name": "CommitPack"} | 2023-08-20T06:13:13+00:00 |
62dd2fa378030288c44443b85daa305a6829cb9f | ChristophSchuhmann/Imagenet-1k-SD-1.4 | [
"license:apache-2.0",
"region:us"
]
| 2023-01-17T12:31:41+00:00 | {"license": "apache-2.0"} | 2023-01-28T12:05:26+00:00 |
|
16d2d61d2e3989a492ce1bc2aa74d541f3b5f0f6 |
# Dataset Card for LIFD Seismic Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://github.com/cemac/LIFD_ML_Datasets)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
Seismic waveform data in SAC format, downloaded from IRIS services across several US seismic networks (see [Dataset Creation](#dataset-creation)).
### Supported Tasks and Leaderboards
*coming soon - Kaggle links?*
### Data Fields
SAC files
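As a minimal sketch, a single SAC waveform can be read with [ObsPy](https://docs.obspy.org/) (assuming ObsPy is installed; the filename `example.sac` is illustrative only):

```python
from obspy import read

# Read a SAC waveform file into a Stream of Traces
stream = read("example.sac")
trace = stream[0]

print(trace.stats)       # station metadata: network, channel, sampling rate, ...
print(trace.data[:10])   # raw waveform samples as a NumPy array
```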
## Dataset Creation
All seismic data were downloaded through the IRIS Wilber 3 system (https://ds.iris.edu/wilber3/) or IRIS Web Services (https://service.iris.edu/), including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN, Albuquerque, 1990); (4) the IU (GSN; Albuquerque, 1988).
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
| cemachelen/LIFD_Seismic_Data | [
"task_categories:feature-extraction",
"task_categories:image-to-image",
"task_categories:time-series-forecasting",
"task_categories:object-detection",
"task_categories:unconditional-image-generation",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-17T12:59:25+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["feature-extraction", "image-to-image", "time-series-forecasting", "object-detection", "unconditional-image-generation"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "LIFD Seismic Data", "tags": []} | 2023-01-19T14:32:45+00:00 |
83d060a6e2eb69b6c89369676ef3a88bcb23a4ff | the api I used to get the Calories may be messed up. | breadlicker45/Calorie-dataset | [
"license:other",
"region:us"
]
| 2023-01-17T13:15:22+00:00 | {"license": "other"} | 2023-02-10T22:28:47+00:00 |
04669dcb51c15513cdc808ff7920b25be05781d1 |
# Ekman Taxonomy of the KOTE (Korean Online That-gul Emotions) Dataset
I mapped the 44 emotion types in the KOTE dataset to the 7 categories of the Ekman taxonomy (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion).
For the mapping, I referred to the clustering results in the KOTE paper (https://arxiv.org/pdf/2205.05300.pdf).
The distance between each KOTE emotion and each basic category (Disgust, Anger, Fear, Sadness, Surprise, Joy, + No Emotion) was calculated, and each emotion was mapped to the nearest basic emotion.
# Emotion Grouping
- Disgust: fed up, shock, disgust, contempt
- Anger: anger, irritation, dissatisfaction, preposterous
- Fear: pathetic, distrust, disappointment, embarrassment, shame, guilt, gessepany, fear, anxiety
- Sadness: compassion, sadness, sorrow, despair, exhaustion, laziness, reluctant, boredom
- No Emotion: no emotion, arrogance, resolute
- Surprise: realization, surprise, respect, interest
- Joy: expectancy, welcome, care, attracted, excitement, joy, happiness, admiration, pride, gratitude, relief, comfort
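To make the mapping concrete, here is an illustrative sketch (the dictionary simply restates the grouping above; label spellings follow the list as given):

```python
# Map each KOTE emotion name to its nearest Ekman-style category,
# following the grouping listed above.
EKMAN_MAP = {
    **dict.fromkeys(["fed up", "shock", "disgust", "contempt"], "Disgust"),
    **dict.fromkeys(["anger", "irritation", "dissatisfaction", "preposterous"], "Anger"),
    **dict.fromkeys(["pathetic", "distrust", "disappointment", "embarrassment",
                     "shame", "guilt", "gessepany", "fear", "anxiety"], "Fear"),
    **dict.fromkeys(["compassion", "sadness", "sorrow", "despair",
                     "exhaustion", "laziness", "reluctant", "boredom"], "Sadness"),
    **dict.fromkeys(["no emotion", "arrogance", "resolute"], "No Emotion"),
    **dict.fromkeys(["realization", "surprise", "respect", "interest"], "Surprise"),
    **dict.fromkeys(["expectancy", "welcome", "care", "attracted", "excitement", "joy",
                     "happiness", "admiration", "pride", "gratitude", "relief", "comfort"], "Joy"),
}

def to_ekman(kote_labels):
    """Map a list of KOTE emotion names to their Ekman categories."""
    return [EKMAN_MAP[label] for label in kote_labels]
```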
Annotations creators: [KOTE](https://github.com/searle-j/KOTE) · Language: Korean · License: MIT
| kjhkjh95/kote_ekman | [
"arxiv:2205.05300",
"region:us"
]
| 2023-01-17T13:59:02+00:00 | {} | 2023-01-17T15:18:28+00:00 |
412e816a570022fc6c1e22a3c8d5e15639b0246b | awacke1/NPI-Providers-And-Facilities-By-Taxonomy | [
"size_categories:100M<n<1B",
"language:en",
"license:mit",
"npi",
"provider",
"health",
"medical",
"behavioral",
"mental",
"doctors",
"biomed",
"region:us"
]
| 2023-01-17T14:40:52+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100M<n<1B"], "tags": ["npi", "provider", "health", "medical", "behavioral", "mental", "doctors", "biomed"]} | 2023-01-18T14:26:32+00:00 |
|
3e786a1f95948505a9cdd19172822822e15f6fbf | # Dataset Card for "kpe_long_docs_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | RobertoMCA97/kpe_long_docs_test | [
"region:us"
]
| 2023-01-17T15:01:18+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document", "sequence": "string"}, {"name": "doc_bio_tags", "sequence": "string"}], "splits": [{"name": "semeval2010_test", "num_bytes": 11151877, "num_examples": 100}, {"name": "nus", "num_bytes": 23814618, "num_examples": 211}, {"name": "duc2001", "num_bytes": 3523199, "num_examples": 308}, {"name": "ldkp3k_test", "num_bytes": 285969940, "num_examples": 3413}], "download_size": 77767836, "dataset_size": 324459634}} | 2023-01-17T15:02:41+00:00 |
373906f601c0b9b701a460d8231b9881dd01c0c6 | # AutoTrain Dataset for project: attempt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project attempt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<800x1000 RGB PIL image>",
"target": 13
},
{
"image": "<254x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 277 |
| valid | 80 |
| AdamOswald1/autotrain-data-attempt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T15:12:55+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T15:21:15+00:00 |
fcadb7ed3488a139f6cc7ef204423811678f6744 |
# Dataset Card for FaceMask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Repository:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
Face mask dataset with images of people wearing masks correctly, wearing them incorrectly, and not wearing masks.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a face image, the goal of this task is to predict whether a mask is worn correctly, worn incorrectly, or not worn at all.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"mask_weared_incorrect": 0,
"with_mask": 1,
"without_mask": 2,
}
```
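For example, the integer labels can be converted back to class names with the `datasets` library (a minimal sketch):

```python
from datasets import load_dataset

ds = load_dataset("poolrf2001/FaceMask", split="train")
example = ds[0]

# Convert the integer label back to its class name
label_name = ds.features["labels"].int2str(example["labels"])
print(label_name)  # e.g. "with_mask"
```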
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1500 |180 |180 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {facemaskdata,
author="Pool",
title="FaceMask dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
}
```
### Contributions
| poolrf2001/FaceMask | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
]
| 2023-01-17T16:37:30+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "FaceMask", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "mask_weared_incorrect", "1": "with_mask", "2": "without_mask"}}}}], "splits": [{"name": "train", "num_bytes": 38806014, "num_examples": 1500}, {"name": "validation", "num_bytes": 4758962, "num_examples": 180}, {"name": "test", "num_bytes": 4693735, "num_examples": 180}], "download_size": 48258711, "dataset_size": 49140913}} | 2023-01-17T22:58:52+00:00 |
f79ac28deed233b642a05c14820e8b6fbe6a1d8f | # AutoTrain Dataset for project: alt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project alt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 243 |
| valid | 243 |
| AdamOswald1/autotrain-data-alt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:09:01+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:12:46+00:00 |
de9747e81dd1af03d17291798431b56423ab1db4 |
## Dataset Description
- **Homepage:** [Face Mask Detection Dataset](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
## Dataset Summary
A dataset from [Kaggle](https://www.kaggle.com/datasets/vijaykumar1799/face-mask-detection). Origin: https://dphi.tech/challenges/data-sprint-76-human-activity-recognition/233/data
### About Files
- Train - contains all the images that are to be used for training your model. In this folder you will find 15 folders, namely 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting' and 'using_laptop', which contain the images of the respective human activities.
- Test - contains 5400 images of Human Activities. For these images you are required to make predictions as the respective class names: 'calling', 'clapping', 'cycling', 'dancing', 'drinking', 'eating', 'fighting', 'hugging', 'laughing', 'listeningtomusic', 'running', 'sitting', 'sleeping', 'texting' and 'using_laptop'.
- Testing_set.csv - this is the order of the predictions for each image that is to be submitted on the platform. Make sure the predictions you download are with their image's filename in the same order as given in this file.
- sample_submission: This is a csv file that contains the sample submission for the data sprint.
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label. All `test` data is labeled 0.
### Class Label Mappings:
```
{
'mask_weared_incorrect': 0,
'with_mask': 1,
'without_mask': 2
}
```
### Data Splits
| | train | test | validation|
|---------------|--------|------|----------:|
| # of examples | 1500 | 180 | 180 |
### Data Size
- download: 46 MiB
- generated: 46.8 MiB
- total: 92.8 MiB
```pycon
>>> from datasets import load_dataset
>>> ds = load_dataset("poolrf2001/mask")
>>> ds
DatasetDict({
test: Dataset({
features: ['image', 'labels'],
num_rows: 180
})
train: Dataset({
features: ['image', 'labels'],
num_rows: 1500
})
validation: Dataset({
features: ['image', 'labels'],
num_rows: 180
})
})
>>> ds["train"].features
{'image': Image(decode=True, id=None),
'labels': ClassLabel(num_classes=3, names=['mask_weared_incorrect', 'with_mask', 'without_mask'], id=None)}
>>> ds["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=180x180>,
'labels': 1}
``` | poolrf2001/mask | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:odbl",
"region:us"
]
| 2023-01-17T17:10:01+00:00 | {"language": ["en"], "license": ["odbl"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "Face Mask Detection"} | 2023-01-17T22:16:12+00:00 |
0da8dfb24526cd625b8a35d5b1092f710e87420e | # AutoTrain Dataset for project: testttt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project testttt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<113x220 RGB PIL image>",
"target": 2
},
{
"image": "<1280x720 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 184 |
| valid | 58 |
| AdamOswald1/autotrain-data-testttt | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:16:50+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:28:18+00:00 |
f5a3293e2b9a21083fd4f16383be35c49e8f03bf | # Dataset Card for "beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_portrait_Marilyn_Monroe_25000 | [
"region:us"
]
| 2023-01-17T17:24:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 120049326.0, "num_examples": 228}], "download_size": 120049639, "dataset_size": 120049326.0}} | 2023-01-17T17:47:40+00:00 |
a3b79592f955871e5444bbcfb1ae72f35804f19d | # AutoTrain Dataset for project: let
## Dataset Description
This dataset has been automatically processed by AutoTrain for project let.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 242 |
| valid | 242 |
| AdamOswald1/autotrain-data-let | [
"task_categories:image-classification",
"region:us"
]
| 2023-01-17T17:30:42+00:00 | {"task_categories": ["image-classification"]} | 2023-01-17T17:33:00+00:00 |
85c101c7d80e7514d5ce1ffc51a5b8faa888ce9a | # Dataset Card for "sm-diffusion-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | abelc/sm-diffusion-256 | [
"region:us"
]
| 2023-01-17T17:52:54+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 1420346.0, "num_examples": 32}], "download_size": 1420748, "dataset_size": 1420346.0}} | 2023-01-17T17:53:08+00:00 |
aaf7a293404474a1ca0c154dc223c11db759f57e | # Dataset Card for "italo-diffusion-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | abelc/italo-diffusion-256 | [
"region:us"
]
| 2023-01-17T17:54:34+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "audio_file", "dtype": "string"}, {"name": "slice", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 29319809.0, "num_examples": 658}], "download_size": 29297971, "dataset_size": 29319809.0}} | 2023-01-17T17:56:00+00:00 |
1287aa40dd7809c836854657cc2640ca4b39be71 | # Dataset Card for "telegram_de_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | carexl8/telegram_de_ru | [
"region:us"
]
| 2023-01-17T20:29:31+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "time", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "language tags", "sequence": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5938949, "num_examples": 10191}], "download_size": 1869587, "dataset_size": 5938949}} | 2023-04-25T21:04:20+00:00 |
0bc8f4b30c42ce70ccb2493fd7cabc4b6188626f | # Dataset Card for "olm-wikipedia-20221220-1-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent | [
"region:us"
]
| 2023-01-17T20:47:06+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 209366020.9708762, "num_examples": 65879}], "download_size": 123017868, "dataset_size": 209366020.9708762}} | 2023-01-17T20:47:18+00:00 |
90d2c6da950a6168fcee20ec69e194a034f44eef |
<div align="center">
<img width="640" alt="keremberke/protective-equipment-detection" src="https://huggingface.co/datasets/keremberke/protective-equipment-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes']
```
### Number of Images
```json
{'valid': 3570, 'test': 1935, 'train': 6473}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/protective-equipment-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7](https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi/dataset/7?ref=roboflow2huggingface)
### Citation
```
@misc{ ppes-kaxsi_dataset,
title = { PPEs Dataset },
type = { Open Source Dataset },
author = { Personal Protective Equipment },
howpublished = { \\url{ https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi } },
url = { https://universe.roboflow.com/personal-protective-equipment/ppes-kaxsi },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jul },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on July 7, 2022 at 3:49 PM GMT
It includes 11978 images.
PPE equipment is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/protective-equipment-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"region:us"
]
| 2023-01-17T20:53:31+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Manufacturing"]} | 2023-01-18T21:21:55+00:00 |
3ee36c43c9ce7104d93176747f98fb91861a38e5 | # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-568"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent-tokenized-568 | [
"region:us"
]
| 2023-01-17T20:56:11+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 300340980, "num_examples": 87819}], "download_size": 100193548, "dataset_size": 300340980}} | 2023-01-17T20:56:22+00:00 |
eea25d1105868f81289af0f1cb500ddf88e484bb | # Dataset Card for "financial_phrasebank_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cyrilzhang/financial_phrasebank_split | [
"region:us"
]
| 2023-01-17T21:26:00+00:00 | {"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}], "splits": [{"name": "train", "num_bytes": 611259.9339661576, "num_examples": 4361}, {"name": "test", "num_bytes": 67980.06603384235, "num_examples": 485}], "download_size": 418548, "dataset_size": 679240.0}} | 2023-01-17T21:26:08+00:00 |
113e1b27260b0b7070e15c7fbe71c812abe8c279 | # Dataset Card for "dreambooth_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test | [
"region:us"
]
| 2023-01-17T22:49:53+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5590808.0, "num_examples": 5}, {"name": "validation", "num_bytes": 37346797.0, "num_examples": 32}], "download_size": 1169134, "dataset_size": 42937605.0}} | 2023-01-17T23:23:29+00:00 |
10860918d9160a08b6b55ed717fa7a580725052b |
# Dataset Card for ReazonSpeech
## Dataset Description
- **Homepage:** https://research.reazon.jp/projects/ReazonSpeech
- **GitHub:** https://github.com/reazon-research/reazonspeech
## Dataset Summary
This dataset contains a diverse set of natural Japanese speech, collected
from terrestrial television streams. It contains more than 35000 hours of
audio.
Paper: [ReazonSpeech: A Free and Massive Corpus for Japanese ASR](https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf)
### Disclaimer
**TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.**
## Dataset Format
Audio files are available in FLAC format, sampled at 16000 hz.
Each audio file is accompanied by a transcription.
```
{
'name': '000/0000000000000.flac',
'audio': {
'path': '/path/to/000/0000000000000.flac',
'array': array([ 0.01000000, ...], dtype=float32),
'sampling_rate': 16000
},
'transcription': '今日のニュースをお伝えします。'
}
```
We provide 5 different dataset sizes. Here is the list of available
sizes and their approximate recording hours.
| Name | Size | Hours |
| -------- | ----- | ----------- |
| `tiny` | 600MB | 8.5 hours |
| `small` | 6GB | 100 hours |
| `medium` | 65GB | 1000 hours |
| `large` | 330GB | 5000 hours |
| `all` | 2.3TB | 35000 hours |
You can access this dataset through Hugging Face `datasets` library.
```
from datasets import load_dataset
ds = load_dataset("reazon-research/reazonspeech", "all", trust_remote_code=True)
```
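Once loaded, each sample exposes the fields shown above; for instance (a sketch, assuming the `tiny` configuration and a `train` split to keep the download small):

```python
from datasets import load_dataset

ds = load_dataset("reazon-research/reazonspeech", "tiny", trust_remote_code=True)

sample = ds["train"][0]
print(sample["transcription"])           # Japanese transcript text
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"][:5])      # decoded waveform samples
```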
## Access the older versions
If you want to access the older versions of ReazonSpeech corpus,
you can use the following tags.
| Name | Size | Hours |
| ----------- | ----- | ----------- |
| `small-v1` | 350MB | 5 hours |
| `medium-v1` | 22GB | 300 hours |
| `all-v1` | 1TB | 19000 hours |
## License
[CDLA-Sharing-1.0](https://cdla.dev/sharing-1-0/)
TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET
SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4.
| reazon-research/reazonspeech | [
"task_categories:automatic-speech-recognition",
"size_categories:10M<n<100M",
"language:ja",
"license:other",
"region:us"
]
| 2023-01-17T23:03:48+00:00 | {"language": ["ja"], "license": "other", "size_categories": ["10M<n<100M"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "ReazonSpeech"} | 2024-01-21T07:55:59+00:00 |
980f33e8374ad0a3954b9841611644da2547b501 | # Dataset Card for "OxfordPets_test_facebook_opt_350m_Visclues_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_test_facebook_opt_350m_Visclues_20 | [
"region:us"
]
| 2023-01-17T23:11:19+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_3", "num_bytes": 277693.0, "num_examples": 20}, {"name": "fewshot_5", "num_bytes": 292064.0, "num_examples": 20}, {"name": "fewshot_1", "num_bytes": 263406.0, "num_examples": 20}, {"name": "fewshot_2", "num_bytes": 270668.0, "num_examples": 20}], "download_size": 784934, "dataset_size": 1103831.0}} | 2023-01-17T23:35:01+00:00 |
7f8aa66317b438eeac50d62de5db7870656c6e03 | # Dataset Card for "sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anjalyjayakrishnan/sample | [
"region:us"
]
| 2023-01-18T03:29:02+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int64"}, {"name": "version_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1508, "num_examples": 5}, {"name": "test", "num_bytes": 956, "num_examples": 5}], "download_size": 7783, "dataset_size": 2464}} | 2023-02-07T00:42:26+00:00 |
37773c2e6034a85d3581590de7b38abbb2d85e96 | # Dataset Card for GermanRentalAgreements
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/sebischair/Legal-Sentence-Classification-Datasets-and-Models)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| joelniklaus/german_rental_agreements | [
"region:us"
]
| 2023-01-18T04:02:40+00:00 | {} | 2023-01-18T04:03:25+00:00 |
e1385bb979a4d10d5a65350e2bf4b606cbf426b1 | # Dataset Card for "beautiful_interesting_spectacular_photo_dog_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_dog_25000 | [
"region:us"
]
| 2023-01-18T06:36:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 361773346.0, "num_examples": 504}], "download_size": 361776700, "dataset_size": 361773346.0}} | 2023-01-18T06:37:24+00:00 |
c5742eb7ad92ad0303a94cccbb4003a7da7138f5 | # Dataset Card for "dreambooth_test_with_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test_with_reg | [
"region:us"
]
| 2023-01-18T06:59:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 183792899.0, "num_examples": 200}, {"name": "validation", "num_bytes": 37346753.0, "num_examples": 32}], "download_size": 78739258, "dataset_size": 221139652.0}} | 2023-01-18T08:09:31+00:00 |
9ad01b0d6fbe7f3952e91be6d421a38f2a3cf6c6 | Joe02/obui | [
"license:other",
"region:us"
]
| 2023-01-18T07:35:14+00:00 | {"license": "other"} | 2023-03-25T00:32:14+00:00 |
|
1199a0e08751903da75b67410b654bb092e6e4e8 |
# Dataset Card for Livedoor News Corpus
[CI](https://github.com/shunk031/huggingface-datasets_livedoor-news-corpus/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.rondhuit.com/download.html#ldcc
- **Repository:** https://github.com/shunk031/huggingface-datasets_livedoor-news-corpus
### Dataset Summary
> This corpus was created by collecting news articles from "livedoor News," operated by NHN Japan Corporation, that are covered by the Creative Commons license noted below, and removing HTML tags as far as possible.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset(
"shunk031/livedoor-news-corpus",
train_ratio=0.8,
val_ratio=0.1,
test_ratio=0.1,
random_state=42,
shuffle=True,
)
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 5894
# })
# validation: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 737
# })
# test: Dataset({
# features: ['url', 'date', 'title', 'content', 'category'],
# num_rows: 736
# })
# })
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
> Each article file is covered by the Creative Commons "Attribution – NoDerivs" license. Because the required credit line differs by news category, please see the LICENSE.txt file in each subdirectory of the extracted download. livedoor is a registered trademark of NHN Japan Corporation.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [RONDHUIT Co., Ltd.](https://www.rondhuit.com/) for creating this dataset.
| shunk031/livedoor-news-corpus | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"language_creators:found",
"multilinguality:monolingual",
"language:ja",
"license:cc-by-nd-4.0",
"region:us"
]
| 2023-01-18T08:30:24+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["ja"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "livedoor-news-corpus", "tags": []} | 2023-10-28T04:40:17+00:00 |
e6f919319d63d54785310ade0180cd7c7b7dca3d | # Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Poulami/processed_bert_dataset | [
"region:us"
]
| 2023-01-18T08:30:53+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24027177600.0, "num_examples": 6674216}], "download_size": 5886705553, "dataset_size": 24027177600.0}} | 2023-01-18T09:23:25+00:00 |
84bc4ab1c9c6399e8d6f01c458bd9ef71fe8d397 | # WEC-Eng
A large-scale dataset for cross-document event coreference extracted from English Wikipedia.<br/>
- **Repository (Code for generating WEC):** https://github.com/AlonEirew/extract-wec
- **Paper:** https://aclanthology.org/2021.naacl-main.198/
### Languages
English
## Load Dataset
You can read in WEC-Eng files as follows (using the **huggingface_hub** library):
```python
from huggingface_hub import hf_hub_url, cached_download
import json
REPO_ID = "datasets/biu-nlp/WEC-Eng"
splits_files = ["Dev_Event_gold_mentions_validated.json",
"Test_Event_gold_mentions_validated.json",
"Train_Event_gold_mentions.json"]
wec_eng = list()
for split_file in splits_files:
wec_eng.append(json.load(open(cached_download(
hf_hub_url(REPO_ID, split_file)), "r")))
```
## Dataset Structure
### Data Splits
- **Final version of the English CD event coreference dataset**<br>
- Train - Train_Event_gold_mentions.json
- Dev - Dev_Event_gold_mentions_validated.json
- Test - Test_Event_gold_mentions_validated.json
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Clusters | 7,042 | 233 | 322 |
| Event Mentions | 40,529 | 1,250 | 1,893 |
- **The non (within clusters) controlled version of the dataset (lexical diversity)**<br>
- All (experimental) - All_Event_gold_mentions_unfiltered.json
### Data Instances
```json
{
"coref_chain": 2293469,
"coref_link": "Family Values Tour 1998",
"doc_id": "House of Pain",
"mention_context": [
"From",
"then",
"on",
",",
"the",
"members",
"continued",
"their"
],
"mention_head": "Tour",
"mention_head_lemma": "Tour",
"mention_head_pos": "PROPN",
"mention_id": "108172",
"mention_index": 1,
"mention_ner": "UNK",
"mention_type": 8,
"predicted_coref_chain": null,
"sent_id": 2,
"tokens_number": [
50,
51,
52,
53
],
"tokens_str": "Family Values Tour 1998",
"topic_id": -1
}
```
### Data Fields
|Field|Value Type|Value|
|---|:---:|---|
|coref_chain|Numeric|Coreference chain/cluster ID|
|coref_link|String|Coreference link Wikipedia page/article title|
|doc_id|String|Mention page/article title|
|mention_context|List[String]|Tokenized mention paragraph (including mention)|
|mention_head|String|Mention span head token|
|mention_head_lemma|String|Mention span head token lemma|
|mention_head_pos|String|Mention span head token POS|
|mention_id|String|Mention id|
|mention_index|Numeric|Mention index in json file|
|mention_ner|String|Mention NER|
|tokens_number|List[Numeric]|Mention token ids within the context|
|tokens_str|String|Mention span text|
|topic_id|Ignore|Ignore|
|mention_type|Ignore|Ignore|
|predicted_coref_chain|Ignore|Ignore|
|sent_id|Ignore|Ignore|
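As a small sketch of how these fields fit together, the snippet below groups the training mentions (loaded as in the example above) into coreference clusters by their `coref_chain` ID; it assumes each split file parses to a list of mention records, as the instance above suggests, and the variable names are illustrative.

```python
from collections import defaultdict

# Following the loading order above, wec_eng[2] holds the training split
# (Train_Event_gold_mentions.json), assumed to be a list of mention records.
train_mentions = wec_eng[2]

clusters = defaultdict(list)
for mention in train_mentions:
    # Group mention span texts by their coreference chain/cluster ID
    clusters[mention["coref_chain"]].append(mention["tokens_str"])

print(f"{len(clusters)} coreference clusters in the training split")
```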
## Citation
```
@inproceedings{eirew-etal-2021-wec,
title = "{WEC}: Deriving a Large-scale Cross-document Event Coreference dataset from {W}ikipedia",
author = "Eirew, Alon and
Cattan, Arie and
Dagan, Ido",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.198",
doi = "10.18653/v1/2021.naacl-main.198",
pages = "2498--2510",
abstract = "Cross-document event coreference resolution is a foundational task for NLP applications involving multi-text processing. However, existing corpora for this task are scarce and relatively small, while annotating only modest-size clusters of documents belonging to the same topic. To complement these resources and enhance future research, we present Wikipedia Event Coreference (WEC), an efficient methodology for gathering a large-scale dataset for cross-document event coreference from Wikipedia, where coreference links are not restricted within predefined topics. We apply this methodology to the English Wikipedia and extract our large-scale WEC-Eng dataset. Notably, our dataset creation method is generic and can be applied with relatively little effort to other Wikipedia languages. To set baseline results, we develop an algorithm that adapts components of state-of-the-art models for within-document coreference resolution to the cross-document setting. Our model is suitably efficient and outperforms previously published state-of-the-art results for the task.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License
## Contact
If you have any questions please create a Github issue at https://github.com/AlonEirew/extract-wec. | biu-nlp/WEC-Eng | [
"region:us"
]
| 2023-01-18T09:11:52+00:00 | {} | 2023-01-18T13:47:10+00:00 |
15439bd777b2fb82f090c80e12d4da40c06522b4 |
<div align="center">
<img width="640" alt="keremberke/chest-xray-classification" src="https://huggingface.co/datasets/keremberke/chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['NORMAL', 'PNEUMONIA']
```
### Number of Images
```json
{'train': 4077, 'test': 582, 'valid': 1165}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/chest-xray-classification", name="full")
example = ds['train'][0]
```
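For a quick look at the class balance, a sketch like the one below can be used; it assumes the export exposes an integer `labels` column (the usual roboflow2huggingface classification layout):

```python
from collections import Counter

label_feature = ds['train'].features['labels']
counts = Counter(ds['train']['labels'])
for label_id, count in sorted(counts.items()):
    # Map each integer class id back to its human-readable name
    print(label_feature.int2str(label_id), count)
```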
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/chest-x-rays-qjmia/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 31, 2022 at 3:11 PM GMT
It includes 5824 images.
Pneumonia cases are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| keremberke/chest-xray-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Biology",
"region:us"
]
| 2023-01-18T09:22:08+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Biology"]} | 2023-01-18T09:25:27+00:00 |
0a9f333828628586dcc023e6e108e0e003ca7f71 | # Dataset Card for "dfg_augmented_mbpp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | reshinthadith/dfg_augmented_mbpp | [
"region:us"
]
| 2023-01-18T09:26:49+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32138, "num_examples": 95}], "download_size": 17897, "dataset_size": 32138}} | 2023-01-18T09:27:02+00:00 |
27f567c7bdad157df4fc2e3d53b6fd957a9d38a4 |
<div align="center">
<img width="640" alt="keremberke/painting-style-classification" src="https://huggingface.co/datasets/keremberke/painting-style-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Realism', 'Art_Nouveau_Modern', 'Analytical_Cubism', 'Cubism', 'Expressionism', 'Action_painting', 'Synthetic_Cubism', 'Symbolism', 'Ukiyo_e', 'Naive_Art_Primitivism', 'Post_Impressionism', 'Impressionism', 'Fauvism', 'Rococo', 'Minimalism', 'Mannerism_Late_Renaissance', 'Color_Field_Painting', 'High_Renaissance', 'Romanticism', 'Pop_Art', 'Contemporary_Realism', 'Baroque', 'New_Realism', 'Pointillism', 'Northern_Renaissance', 'Early_Renaissance', 'Abstract_Expressionism']
```
### Number of Images
```json
{'valid': 1295, 'train': 4493, 'test': 629}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/painting-style-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/art-dataset/wiki-art/dataset/1](https://universe.roboflow.com/art-dataset/wiki-art/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ wiki-art_dataset,
title = { wiki art Dataset },
type = { Open Source Dataset },
author = { Art Dataset },
howpublished = { \\url{ https://universe.roboflow.com/art-dataset/wiki-art } },
url = { https://universe.roboflow.com/art-dataset/wiki-art },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 9, 2022 at 1:47 AM GMT
It includes 6417 images.
27 classes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| keremberke/painting-style-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"region:us"
]
| 2023-01-18T09:27:05+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-18T09:30:28+00:00 |
fc5adaa52a367e1554fb32f433a25b167d140c04 | Kokoboy/Ayaka_Skin | [
"license:openrail",
"region:us"
]
| 2023-01-18T09:36:07+00:00 | {"license": "openrail"} | 2023-01-18T11:54:10+00:00 |
|
34b5d5763e73dd7e4ab81acf6518d0acbd893c9c |
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="https://huggingface.co/datasets/keremberke/table-extraction/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
```
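To eyeball the annotations, here is a minimal visualization sketch; it assumes the export exposes an `objects` column with COCO-style `[x, y, width, height]` boxes (the usual roboflow2huggingface object-detection layout), so adjust the field names if your copy differs:

```python
from PIL import ImageDraw

example = ds['train'][0]
image = example['image'].copy()
draw = ImageDraw.Draw(image)

for bbox in example['objects']['bbox']:
    x, y, w, h = bbox
    # COCO boxes are [top-left x, top-left y, width, height]
    draw.rectangle([x, y, x + w, y + h], outline='red', width=3)

image.save('table_annotations.png')
```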
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 342 images.
Data-table instances are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| keremberke/table-extraction | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Documents",
"region:us"
]
| 2023-01-18T09:42:19+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Documents"]} | 2023-01-18T09:43:03+00:00 |
3cde19e1bd95af17f0bd5b24cec75b249814b0f4 |
<div align="center">
<img width="640" alt="keremberke/plane-detection" src="https://huggingface.co/datasets/keremberke/plane-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['planes']
```
### Number of Images
```json
{'test': 25, 'valid': 50, 'train': 175}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/plane-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4](https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ overhead-plane-detector_dataset,
title = { Overhead Plane Detector Dataset },
type = { Open Source Dataset },
author = { SkyBot Cam },
howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } },
url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 3:11 PM GMT
It includes 250 images.
Planes are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| keremberke/plane-detection | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
]
| 2023-01-18T09:43:30+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-27T13:46:18+00:00 |
972efa5575dfa6c3eef01e935f6a029089b61daa | # The CoreSearch Dataset
A large-scale dataset for cross-document event coreference **search**.
- **Paper:** [Cross-document Event Coreference Search: Task, Dataset and Modeling](https://arxiv.org/abs/2210.12654)
- **<ins>CoreSearchV2:</ins>** A cleaner version of this dataset is now available at [https://huggingface.co/datasets/biu-nlp/CoreSearchV2](https://huggingface.co/datasets/biu-nlp/CoreSearchV2)
### Languages
English
## Load Dataset
You can read/download the dataset files following the Hugging Face Hub instructions.
For example, the code below loads the CoreSearch DPR folder:
```python
from huggingface_hub import hf_hub_url, cached_download
import json
REPO_ID = "datasets/Intel/CoreSearch"
dpr_files = ["dpr/Dev.json", "dpr/Train.json", "dpr/Test.json"]
dpr_jsons = list()
for _file in dpr_files:
dpr_jsons.append(json.load(open(cached_download(
hf_hub_url(REPO_ID, _file)), "r")))
```
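As a quick sanity check of what was loaded, the snippet below prints the size of each split; it only assumes each file parses to a list of DPR-style records:

```python
# Names follow the order of dpr_files above
for name, records in zip(["Dev", "Train", "Test"], dpr_jsons):
    print(f"{name}: {len(records)} records")
```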
### Data Splits
- **Final version of the CD event coreference search dataset**<br>
| | Train | Valid | Test | Total |
| ----- | ------ | ----- | ---- | ---- |
| WEC-Eng Validated Data | | | | |
| # Clusters | 237 | 49 | 236 | 522 |
| # Passages (with Mentions) | 1,503 | 341 | 1,266 | 3,110 |
| # Added Distractor Passages | 922,736 | 923,376 | 923,746 | 2,769,858 |
| # Total Passages | 924,239 | 923,717 | 925,012 | 2,772,968 |
## Citation
```
@inproceedings{eirew-etal-2022-cross,
title = "Cross-document Event Coreference Search: Task, Dataset and Modeling",
author = "Eirew, Alon and
Caciularu, Avi and
Dagan, Ido",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.58",
pages = "900--913",
    abstract = "The task of Cross-document Coreference Resolution has been traditionally formulated as requiring to identify all coreference links across a given set of documents. We propose an appealing, and often more applicable, complementary set up for the task {--} Cross-document Coreference Search, focusing in this paper on event coreference. Concretely, given a mention in context of an event of interest, considered as a query, the task is to find all coreferring mentions for the query event in a large document collection. To support research on this task, we create a corresponding dataset, which is derived from Wikipedia while leveraging annotations in the available Wikipedia Event Coreference dataset (WEC-Eng). Observing that the coreference search setup is largely analogous to the setting of Open Domain Question Answering, we adapt the prominent Deep Passage Retrieval (DPR) model to our setting, as an appealing baseline. Finally, we present a novel model that integrates a powerful coreference scoring scheme into the DPR architecture, yielding improved performance.",
}
```
## License
We provide the following data sets under a <a href="https://creativecommons.org/licenses/by-sa/3.0/deed.en_US">Creative Commons Attribution-ShareAlike 3.0 Unported License</a>. It is based on content extracted from Wikipedia that is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License
## Contact
If you have any questions please create a Github issue at <a href="https://github.com/AlonEirew/CoreSearch">https://github.com/AlonEirew/CoreSearch</a>. | biu-nlp/CoreSearch | [
"arxiv:2210.12654",
"region:us"
]
| 2023-01-18T09:49:31+00:00 | {} | 2023-03-23T09:39:55+00:00 |
566b806ef764bafa34d823b57aea1cbdc068265c | # AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "The constitution of Jordan grants its monarch the right to withhold assent to laws passed by its parliament. Article 93 of that document gives the Jordanian sovereign six months to sign or veto any legislation sent to him from the National Assembly; if he vetoes it within that timeframe, the assembly may override his veto by a two-thirds vote of both houses; otherwise, the law does not go into effect (but it may be reconsidered in the next session of the assembly). If the monarch fails to act within six months of the bill being presented to him, it becomes law without his signature.",
"question": "What happens if the soverign doesn't sign the bill within the six-month time frame?",
"answers.text": [
", it becomes law without his signature"
],
"answers.answer_start": [
550
],
"feat_id": [
"572ab241be1ee31400cb818b"
],
"feat_title": [
"Royal_assent"
]
},
{
"context": "The modern Greek theatre was born after the Greek independence, in the early 19th century, and initially was influenced by the Heptanesean theatre and melodrama, such as the Italian opera. The Nobile Teatro di San Giacomo di Corf\u00f9 was the first theatre and opera house of modern Greece and the place where the first Greek opera, Spyridon Xyndas' The Parliamentary Candidate (based on an exclusively Greek libretto) was performed. During the late 19th and early 20th century, the Athenian theatre scene was dominated by revues, musical comedies, operettas and nocturnes and notable playwrights included Spyridon Samaras, Dionysios Lavrangas, Theophrastos Sakellaridis and others.",
"question": "What was the first Greek opera?",
"answers.text": [
"The Parliamentary Candidate"
],
"answers.answer_start": [
346
],
"feat_id": [
"57267a75dd62a815002e8683"
],
"feat_title": [
"Greece"
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
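Because answers are stored as character offsets into the context (SQuAD-style), a quick consistency check can be useful; the sketch below is illustrative, with the repo id taken from this card and the split/field names following the tables above:

```python
from datasets import load_dataset

dataset = load_dataset("96harsh56/autotrain-data-test")
sample = dataset["train"][0]

start = sample["answers.answer_start"][0]
text = sample["answers.text"][0]

# The answer text should appear in the context at the recorded character offset
assert sample["context"][start:start + len(text)] == text
```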
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 104204 |
| valid | 26051 |
| 96harsh56/autotrain-data-test | [
"region:us"
]
| 2023-01-18T10:02:07+00:00 | {} | 2023-02-15T06:29:58+00:00 |
fc9f5f881b814de9f5d73c489a80a32e764579f6 | https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
Added to tasksource for:
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | tasksource/cycic_multiplechoice | [
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
]
| 2023-01-18T10:59:28+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["multiple-choice"]} | 2023-01-18T12:15:47+00:00 |
ad7eb0d2022e36d762903851ef3ac1d612da96be | https://storage.googleapis.com/ai2-mosaic/public/cycic/CycIC-train-dev.zip
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
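The processed data is hosted in this repository, so it can presumably be loaded directly with 🤗 Datasets; the repo id below is taken from this card, and the available split names are an assumption:

```python
from datasets import load_dataset

# Repo id matches this card; inspect the returned DatasetDict for split names
ds = load_dataset("tasksource/cycic_classification")
print(ds)
```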
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
Added to tasksource for:
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` | tasksource/cycic_classification | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
]
| 2023-01-18T11:03:35+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["question-answering", "text-classification"]} | 2023-05-31T07:47:48+00:00 |
23087f93ef072d1a828aa2845d08a2f1f0d1bd92 |
# Arithmetic problems for a dialogue system

The dataset contains Russian-language samples with simple math problems, roughly of the following form:
```
- Фонарик Федора работает от 2 батареек, а фонарик Лехи от 6. Сколько батареек нужно фонарикам Федора и Лехи в сумме?
- 2+6=8, столько батареек потребуется.
- Теперь прибавь к результату 469, что получилось?
- 8 плюс 469 равно 477
- Подели на 53, что получилось?
- 9
```
Most of the problems involve arithmetic operations. There is also a number of problems
on finding the roots of a quadratic equation:
```
- Найди действительные корни квадратного уравнения a⋅x²+b⋅x+c для a=45, b=225, c=-270
- Тут два действительных корня -6 и 1
```
There is also a growing set of problems with a worked-out solution:
```
- В болотистых лесах проживает 8 сусликов. Охотник съедает по одному суслику каждые 9 дней. Сколько сусликов останется через 12 дней?
- За 12 дней охотник пообедает 1 раз. Поэтому останется 8-1=7 сусликов.
```
Some problems are constructed to force the model to pay attention not merely to the
presence of numbers, but to the context in which they are used:
```
- Вика принесла в школу 5 мандаринов. Друзья попросили ее поделиться с ними мандаринами. Она отдала им 3 штуки. Сколько мандаринов Вика отдала?
- 3
```
Sometimes the numbers in a problem have no bearing on its substance, which should push the solving model even harder to take context into account:
```
- Перемножив восемь и семь, учитель средней школы №77 получил 5084. Он верно посчитал?
- Учитель средней школы №77 ошибся, так как 8*7=56, а не 5084
```
## Data format

Each sample contains a list of related utterances without the "- " prefix, forming a chain of arithmetic tasks
in which the statement of each new problem requires analyzing at least the previous utterance.
## Lexical variability of answers

For many problems the answer is phrased not simply as a number; accompanying text is added to it:
```
- Чему равно 2+2?
- 2+2 равно 4
```
## Metrics of generative models

After fine-tuning (1 epoch, lr=1e-5) on 90% of the dataset, the following metrics are obtained on the held-out test part:

| Model | Mean deviation of the numeric answer from the correct one | Share of correct answers |
| --- | --- | --- |
| sberbank-ai/rugpt3small_based_on_gpt2 | 8.03e+02% | 0.057 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 2.89e+02% | 0.085 |
| sberbank-ai/rugpt3large_based_on_gpt2 | 1.58e+02% | 0.131 |
| facebook/xglm-2.9B | 8.13e+02% | 0.224 |
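As an illustration of how such metrics can be computed, here is a minimal sketch; the regex-based number extraction and all names in it are assumptions for illustration, not the exact evaluation code:

```python
import re

def first_number(text):
    # Pull the first integer or decimal number out of a generated answer
    match = re.search(r'-?\d+(?:[.,]\d+)?', text)
    return float(match.group(0).replace(',', '.')) if match else None

def score(predictions, references):
    deviations, correct = [], 0
    for pred_text, ref_value in zip(predictions, references):
        value = first_number(pred_text)
        if value is not None and value == ref_value:
            correct += 1
        if value is not None and ref_value != 0:
            # Relative deviation of the predicted number from the reference
            deviations.append(abs(value - ref_value) / abs(ref_value))
    return sum(deviations) / max(len(deviations), 1), correct / len(predictions)
```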
## Sample generator

The dataset was built with the template-based generation engine from this repository: [https://github.com/Koziev/math](https://github.com/Koziev/math).
## Using the dataset

The dataset is used for training a [chatbot](https://github.com/Koziev/chatbot).
| inkoziev/arithmetic | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:ru",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-01-18T11:18:15+00:00 | {"language_creators": ["machine-generated"], "language": ["ru"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "source_datasets": [], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "arithmetic", "tags": []} | 2023-02-18T12:40:43+00:00 |
e19db6759252ca92467b067536ff74ae14e0a5f5 |
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Usage](#dataset-usage)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This is the first time that disparate camera trap data sets have been aggregated into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists of only camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also has other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those that want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [email protected].
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif]([email protected]) and [Zhi Zhang]([email protected]).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [[email protected]]([email protected]).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton]([email protected]).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will]([email protected]) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub]([email protected]) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {doi:10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12132 sequences of camera trap images, totaling 30227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71688 sequences of camera trap images, totaling 73034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4747 sequences of camera trap images, totaling 10072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner]([email protected]) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact [email protected].
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez]([email protected]).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
### Data Fields
Different datasets may have slightly varying fields, which include:
`file_name`: the file name \
`width` and `height`: the dimensions of the image \
`study`: which research study the image was collected as part of \
`location` : the name of the location at which the image was taken \
`annotations`: information about image annotation, which includes the taxonomy information, bounding box/boxes (`bbox`/`bboxes`) if any, as well as any other annotation information. \
`image` : the `path` to download the image and any other information that is available, e.g. its size in `bytes`.
### Data Splits
This dataset does not have a predefined train/test split.
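If a fixed split is needed, one can be created on the fly with the standard 🤗 Datasets API; the configuration name below is just one example:

```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
# Carve out a 20% test set with a fixed seed for reproducibility
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```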
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](mailto:[email protected]), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Working with Taxonomies
All the taxonomy categories are saved as ClassLabels, which can be converted to strings as needed. Strings can likewise be converted to integers as needed, to filter the dataset. In the example below we filter the "Caltech Camera Traps" dataset to find all the entries with a "felis catus" as the species for the first annotation.
```python
from datasets import load_dataset

dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
# Filters to show only cats
cats = dataset.filter(lambda x: x["annotations"]["taxonomy"][0]["species"] == taxonomy["species"].str2int("felis catus"))
```
The original common names have been saved with their taxonomy mappings in this repository in `common_names_to_tax.json`. These can be used, for example, to map from a taxonomy combination to a common name to help make queries more legible. Note, however, that there is a small number of duplicate common names with different taxonomy values which you will need to disambiguate.
The following example loads the first "sea turtle" in the "Island Conservation Camera Traps" dataset.
```python
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")

dataset = load_dataset("society-ethics/lila_camera_traps", "Island Conservation Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]

sea_turtle = LILA_COMMON_NAMES_TO_TAXONOMY.loc["sea turtle"].to_dict()
sea_turtle = {k: taxonomy[k].str2int(v) if v is not None else v for k, v in sea_turtle.items()}  # Map to ClassLabel integers

sea_turtle_dataset = dataset.filter(lambda x: x["annotations"]["taxonomy"][0] == sea_turtle)
```
The example below selects a random item from the dataset, and then maps from the taxonomy to a common name:
```python
import numpy as np
import pandas as pd
from datasets import load_dataset

LILA_COMMON_NAMES_TO_TAXONOMY = pd.read_json("https://huggingface.co/datasets/society-ethics/lila_camera_traps/raw/main/data/common_names_to_tax.json", lines=True).set_index("common_name")
dataset = load_dataset("society-ethics/lila_camera_traps", "Caltech Camera Traps", split="train")
taxonomy = dataset.features["annotations"].feature["taxonomy"]
random_entry = dataset.shuffle()[0]
filter_taxonomy = random_entry["annotations"]["taxonomy"][0]
filter_keys = list(map(lambda x: (x[0], taxonomy[x[0]].int2str(x[1])), filter(lambda x: x[1] is not None, list(filter_taxonomy.items()))))
if len(filter_keys) > 0:
print(LILA_COMMON_NAMES_TO_TAXONOMY[np.logical_and.reduce([
LILA_COMMON_NAMES_TO_TAXONOMY[k] == v for k,v in filter_keys
])])
else:
print("No common name found for the item.")
```
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
| polinaeterna/lila_camera_traps | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"biodiversity",
"camera trap data",
"wildlife monitoring",
"region:us"
]
| 2023-01-18T12:10:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-classification"], "pretty_name": "LILA Camera Traps", "tags": ["biodiversity", "camera trap data", "wildlife monitoring"], "duplicated_from": "society-ethics/lila_camera_traps"} | 2023-01-18T12:10:17+00:00 |
371f5d2be3e3cbf1c4a3baeb88debbd507fcb7d8 |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
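Each task below is available as a separate configuration; a minimal way to load one of them with the `datasets` library (the `cola` config is shown here as an example):
```python
from datasets import load_dataset

# Each GLUE task is a config; "cola" is shown here as an example
cola = load_dataset("glue", "cola")
print(cola["train"][0])
```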
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the corpus authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs (QQP) dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
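Since `label` is a `ClassLabel` in these configurations, the integer values can be mapped back to the names listed above; a small sketch:
```python
from datasets import load_dataset

ds = load_dataset("glue", "mnli", split="validation_matched")
label = ds.features["label"]
print(label.int2str(ds[0]["label"]))  # e.g. "neutral"
```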
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | mariosasko/glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"qa-nli",
"coreference-nli",
"paraphrase-identification",
"region:us"
]
| 2023-01-18T12:19:24+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "tags": ["qa-nli", "coreference-nli", "paraphrase-identification"], "dataset_info": [{"config_name": "cola", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "unacceptable", "1": "acceptable"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 61049, "num_examples": 1063}, {"name": "train", "num_bytes": 489149, "num_examples": 8551}, {"name": "validation", "num_bytes": 60850, "num_examples": 1043}], "download_size": 376971, "dataset_size": 611048}, {"config_name": "sst2", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 217556, "num_examples": 1821}, {"name": "train", "num_bytes": 4715283, "num_examples": 67349}, {"name": "validation", "num_bytes": 106692, "num_examples": 872}], "download_size": 7439277, "dataset_size": 5039531}, {"config_name": "mrpc", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_equivalent", "1": "equivalent"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 443498, "num_examples": 1725}, {"name": "train", "num_bytes": 946146, "num_examples": 3668}, {"name": "validation", "num_bytes": 106142, "num_examples": 408}], "download_size": 1494541, "dataset_size": 1495786}, {"config_name": "qqp", "features": [{"name": "question1", "dtype": "string"}, {"name": "question2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_duplicate", "1": "duplicate"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 50901116, "num_examples": 363846}, {"name": "validation", "num_bytes": 5653794, "num_examples": 40430}, {"name": "test", "num_bytes": 55171431, "num_examples": 390965}], "download_size": 41696084, "dataset_size": 111726341}, {"config_name": "stsb", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": "float32"}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 170847, "num_examples": 1379}, {"name": "train", "num_bytes": 758394, "num_examples": 5749}, {"name": "validation", "num_bytes": 217012, "num_examples": 1500}], "download_size": 802872, "dataset_size": 1146253}, {"config_name": "mnli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test_matched", "num_bytes": 1854787, "num_examples": 9796}, {"name": "test_mismatched", "num_bytes": 
1956866, "num_examples": 9847}, {"name": "train", "num_bytes": 74865118, "num_examples": 392702}, {"name": "validation_matched", "num_bytes": 1839926, "num_examples": 9815}, {"name": "validation_mismatched", "num_bytes": 1955384, "num_examples": 9832}], "download_size": 312783507, "dataset_size": 82472081}, {"config_name": "mnli_mismatched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1956866, "num_examples": 9847}, {"name": "validation", "num_bytes": 1955384, "num_examples": 9832}], "download_size": 312783507, "dataset_size": 3912250}, {"config_name": "mnli_matched", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1854787, "num_examples": 9796}, {"name": "validation", "num_bytes": 1839926, "num_examples": 9815}], "download_size": 312783507, "dataset_size": 3694713}, {"config_name": "qnli", "features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 1376516, "num_examples": 5463}, {"name": "train", "num_bytes": 25677924, "num_examples": 104743}, {"name": "validation", "num_bytes": 1371727, "num_examples": 5463}], "download_size": 10627589, "dataset_size": 28426167}, {"config_name": "rte", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "not_entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 975936, "num_examples": 3000}, {"name": "train", "num_bytes": 848888, "num_examples": 2490}, {"name": "validation", "num_bytes": 90911, "num_examples": 277}], "download_size": 697150, "dataset_size": 1915735}, {"config_name": "wnli", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not_entailment", "1": "entailment"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 37992, "num_examples": 146}, {"name": "train", "num_bytes": 107517, "num_examples": 635}, {"name": "validation", "num_bytes": 12215, "num_examples": 71}], "download_size": 28999, "dataset_size": 157724}, {"config_name": "ax", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 238392, "num_examples": 1104}], "download_size": 222257, "dataset_size": 238392}], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": 
"validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]} | 2023-06-08T15:42:25+00:00 |
cdb053c69acbcf6cc8bf7bc904a12c85ee0fda06 | metaeval/mega | [
"license:apache-2.0",
"region:us"
]
| 2023-01-18T12:20:22+00:00 | {"license": "apache-2.0"} | 2023-03-24T13:55:03+00:00 |
|
f25a499240f8653404c89da4f1763c0a75cb0cd0 |
# My Solid Theme
## Description
A copy of the solid theme
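A hedged usage sketch (loading the theme from the Hub by its repo id; this assumes a gradio version with theme support):
```python
import gradio as gr

# Load this theme from the Hub by its repo id (assumed supported by your gradio version)
with gr.Blocks(theme="freddyaboulton/my-solid-theme") as demo:
    gr.Button("Themed button")

demo.launch()
```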
## Preview

## Contributions
Thanks to [@freddyaboulton](https://huggingface.co/freddyaboulton) for adding this gradio theme!
| freddyaboulton/my-solid-theme | [
"license:apache-2.0",
"gradio-theme",
"region:us"
]
| 2023-01-18T12:28:59+00:00 | {"license": "apache-2.0", "tags": ["gradio-theme"], "title": "My Solid Theme", "colorFrom": "orange", "colorTo": "purple", "sdk": "gradio", "sdk_version": "3.16.2", "app_file": "app.py", "pinned": false} | 2023-01-18T21:04:08+00:00 |
395d4ad97de47310033ae51dc324c07fb595058f | MBJC/diffsinger_keqing | [
"license:mit",
"region:us"
]
| 2023-01-18T12:32:11+00:00 | {"license": "mit"} | 2023-01-18T12:32:11+00:00 |
|
c6a5c807ec7896626b62ac3db727afafc4958b41 | RUC-DataLab/rel-heter | [
"region:us"
]
| 2023-01-18T12:32:57+00:00 | {} | 2023-01-18T14:27:12+00:00 |
|
fb7f7d7102fd040c4211002b0c43e3ab727afffc | # UTK Faces
Original paper: [Age Progression/Regression by Conditional Adversarial Autoencoder](https://arxiv.org/abs/1702.08423)
Homepage: https://susanqq.github.io/UTKFace/
Bibtex:
```
@inproceedings{zhifei2017cvpr,
title={Age Progression/Regression by Conditional Adversarial Autoencoder},
  author={Zhang, Zhifei and Song, Yang and Qi, Hairong},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2017},
organization={IEEE}
}
``` | nlphuji/utk_faces | [
"arxiv:1702.08423",
"region:us"
]
| 2023-01-18T12:50:13+00:00 | {} | 2023-01-18T13:10:37+00:00 |
8e418a32628e853f1ba384c3f3ee6eb26b2a8aa5 |
## Required installation
```bash
pip3 install pypdf2 pdf2image
sudo apt-get install poppler-utils
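# Quick sanity check of the toolchain (the file name is illustrative):
python3 -c "from pdf2image import convert_from_path; convert_from_path('sample.pdf')[0].save('page1.png')"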
``` | jordyvl/unit-test_PDFfolder | [
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-01-18T13:25:33+00:00 | {"license": "cc-by-nc-4.0"} | 2023-01-18T19:52:11+00:00 |
4d0ff18143b5a7e1b1e79beb540c04549d1e59d3 |
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named the **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same license. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
# Citation
Check out the paper [arXiv:2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
    journal={arXiv preprint arXiv:2301.07597},
year = "2023",
}
``` | Hello-SimpleAI/HC3 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
]
| 2023-01-18T14:01:20+00:00 | {"language": ["en", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "sentence-similarity", "zero-shot-classification"], "tags": ["ChatGPT", "SimpleAI", "Detection", "OOD"]} | 2023-01-21T13:10:10+00:00 |
09a687b8dc164b89e7df95abf15df3b216bc31c2 |
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named the **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If the source datasets used in this corpus have a specific license which is stricter than CC-BY-SA, our products follow the same license. If not, they follow the CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
# Citation
Check out the paper [arXiv:2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
    journal={arXiv preprint arXiv:2301.07597},
year = "2023",
}
``` | Hello-SimpleAI/HC3-Chinese | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxiv:2301.07597",
"region:us"
]
| 2023-01-18T14:20:45+00:00 | {"language": ["en", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "sentence-similarity", "zero-shot-classification"], "tags": ["ChatGPT", "SimpleAI", "Detection", "OOD"]} | 2023-01-21T13:11:49+00:00 |
1b08362748ebeaa8c330a2ea8a77ec548194b977 | # Dataset Card for "boostcamp-docvqa-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ssunbell/boostcamp-docvqa-v2 | [
"region:us"
]
| 2023-01-18T14:27:39+00:00 | {"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 6381793673, "num_examples": 39454}, {"name": "val", "num_bytes": 869361798, "num_examples": 5349}], "download_size": 2578867675, "dataset_size": 7251155471}} | 2023-01-18T14:37:24+00:00 |
ac39b2d465010fa9973aefa4a4559ffd1fd07fe9 |
# Dataset Card for ruMeme Descriptions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 2,500 memes in Russian and their descriptions, collected by parsing https://vk.com/textmeme.
### Supported Tasks and Leaderboards
`text2image` - generate meme from its textual description
`image2text` - generate description of given meme
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Fields
- `Image`: Meme itself at 512 by 512px (image)
- `Text`: Description (str)
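A minimal loading sketch (assuming the repository id from this card, a single `train` split, and PIL-decoded images):
```python
from datasets import load_dataset

ds = load_dataset("foldl/rumeme-desc", split="train")
example = ds[0]
print(example["Text"])             # the Russian description
example["Image"].save("meme.png")  # decoded as a PIL image
```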
### Data Splits
There are not enough examples yet to split it into train/test/validation, in my opinion.
## Dataset Creation
As already mentioned, the data was gathered by parsing https://vk.com/textmeme. | foldl/rumeme-desc | [
"size_categories:1K<n<10K",
"language:ru",
"license:cc-by-sa-4.0",
"ru",
"memes",
"text2image",
"image2text",
"region:us"
]
| 2023-01-18T14:28:37+00:00 | {"language": ["ru"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "pretty_name": "rumeme-desc", "tags": ["ru", "memes", "text2image", "image2text"]} | 2023-01-18T19:31:38+00:00 |
b5b1adff8fbbcdbb1e781f70132a0475bbdee29e | # Dataset Card for "boostcamp-docvqa-v2-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ssunbell/boostcamp-docvqa-v2-test | [
"region:us"
]
| 2023-01-18T14:40:14+00:00 | {"dataset_info": {"features": [{"name": "questionId", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "image", "sequence": {"sequence": {"sequence": "uint8"}}}, {"name": "docId", "dtype": "int64"}, {"name": "ucsf_document_id", "dtype": "string"}, {"name": "ucsf_document_page_no", "dtype": "string"}, {"name": "data_split", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "test", "num_bytes": 843083964, "num_examples": 5188}], "download_size": 296773802, "dataset_size": 843083964}} | 2023-01-18T14:41:30+00:00 |
527191cf7de562f1de121863db951abdd9deaab4 | Manwani/AravalliMountains | [
"license:cc-by-3.0",
"region:us"
]
| 2023-01-18T14:58:59+00:00 | {"license": "cc-by-3.0"} | 2023-01-18T15:04:33+00:00 |
|
b852e960ac5ed4d775014b497014003a171e3ba3 | # Dataset Card for "pc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taldarim/pc | [
"region:us"
]
| 2023-01-18T16:12:32+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "Results interpretation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Frameworks usage", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Algorithms design", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Algorithms implementation", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Launching problem", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Performance issue", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 95159, "num_examples": 58}], "download_size": 50809, "dataset_size": 95159}} | 2023-01-18T16:12:40+00:00 |
fe2ef29cc43f75a4d33430f41e62c319048758a5 | # Dataset Card for "symptoms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taldarim/symptoms | [
"region:us"
]
| 2023-01-18T16:12:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "No results", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of proper plugin choices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Wrong results", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between simulators and devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms design for general functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms implementation for general functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Input data importing failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of supported devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different versions of the operating system", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Software hangs", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of frameworks comparison", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of plugins integration", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of software configuration", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different versions of the plugin", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Plugin loading failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Application running failure", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "No idea of algorithms implementation for runtime functionalities", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Software lags", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Inconsistent results between different devices", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "Device unrecognizable", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 101655, "num_examples": 58}], "download_size": 58604, "dataset_size": 101655}} | 2023-01-18T16:12:47+00:00 |
4d01eaf83da481d4e77877cb6ba7ed10076b2d22 | # Dataset Card for "dreambooth_prior_reg_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_prior_reg_images | [
"region:us"
]
| 2023-01-18T16:21:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44656947.0, "num_examples": 100}], "download_size": 44658302, "dataset_size": 44656947.0}} | 2023-01-18T16:22:02+00:00 |
69bdf4dfd62e3108c06d1d687b16aa28f03d1776 | # Dataset Card for "dreambooth_test_with_prior_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/dreambooth_test_with_prior_reg | [
"region:us"
]
| 2023-01-18T16:26:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 156473412.0, "num_examples": 200}, {"name": "validation", "num_bytes": 37346753.0, "num_examples": 32}], "download_size": 51418519, "dataset_size": 193820165.0}} | 2023-01-18T16:27:05+00:00 |
219fbc0b34adcbbd711f937fdeb6207798b0927c |
# Dataset Card for The Pile
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://pile.eleuther.ai/
- **Repository:** https://github.com/EleutherAI/the-pile
- **Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
- **Leaderboard:**
- **Point of Contact:** [EleutherAI](mailto:[email protected])
**This version of the Pile relies on `mystic.the-eye.eu`, a mirror of `the-eye.eu`, which is currently down for me.**
### Dataset Summary
The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together.
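A minimal streaming sketch with the `datasets` library (assuming this repository id and the `all` config; streaming avoids downloading the full corpus):
```python
from datasets import load_dataset

# Stream to avoid downloading the full 825 GiB; "all" is the combined config
pile = load_dataset("jonatli/the_pile_mystic", "all", split="train", streaming=True)
for example in pile:
    print(example["text"][:100], example["meta"])
    break
```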
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is in English (`EN`).
## Dataset Structure
### Data Instances
#### all
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
#### enron_emails
```
{
'text': 'Name\t\t\tNew Title\t\t\t\tEffective Date\t\t\tMid Year promotion Yes/No\n\nFloyd, Jodie\t\tSr Cust Svc Rep (no change)\t\t7/16/01\t\t\t\tNo\n\nBuehler, Craig\t\tSr Mkt/Sup Analyst (no change)\t\t7/16/01\t\t\t\tNo\n\nWagoner, Mike\t\tTeam Advisor - Gas Control\t\t7/1/01\t\t\t\tNo\n\nClapper, Karen\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nGreaney, Chris\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nWilkens, Jerry\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nMinton, Kevin\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nCox, Don\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nHanagriff, Richard\tSr Accounting Control Spec\t\t8/1/01\t\t\t\tYes\n\n\nThanks,\nMS'
'meta': "{}",
}
```
#### europarl
```
{
'text': 'Uvádění biocidních přípravků na trh - Nový návrh revize týkající se biocidních přípravků (rozprava) \nPředsedající\nDalším bodem je společná rozprava o následujících tématech:\nzpráva paní Sârbuové za Výbor pro životní prostředí, veřejné zdraví a bezpečnost potravin o návrhu...'
'meta': "{'language': 'cs'}",
}
```
#### free_law
```
{
'meta': "{'case_jurisdiction': 'scotus.tar.gz', 'case_ID': '110921.json','date_created': '2010-04-28T17:12:49Z'}",
'text': '\n461 U.S. 238 (1983)\nOLIM ET AL.\nv.\nWAKINEKONA\nNo. 81-1581.\nSupreme Court of United States.\nArgued...'
}
```
#### hacker_news
```
{
'text': "\nChina Deserves Donald Trump - rm2889\nhttps://www.nytimes.com/2019/05/21/opinion/china-trump-trade.html\n======\nNotPaidToPost\n> so he’d be wise to curb his nationalistic “no-one-tells-China-what-to-do”\n> bluster\n\nThis comment highlights both ignorance of Chinese history and continuing\nAmerican arrogance.\n\nChina has been painfully dictated what to do during the last 200 years. This\nhas had a profound effect on the country and has led to the collapse of\nimperial rule and the drive to 'rejuvenate'...",
'meta': "{'id': '19979654'}",
}
```
#### nih_exporter
```
{
'text': "The National Domestic Violence Hotline (NDVH) and the National Dating Abuse Helpline (NDAH), which are supported by the Division of Family Violence Prevention and Services within the Family and Youth Services Bureau, serve as critical partners in the intervention, prevention, and resource assistance efforts of the network of family violence, domestic violence, and dating violence service providers. They provide crisis intervention and support services; information about resources on domestic...",
'meta': " {'APPLICATION_ID': 100065}",
}
```
#### pubmed
```
{
'meta': {'pmid': 11409574, 'language': 'eng'},
'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age. Systematic review of the published literature. Out-patient clinics, emergency departments and hospitalisation wards in 23 health centres from 10 countries. Cohort studies reporting the frequency of hypoxaemia in children under 5 years of age with ALRI, and the association between hypoxaemia and the risk of dying. Prevalence of hypoxaemia measured in children with ARI and relative risks for the association between the severity of illness and the frequency of hypoxaemia, and between hypoxaemia and the risk of dying. Seventeen published studies were found that included 4,021 children under 5 with acute respiratory infections (ARI) and reported the prevalence of hypoxaemia. Out-patient children and those with a clinical diagnosis of upper ARI had a low risk of hypoxaemia (pooled estimate of 6% to 9%). The prevalence increased to 31% and to 43% in patients in emergency departments and in cases with clinical pneumonia, respectively, and it was even higher among hospitalised children (47%) and in those with radiographically confirmed pneumonia (72%). The cumulated data also suggest that hypoxaemia is more frequent in children living at high altitude. Three papers reported an association between hypoxaemia and death, with relative risks varying between 1.4 and 4.6. Papers describing predictors of hypoxaemia have focused on clinical signs for detecting hypoxaemia rather than on identifying risk factors for developing this complication. Hypoxaemia is a common and potentially lethal complication of ALRI in children under 5, particularly among those with severe disease and those living at high altitude. Given the observed high prevalence of hypoxaemia and its likely association with increased mortality, efforts should be made to improve the detection of hypoxaemia and to provide oxygen earlier to more children with severe ALRI.'
}
```
#### pubmed_central
```
{
'meta': "{id': 'PMC5595690'}",
'text': 'Introduction {#acel12642-sec-0001}\n============\n\nAlzheimer\\\'s disease (AD), the most common cause of...'
}
```
#### ubuntu_irc
```
{
'text': "#ubuntu 2004-07-05\n* Window 3\n* \tServer: [0] <None>\n* \tScreen: 0x817e90c\n* \tGeometry Info: [0 11 0 11 11 11] \n* \tCO, LI are [94 49] \n* \tCurrent channel: #ubuntu\n* \tQuery User: <None> \n*\tPrompt: <None>\n* \tSecond status line is OFF\n* \tSplit line is ON triple is OFF\n* \tLogging is ON\n* \tLogfile is irclogs/ubuntu.log\n* \tNotification is OFF\n* \tHold mode is OFF\n* \tWindow level is NONE\n* \tLastlog level is ALL\n* \tNotify level is ALL\n<mdz> lifeless: using tla effectively for all packages in Warty requ...",
'meta': "{'channel': 'ubuntu', 'month': 7}"
}
```
#### uspto
```
{
'text': "1. Field of the Invention\nIn an extensive plant breeding program, Grant Merrill, originator and now deceased, originated a large number of new and distinct varieties of fruit trees, and which included the herein-claimed variety of peach tree. Such plant breeding program was undertaken in originator's experimental orchard located near Exeter, Tulare County, Calif.\n2. Prior Varieties\nAmong the existent varieties of peach trees which were known to originator, particular reference is made to Gemfree (U.S. Plant Pat. No. 1,409) and June Lady (U.S. Plant Pat. No. 3,022) hereinafter mentioned for the purpose of comparison.",
'meta': "{'bibliographic_information': {'Patent Number': 'PP0049700', 'Series Code': '6', 'Application Number': '2845415', 'Application Type': '6', 'Art unit': '337', 'Application Filing Date': '19810720', 'Title of Invention': 'Peach tree (A3-10)', 'Issue Date': '19830104', 'Number of Claims': '1', 'Exemplary Claim Number(s)': '1', 'Primary Examiner': 'Bagwill; Robert E.', 'Number of Drawing Sheets': '1', 'Number of figures': '1'}, 'source_file': 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1983/pftaps19830104_wk01.zip', 'abstract': 'A peach tree which is large, vigorous, and spreading; foliated with large, lanceolate leaves having a finely serrate margin, a petiole of medium length and thickness, and medium size, reniform glands; blooms from medium size, conic, plump, pubescent buds; the flowers, medium in blooming period compared with other varieties, being of medium size, and pink; and is a regular and very productive bearer of medium but variable size, round truncate, clingstone fruit having yellow skin substantially overspread with red, yellow flesh mottled with red adjacent the skin, and an amber stone.', 'classifications': [{'OCL': ['Plt', '43'], 'EDF': ['3'], 'ICL': ['A01H', '503'], 'FSC': ['Plt'], 'FSS': ['43']}], 'inventors': [{'inventor name': 'Merrill, deceased; Grant', 'Street': '325 Breese Ave.', 'City': 'late of Red Bluff', 'State': 'CA'}, {'inventor name': 'Merrill, executrix; by Lucile B.', 'Street': '325 Breese Ave.', 'City': 'Red Bluff', 'State': 'CA', 'Zip code': '96080'}]}"
}
```
### Data Fields
#### all
- `text` (str): Text.
- `meta` (dict): Metadata of the data instance with keys:
- pile_set_name: Name of the subset.
#### enron_emails
- `text` (str): Text.
- `meta` (str): Metadata of the data instance.
#### europarl
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: language.
#### free_law
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.
#### hacker_news
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: id.
#### nih_exporter
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: APPLICATION_ID.
#### pubmed
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: pmid, language.
#### pubmed_central
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: ID of the data instance.
#### ubuntu_irc
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: channel, month.
#### uspto
- `text` (str): Text.
- `meta` (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications,
inventors.
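In most subsets the `meta` field is a string rather than a dict; assuming the Python-literal formatting shown in the examples above, it can be recovered like this:
```python
import ast

meta_str = "{'pmid': 11409574, 'language': 'eng'}"
meta = ast.literal_eval(meta_str)  # parse the stringified metadata
print(meta["language"])  # eng
```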
### Data Splits
The "all" configuration is composed of 3 splits: train, validation and test.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Please refer to the specific license depending on the subset you use:
- PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
```
@misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
year={2020},
eprint={2101.00027},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| jonatli/the_pile_mystic | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2101.00027",
"region:us"
]
| 2023-01-18T16:28:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "the-pile", "pretty_name": "The Pile"} | 2023-01-18T16:31:17+00:00 |
316faf8285d7ff4a4fd96c18129d83dfc3f223ab |
# Dawood Theme
## Description
My Theme!
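A minimal usage sketch, assuming the standard Gradio mechanism for loading Hub themes by repository id:
```python
import gradio as gr

# Gradio can load a theme straight from the Hugging Face Hub by repo id.
with gr.Blocks(theme="dawood/dawood-theme") as demo:
    gr.Markdown("Hello from the dawood theme!")

demo.launch()
```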
## Preview
Add an image preview of your theme here!
## Contributions
Thanks to [@dawood](https://huggingface.co/dawood) for adding this gradio theme!
| dawood/dawood-theme | [
"gradio-theme",
"region:us"
]
| 2023-01-18T16:32:44+00:00 | {"tags": ["gradio-theme"]} | 2023-01-18T16:32:45+00:00 |
920960c8af62a00aa6fefb49adb2904b422353b8 | # Dataset Card for "c4-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ola13/c4-clusters | [
"region:us"
]
| 2023-01-18T17:17:57+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "meta", "struct": [{"name": "perplexity_score", "dtype": "float64"}]}, {"name": "text_length", "dtype": "int64"}, {"name": "domain", "dtype": "null"}, {"name": "perplexity", "dtype": "float64"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "cluster", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1061375955254, "num_examples": 364868892}], "download_size": 137201241092, "dataset_size": 1061375955254}} | 2023-01-20T13:22:45+00:00 |
e21a65a60de0d1d1ba8ab44c0afc832dd1b48bc2 |
# Dataset Card for scnclab2023
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
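While this section is still to be filled in, the repository metadata declares `tokens` and `ner_tags` features with BIO-style labels (e.g. `B-diagnosis`, `I-treatment`). A minimal inspection sketch — the feature names come from the metadata; the split name is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("relevanthint/scnclab2023", split="train")  # split name assumed

labels = ds.features["ner_tags"].feature.names  # "O", "B-allergies", "I-allergies", ...
example = ds[0]

# Pair each token with its human-readable BIO tag.
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{labels[tag_id]}")
```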
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
The dataset was created with the GPT-3 API by prompting it with a set of manually written clinical notes.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Annotation was performed using [Argilla](https://github.com/argilla-io).
#### Who are the annotators?
The synthetic clinical notes were annotated by a group of three biomedical experts.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Note that these clinical notes are synthetic, generated by a language model, and do not describe real patients.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | relevanthint/scnclab2023 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"bio",
"clinic",
"cancer",
"region:us"
]
| 2023-01-18T18:34:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "scnclab2023", "pretty_name": "Synthetical Clinical Notes - Clab 2023", "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-allergies", "2": "I-allergies", "3": "B-biomarkers", "4": "I-biomarkers", "5": "B-cancer_symptoms", "6": "I-cancer_symptoms", "7": "B-cancer_type", "8": "I-cancer_type", "9": "B-date", "10": "I-date", "11": "B-diagnosis", "12": "I-diagnosis", "13": "B-gender", "14": "I-gender", "15": "B-imaging_options", "16": "I-imaging_options", "17": "B-test_result", "18": "I-test_result", "19": "B-treatment", "20": "I-treatment"}}}}]}, "tags": ["bio", "clinic", "cancer"]} | 2023-01-19T22:35:17+00:00 |
acfabcad7a4ad9046bc9494240eba44ff6724916 | # Dataset Card for "rick-and-morty-all-seasons-v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | storia/rick-and-morty-all-seasons | [
"region:us"
]
| 2023-01-18T18:45:36+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "subtitle", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "characters", "dtype": "string"}, {"name": "frame", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1637895252.464, "num_examples": 15264}, {"name": "test", "num_bytes": 5458443.0, "num_examples": 46}], "download_size": 1363032355, "dataset_size": 1643353695.464}} | 2023-01-18T18:46:10+00:00 |
32cef4e92bd2e27a4423d089b9554adb575d9ea6 | # Dataset Card for "illustrated_ads_images_labels_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/illustrated_ads_images_labels_only | [
"size_categories:n<1K",
"region:us"
]
| 2023-01-18T20:42:43+00:00 | {"size_categories": ["n<1K"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "text-only", "1": "illustrations"}}}}], "splits": [{"name": "train", "num_bytes": 47581375, "num_examples": 549}], "download_size": 47599430, "dataset_size": 47581375}} | 2023-01-18T20:49:56+00:00 |
5f3ad8a9d484ac56f9423f5076009105fcbd96ab | kweyamba/lunas-set | [
"task_categories:table-question-answering",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"inventory",
"price",
"expiration",
"medicine",
"region:us"
]
| 2023-01-18T21:08:31+00:00 | {"language": ["en"], "license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["table-question-answering", "question-answering"], "pretty_name": "lunas", "tags": ["inventory", "price", "expiration", "medicine"]} | 2023-01-19T09:08:11+00:00 |
|
5007b08f0ba5a4f93bb4f7e1654711b376830cd1 | # Dataset Card for "mls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/mls | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
]
| 2023-01-18T22:16:12+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2690661, "num_examples": 142}], "download_size": 1117834, "dataset_size": 2690661}, "tags": ["whisper", "whispering", "medium"]} | 2023-01-24T13:51:58+00:00 |
67f7d06b0e302380e5865c74bfa319dcfeca61e4 | # Dataset Card for "legal_dataset2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcus2000/legal_dataset2023 | [
"region:us"
]
| 2023-01-18T22:23:23+00:00 | {"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 110824374, "num_examples": 1723}, {"name": "test", "num_bytes": 21065187, "num_examples": 306}], "download_size": 41312472, "dataset_size": 131889561}} | 2023-01-18T22:31:59+00:00 |
772a1acf05ee05d3c38f3e4f173c25b2b11d1b8c | # Dataset Card for "olm-wikipedia-20221220-1-percent-tokenized-766"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221220-1-percent-tokenized-766 | [
"region:us"
]
| 2023-01-18T22:33:22+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 300178944, "num_examples": 65143}], "download_size": 93964466, "dataset_size": 300178944}} | 2023-01-18T22:33:27+00:00 |
a98c94fac808ebea2f7c871631f960ebf0ca1a1b |
# Dataset Card for genius-lyrics
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of roughly 480k English lyrics (language classified with the NLTK language classifier) together with additional metadata. The metadata was taken from the Million Playlist Challenge at AIcrowd. The lyrics were crawled by song and artist name with the `lyricsgenius` Python package. There is no guarantee that the lyrics are correct, although the data was cleaned and verified: each crawled lyric comes with a song name in its payload, and if that name and ours do not match (using `fuzzywuzzy` string matching with a score under 60), the entry was excluded from this set of lyrics. Some lyrics may still be wrong due to the nature of the data.
49,985 rows have a list of genres retrieved from the official Spotify API. These genres belong to the song's artist, since Spotify does not provide genres for individual songs.
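A sketch of the title-matching filter described above — the `fuzzywuzzy` package and the score threshold of 60 come from this card, while the scoring function and argument names are assumptions:
```python
from fuzzywuzzy import fuzz

def keep_lyrics(requested_title: str, payload_title: str, threshold: int = 60) -> bool:
    """Keep crawled lyrics only if the payload title matches the requested song."""
    return fuzz.ratio(requested_title.lower(), payload_title.lower()) >= threshold

# A close variant passes; an unrelated title is excluded.
print(keep_lyrics("Bohemian Rhapsody", "Bohemian Rhapsody (Live)"))  # True
print(keep_lyrics("Bohemian Rhapsody", "Radio Ga Ga"))               # False
```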
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | brunokreiner/genius-lyrics | [
"region:us"
]
| 2023-01-18T22:39:24+00:00 | {} | 2023-03-07T21:57:02+00:00 |
3109be33b36f6282aa9fbc36d6841c3e40cb614c | # FairFace (val set)
Original paper: [Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf)
Homepage: https://github.com/joojs/fairface
Bibtex:
```
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year={2021},
pages={1548--1558}
}
``` | nlphuji/fairface_val_padding_025 | [
"region:us"
]
| 2023-01-18T22:46:25+00:00 | {} | 2023-01-18T22:57:00+00:00 |
a4496d6922555370b95ffd53d4a418e832cfc771 | # Dataset Card for "pexel_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/pexel_images | [
"region:us"
]
| 2023-01-18T23:42:55+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27590932.0, "num_examples": 80}], "download_size": 27589857, "dataset_size": 27590932.0}} | 2023-01-18T23:43:07+00:00 |
c764741cdbf1a67ea7b3659988f04f007b89b2dc | # Dataset Card for "pexel_images_prior_reg_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/pexel_images_prior_reg_images | [
"region:us"
]
| 2023-01-19T00:04:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 459796785.0, "num_examples": 1000}], "download_size": 459802313, "dataset_size": 459796785.0}} | 2023-01-19T01:41:04+00:00 |
e58b4e08b0b07b8df5615ee6dcad7019345538e9 | # Dataset Card for "portrait_dreambooth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/portrait_dreambooth | [
"region:us"
]
| 2023-01-19T00:17:14+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123049169.0, "num_examples": 286}, {"name": "validation", "num_bytes": 7122908.0, "num_examples": 20}], "download_size": 123406667, "dataset_size": 130172077.0}} | 2023-01-19T00:17:49+00:00 |
09555eaa60091bd3f6df8cbd74a4f976df14fe6d | # Dataset Card for "pexel_images_lots"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/pexel_images_lots | [
"region:us"
]
| 2023-01-19T00:57:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2466579957.125, "num_examples": 7999}], "download_size": 2418558487, "dataset_size": 2466579957.125}} | 2023-01-19T21:21:23+00:00 |
5d4ccb4d4ad68f90e3f12c48ab2686b2a6d7b482 | dyllanwli/dataproduct_metadata_tqa | [
"license:apache-2.0",
"region:us"
]
| 2023-01-19T00:59:32+00:00 | {"license": "apache-2.0"} | 2023-01-19T00:59:32+00:00 |
|
bc857fc726710c8a4e43362ccf6259d3129213cf | annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: Tech Channels Metadata
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- youtube
- video
- video metadata
- tech
- science and tech
task_categories:
- other
task_ids: []
| alexignite/YouTube_channel_data | [
"region:us"
]
| 2023-01-19T02:15:52+00:00 | {} | 2023-01-19T16:04:04+00:00 |