Column schema of this dump (field name, type, and min/max observed lengths):

| Column          | Type   | Min length | Max length |
|:----------------|:-------|:-----------|:-----------|
| sha             | string | 40         | 40         |
| text            | string | 1          | 13.4M      |
| id              | string | 2          | 117        |
| tags            | list   | 1          | 7.91k      |
| created_at      | string | 25         | 25         |
| metadata        | string | 2          | 875k       |
| last_modified   | string | 25         | 25         |
| arxiv           | list   | 0          | 25         |
| languages       | list   | 0          | 7.91k      |
| tags_str        | string | 17         | 159k       |
| text_str        | string | 1          | 447k       |
| text_lists      | list   | 0          | 352        |
| processed_texts | list   | 1          | 353        |
| tokens_length   | list   | 1          | 353        |
| input_texts     | list   | 1          | 40         |
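Given the schema above, here is a minimal sketch of inspecting one row with the `datasets` library; the repo id below is a placeholder, since the dump's actual Hub name is not given here:

```python
# Hypothetical loading sketch: substitute the real repo id of this dump.
import json
from datasets import load_dataset

ds = load_dataset("someuser/dataset-cards-dump", split="train")  # placeholder repo id
row = ds[0]
print(row["id"], row["created_at"])  # e.g. 'CyberHarem/izumi_pokemon', an ISO-8601 timestamp
print(row["tags"])                   # Hub tags such as 'license:mit', 'region:us'
print(json.loads(row["metadata"]))   # card front matter stored as a JSON string
print(row["text"][:200])             # start of the dataset card markdown
```

The rows of the dump follow.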
47ce8d535c88f453e4178f67c036f7dc4e9ce223
|
# Dataset of izumi (Pokémon)
This is the dataset of izumi (Pokémon), containing 18 images and their tags.
The core tags of this character are `black_hair, blue_eyes, long_hair, blue_hair, dark-skinned_female, dark_skin, multicolored_hair, breasts, goggles_on_head, two-tone_hair, hair_over_one_eye, eyeshadow, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 18 | 17.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izumi_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 18 | 10.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izumi_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 40 | 20.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izumi_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 18 | 15.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izumi_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 40 | 28.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izumi_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
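Any single package in the table can also be fetched directly with `hf_hub_download`; here is a minimal sketch for the 800px IMG+TXT package (the filename matches the download URL above):

```python
# Download and extract the 800px IMG+TXT package of this dataset.
import zipfile
from huggingface_hub import hf_hub_download

zip_file = hf_hub_download(
    repo_id='CyberHarem/izumi_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall('izumi_800')
```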
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/izumi_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
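For the IMG+TXT packages, which carry no waifuc meta information, a sketch of reading the tags, assuming the usual layout of one comma-separated `.txt` tag file per image:

```python
# Iterate an extracted IMG+TXT package and print each image's tag list.
# Assumes each image has a same-named .txt sidecar with comma-separated tags.
import os

package_dir = 'izumi_800'  # directory from the extraction sketch above
for name in sorted(os.listdir(package_dir)):
    if name.endswith('.txt'):
        with open(os.path.join(package_dir, name), encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',')]
        print(name, tags)
```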
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, goggles, solo, blush, nipples, smile, lipstick, navel |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | goggles | solo | blush | nipples | smile | lipstick | navel |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:-------|:--------|:----------|:--------|:-----------|:--------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X |
|
CyberHarem/izumi_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T19:52:12+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:08:36+00:00
|
[] |
[] |
84b0b39f3011ae97e1c9ccfcf5a0053ea6f8c867
|
# Dataset of asada_shino (Sword Art Online)
This is the dataset of asada_shino (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
CyberHarem/asada_shino_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T19:53:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:50+00:00
|
[] |
[] |
90f95895618a12d28ba99e9bff0c211920d488ec
|
Stanford Alpaca Turkish: a Turkish version of [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
|
TFLai/Turkish-Alpaca
|
[
"license:apache-2.0",
"region:us"
] |
2023-08-16T20:08:40+00:00
|
{"license": "apache-2.0"}
|
2023-11-04T11:16:33+00:00
|
[] |
[] |
ffcc38be7eb7c61380fe8aeeceac2632e6f5d72a
|
# Dataset of blue (Pokémon)
This is the dataset of blue (Pokémon), containing 97 images and their tags.
The core tags of this character are `brown_hair, green_eyes, spiked_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 97 | 56.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/blue_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 97 | 49.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/blue_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 153 | 78.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/blue_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 97 | 55.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/blue_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 153 | 87.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/blue_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/blue_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1boy, holding_poke_ball, male_focus, solo, necklace, poke_ball_(basic), jacket, smile |
| 1 | 5 |  |  |  |  |  | 1boy, bangs, grin, male_focus, necklace, short_hair, brown_eyes, long_sleeves, pokemon_(creature), purple_shirt, teeth, jacket, pants, poke_ball, boots, brown_footwear, holding, one_eye_closed, solo |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | holding_poke_ball | male_focus | solo | necklace | poke_ball_(basic) | jacket | smile | bangs | grin | short_hair | brown_eyes | long_sleeves | pokemon_(creature) | purple_shirt | teeth | pants | poke_ball | boots | brown_footwear | holding | one_eye_closed |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------------------|:-------------|:-------|:-----------|:--------------------|:---------|:--------|:--------|:-------|:-------------|:-------------|:---------------|:---------------------|:---------------|:--------|:--------|:------------|:--------|:-----------------|:----------|:-----------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/blue_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T20:15:36+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:33:10+00:00
|
[] |
[] |
e1e1ccf3dab8461436a20e103ac173dcb8152809
|
# Dataset Card for "eng-conversations_no-tokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aimona/eng-conversations_no-tokenizer
|
[
"region:us"
] |
2023-08-16T20:16:47+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "instructions", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 647195713, "num_examples": 30052}], "download_size": 247595314, "dataset_size": 647195713}}
|
2023-08-16T20:17:31+00:00
|
[] |
[] |
146a370f71c987ebed14526ae1e325c46a537865
|
# Dataset Card for "toy_dataset_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rookshanks/toy_dataset_gold
|
[
"region:us"
] |
2023-08-16T20:24:17+00:00
|
{"dataset_info": {"features": [{"name": "question", "sequence": "int64"}, {"name": "answer", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 127776, "num_examples": 1000}, {"name": "validation", "num_bytes": 125168, "num_examples": 1000}, {"name": "test", "num_bytes": 129120, "num_examples": 1000}], "download_size": 22445, "dataset_size": 382064}}
|
2023-08-16T20:24:21+00:00
|
[] |
[] |
0b401455f2fcf9d1d46fac8aa148f0afaa9ef10b
|
# Dataset of quinella (Sword Art Online)
This is the dataset of quinella (Sword Art Online), containing 152 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
CyberHarem/quinella_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T20:33:10+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:54+00:00
|
[] |
[] |
fd8e9e436afe2c9bfa68ee77a7ebe67056743150
|
# Dataset of tiese_shtolienen (Sword Art Online)
This is the dataset of tiese_shtolienen (Sword Art Online), containing 71 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
CyberHarem/tiese_shtolienen_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T20:50:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:56+00:00
|
[] |
[] |
86cf821deb9a7da708bae0849d2fd63b1f118d94
|
# Dataset of sortiliena_serlut (Sword Art Online)
This is the dataset of sortiliena_serlut (Sword Art Online), containing 44 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
CyberHarem/sortiliena_serlut_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T20:59:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:58+00:00
|
[] |
[] |
91394fff494ac44d69f48f66b239e51f86a42081
|
# Dataset of eureka/ユリーカ (Pokémon)
This is the dataset of eureka/ユリーカ (Pokémon), containing 146 images and their tags.
The core tags of this character are `blonde_hair, blue_eyes, short_hair, side_ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 146 | 88.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eureka_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 146 | 72.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eureka_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 248 | 119.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eureka_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 146 | 86.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eureka_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 248 | 138.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eureka_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/eureka_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | bangs, ahoge, male_focus, 1boy, glasses, jumpsuit, long_sleeves, medium_hair, open_mouth, smile, tongue, pokemon_(creature), shoes, holding |
| 1 | 20 |  |  |  |  |  | 1girl, open_mouth, :d, pokemon_(creature), tongue, eyelashes, brown_shirt, bike_shorts, holding, bare_arms, pink_footwear, shoes, standing, blush, white_skirt, looking_at_viewer, sleeveless_shirt, teeth |
| 2 | 6 |  |  |  |  |  | 1girl, open_mouth, smile, bag, bike_shorts, pokemon_(creature), solo |
| 3 | 5 |  |  |  |  |  | 2girls, smile, open_mouth, bike_shorts, barefoot, blush, solo_focus |
| 4 | 6 |  |  |  |  |  | nipples, 1girl, loli, navel, open_mouth, small_breasts, smile, water, nude, blush, pussy, solo, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | bangs | ahoge | male_focus | 1boy | glasses | jumpsuit | long_sleeves | medium_hair | open_mouth | smile | tongue | pokemon_(creature) | shoes | holding | 1girl | :d | eyelashes | brown_shirt | bike_shorts | bare_arms | pink_footwear | standing | blush | white_skirt | looking_at_viewer | sleeveless_shirt | teeth | bag | solo | 2girls | barefoot | solo_focus | nipples | loli | navel | small_breasts | water | nude | pussy | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:----------|:-----------|:---------------|:--------------|:-------------|:--------|:---------|:---------------------|:--------|:----------|:--------|:-----|:------------|:--------------|:--------------|:------------|:----------------|:-----------|:--------|:--------------|:--------------------|:-------------------|:--------|:------|:-------|:---------|:-----------|:-------------|:----------|:-------|:--------|:----------------|:--------|:-------|:--------|:-------------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 20 |  |  |  |  |  | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | | | | | | | | | X | X | | X | | | X | | | | X | | | | | | | | | X | X | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | | | | | | | | | X | X | | | | | | | | | X | | | | X | | | | | | | X | X | X | | | | | | | | |
| 4 | 6 |  |  |  |  |  | | | | | | | | | X | X | | | | | X | | | | | | | | X | | | | | | X | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/eureka_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:10:19+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T12:31:06+00:00
|
[] |
[] |
92df702aad0fb5984a6d9db7d1a04e8f79ac228a
|
# Dataset of lajournee/ラジュルネ (Pokémon)
This is the dataset of lajournee/ラジュルネ (Pokémon), containing 32 images and their tags.
The core tags of this character are `hat, breasts, long_hair, red_eyes, pink_hair, red_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 32 | 27.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lajournee_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 32 | 17.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lajournee_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 64 | 33.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lajournee_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 32 | 25.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lajournee_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 64 | 43.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lajournee_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/lajournee_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 19 |  |  |  |  |  | 1girl, bare_shoulders, elbow_gloves, solo, dress, looking_at_viewer, black_gloves, pantyhose, eyelashes, open_mouth, brown_hair, holding_poke_ball, pink_headwear, poke_ball_(basic), top_hat, brown_eyes, hand_on_hip |
| 1 | 11 |  |  |  |  |  | 1girl, blush, hetero, penis, solo_focus, 1boy, censored, nipples, open_mouth, large_breasts, sex, pantyhose, pussy, vaginal, bare_shoulders, elbow_gloves, handjob, nude |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | elbow_gloves | solo | dress | looking_at_viewer | black_gloves | pantyhose | eyelashes | open_mouth | brown_hair | holding_poke_ball | pink_headwear | poke_ball_(basic) | top_hat | brown_eyes | hand_on_hip | blush | hetero | penis | solo_focus | 1boy | censored | nipples | large_breasts | sex | pussy | vaginal | handjob | nude |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:-------|:--------|:--------------------|:---------------|:------------|:------------|:-------------|:-------------|:--------------------|:----------------|:--------------------|:----------|:-------------|:--------------|:--------|:---------|:--------|:-------------|:-------|:-----------|:----------|:----------------|:------|:--------|:----------|:----------|:-------|
| 0 | 19 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/lajournee_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:15:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:55:00+00:00
|
[] |
[] |
f57673a6ca64496d2087e9e5eae12a09dac7d520
|
# Dataset of team_rocket_underling (Pokémon)
This is the dataset of team_rocket_underling (Pokémon), containing 42 images and their tags.
The core tags of this character are `hat, breasts, black_headwear, green_hair, green_eyes, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 42 | 25.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/team_rocket_underling_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 42 | 18.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/team_rocket_underling_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 103 | 37.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/team_rocket_underling_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 42 | 24.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/team_rocket_underling_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 103 | 45.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/team_rocket_underling_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/team_rocket_underling_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 42 |  |  |  |  |  | 1girl, grey_gloves, solo, belt, simple_background, blush, open_mouth, black_dress, white_background, looking_at_viewer, thighhighs, smile, poke_ball_(basic) |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | grey_gloves | solo | belt | simple_background | blush | open_mouth | black_dress | white_background | looking_at_viewer | thighhighs | smile | poke_ball_(basic) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------|:-------|:--------------------|:--------|:-------------|:--------------|:-------------------|:--------------------|:-------------|:--------|:--------------------|
| 0 | 42 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/team_rocket_underling_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:18:46+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T22:10:48+00:00
|
[] |
[] |
aa952f26a8b4026ea4e210e2a77edf1171d529d8
|
# Dataset of anzu/アンズ (Pokémon)
This is the dataset of anzu/アンズ (Pokémon), containing 82 images and their tags.
The core tags of this character are `purple_hair, purple_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 82 | 52.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzu_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 82 | 37.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzu_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 138 | 62.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzu_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 82 | 48.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzu_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 138 | 77.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzu_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/anzu_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be minable from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blush, ninja, closed_mouth, looking_at_viewer, solo, purple_scarf, smile |
| 1 | 7 |  |  |  |  |  | 1girl, ninja, scarf, fishnets, ponytail, black_hair, japanese_clothes, pokemon_(creature) |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | ninja | closed_mouth | looking_at_viewer | solo | purple_scarf | smile | scarf | fishnets | ponytail | black_hair | japanese_clothes | pokemon_(creature) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:---------------|:--------------------|:-------|:---------------|:--------|:--------|:-----------|:-----------|:-------------|:-------------------|:---------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | X | | | | | | X | X | X | X | X | X |
|
CyberHarem/anzu_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:26:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:27:34+00:00
|
[] |
[] |
808e89902a7eac4fc28c22cf177144d814872acd
|
# Wikipedia English dataset for text-generation models
A dataset I made from another dataset, using some Python scripts to format it for text-generation models.
|
metaltiger775/text-generation-wikipedia
|
[
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:eng",
"license:mit",
"text generation",
"dataset",
"nlp",
"region:us"
] |
2023-08-16T21:39:15+00:00
|
{"language": ["eng"], "license": "mit", "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "pretty_name": "Wikipedia Dataset for Test Generation", "tags": ["text generation", "dataset", "nlp"], "dataset_info": [{"config_name": 1, "features": [{"sentence": "string"}, {"next_word": "string"}]}], "viewer": true}
|
2023-08-17T23:25:22+00:00
|
[] |
[
"eng"
] |
28547b13160c8038950b469b1e207ffd41580162
|
# Dataset of langley/ラングレー (Pokémon)
This is the dataset of langley/ラングレー (Pokémon), containing 101 images and their tags.
The core tags of this character are `hat, short_hair, green_eyes, breasts, pink_hair, red_hair, cabbie_hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 101 | 71.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/langley_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 101 | 47.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/langley_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 202 | 90.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/langley_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 101 | 65.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/langley_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 202 | 117.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/langley_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/langley_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
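The same snippet can fetch any other package from the table above; only the `filename` argument changes. A minimal sketch for the 800-pixel IMG+TXT bundle:
```python
from huggingface_hub import hf_hub_download

# Same call as above; swapping the filename selects a different
# package from the table, e.g. the 800px IMG+TXT bundle.
zip_file = hf_hub_download(
    repo_id='CyberHarem/langley_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
```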
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, gloves, skirt, solo, boots, green_footwear, short_sleeves, full_body, looking_at_viewer, smile, yellow_headwear, black_choker, blue_eyes, green_vest, hand_on_hip, standing, zettai_ryouiki, blush, collarbone, green_thighhighs, hair_between_eyes, shirt, simple_background |
| 1 | 5 |  |  |  |  |  | 1girl, gloves, solo, blush, choker, navel, nipples, panties, thighhighs, large_breasts, smile |
| 2 | 17 |  |  |  |  |  | hetero, 1girl, sex, blush, penis, vaginal, thighhighs, 1boy, choker, gloves, nipples, solo_focus, nude, open_mouth, sweat, large_breasts, navel, spread_legs, boots, cum_in_pussy, mosaic_censoring, tears |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | gloves | skirt | solo | boots | green_footwear | short_sleeves | full_body | looking_at_viewer | smile | yellow_headwear | black_choker | blue_eyes | green_vest | hand_on_hip | standing | zettai_ryouiki | blush | collarbone | green_thighhighs | hair_between_eyes | shirt | simple_background | choker | navel | nipples | panties | thighhighs | large_breasts | hetero | sex | penis | vaginal | 1boy | solo_focus | nude | open_mouth | sweat | spread_legs | cum_in_pussy | mosaic_censoring | tears |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:--------|:-------|:--------|:-----------------|:----------------|:------------|:--------------------|:--------|:------------------|:---------------|:------------|:-------------|:--------------|:-----------|:-----------------|:--------|:-------------|:-------------------|:--------------------|:--------|:--------------------|:---------|:--------|:----------|:----------|:-------------|:----------------|:---------|:------|:--------|:----------|:-------|:-------------|:-------|:-------------|:--------|:--------------|:---------------|:-------------------|:--------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | X | | | | | | X | | | | | | | | X | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 17 |  |  |  |  |  | X | X | | | X | | | | | | | | | | | | | X | | | | | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/langley_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:42:27+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T12:20:46+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of langley/ラングレー (Pokémon)
==================================
This is the dataset of langley/ラングレー (Pokémon), containing 101 images and their tags.
The core tags of this character are 'hat, short\_hair, green\_eyes, breasts, pink\_hair, red\_hair, cabbie\_hat', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
43b3ba8e19762d904f158441014c31815f9cef02
|
# Dataset of lematin/ルミタン (Pokémon)
This is the dataset of lematin/ルミタン (Pokémon), containing 28 images and their tags.
The core tags of this character are `breasts, green_hair, hat, green_eyes, drill_hair, large_breasts, long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 28 | 16.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lematin_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 28 | 13.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lematin_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 54 | 22.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lematin_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 28 | 16.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lematin_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 54 | 27.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lematin_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/lematin_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bare_shoulders, elbow_gloves, cleavage, solo, dress, huge_breasts, smile, simple_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | elbow_gloves | cleavage | solo | dress | huge_breasts | smile | simple_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:-----------|:-------|:--------|:---------------|:--------|:--------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
CyberHarem/lematin_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:46:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:53:33+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of lematin/ルミタン (Pokémon)
=================================
This is the dataset of lematin/ルミタン (Pokémon), containing 28 images and their tags.
The core tags of this character are 'breasts, green\_hair, hat, green\_eyes, drill\_hair, large\_breasts, long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d2a3c213263b6ab8219ef977838bfb656e044116
|
# Dataset of touko/トウコ (Pokémon)
This is the dataset of touko/トウコ (Pokémon), containing 500 images and their tags.
The core tags of this character are `brown_hair, long_hair, blue_eyes, hat, baseball_cap, sidelocks, breasts, high_ponytail, ponytail, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 602.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/touko_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 345.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/touko_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1237 | 733.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/touko_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 533.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/touko_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1237 | 1.01 GiB | [Download](https://huggingface.co/datasets/CyberHarem/touko_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/touko_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, completely_nude, looking_at_viewer, nipples, outdoors, pussy, solo, ass, looking_back, blush, shiny_skin, sweat, night_sky, star_(sky), antenna_hair, anus, from_behind, grin, teeth, public_indecency, wristband |
| 1 | 18 |  |  |  |  |  | 1girl, denim_shorts, short_shorts, wristband, looking_at_viewer, solo, white_shirt, black_vest, simple_background, sleeveless_shirt, poke_ball_(basic), smile, white_background, closed_mouth, holding_poke_ball, exposed_pocket, ass, bag, looking_back, blue_shorts, cutoffs, from_behind |
| 2 | 5 |  |  |  |  |  | 1girl, black_vest, sleeveless_shirt, solo, white_shirt, closed_mouth, holding_poke_ball, poke_ball_(basic), upper_body, wristband, collarbone, eyelashes, looking_at_viewer, white_background |
| 3 | 12 |  |  |  |  |  | 1girl, boots, denim_shorts, short_shorts, wristband, black_vest, sleeveless_shirt, white_shirt, full_body, black_footwear, standing, blue_shorts, solo, bag, looking_at_viewer, smile |
| 4 | 5 |  |  |  |  |  | 1girl, denim_shorts, holding_poke_ball, poke_ball_(basic), pokemon_(creature), vest, wristband, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | completely_nude | looking_at_viewer | nipples | outdoors | pussy | solo | ass | looking_back | blush | shiny_skin | sweat | night_sky | star_(sky) | antenna_hair | anus | from_behind | grin | teeth | public_indecency | wristband | denim_shorts | short_shorts | white_shirt | black_vest | simple_background | sleeveless_shirt | poke_ball_(basic) | smile | white_background | closed_mouth | holding_poke_ball | exposed_pocket | bag | blue_shorts | cutoffs | upper_body | collarbone | eyelashes | boots | full_body | black_footwear | standing | pokemon_(creature) | vest |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:--------------------|:----------|:-----------|:--------|:-------|:------|:---------------|:--------|:-------------|:--------|:------------|:-------------|:---------------|:-------|:--------------|:-------|:--------|:-------------------|:------------|:---------------|:---------------|:--------------|:-------------|:--------------------|:-------------------|:--------------------|:--------|:-------------------|:---------------|:--------------------|:-----------------|:------|:--------------|:----------|:-------------|:-------------|:------------|:--------|:------------|:-----------------|:-----------|:---------------------|:-------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 18 |  |  |  |  |  | X | | X | | | | X | X | X | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | | | | X | | | | | | | | | | | | | | X | | | X | X | | X | X | | X | X | X | | | | | X | X | X | | | | | | |
| 3 | 12 |  |  |  |  |  | X | | X | | | | X | | | | | | | | | | | | | | X | X | X | X | X | | X | | X | | | | | X | X | | | | | X | X | X | X | | |
| 4 | 5 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | X | X | | | | | | X | X | | | X | | | | | | | | | | | | X | X |
|
CyberHarem/touko_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T21:54:38+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:54:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of touko/トウコ (Pokémon)
==============================
This is the dataset of touko/トウコ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, blue\_eyes, hat, baseball\_cap, sidelocks, breasts, high\_ponytail, ponytail, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
753b14068682ac5fdf3de47dc1b695bc8f26331d
|
# Dataset Card for "TALI"
## Table of Contents
1. Dataset Description
1. Abstract
2. Brief Description
2. Dataset Information
1. Modalities
2. Dataset Variants
3. Dataset Statistics
4. Data Fields
5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search YouTube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT images and the YouTube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio, and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
## Usage:
To get started with TALI, you can load the dataset via Hugging Face's `datasets` library through our helper functions. The reason we don't use `datasets` directly is that we found huggingface_hub downloads to be much faster and more reliable. For a full set of possible configurations, look at [examples.py](examples.py). Here's a basic usage example:
First install the tali package:
### Installation
For the default install use:
```bash
pip install git+https://github.com/AntreasAntoniou/TALI
```
For the dev install use:
```bash
# quotes keep the shell from glob-expanding the brackets
pip install "git+https://github.com/AntreasAntoniou/TALI[dev]"
```
Then use the dataset as follows:
### Examples
Import relevant helper functions
```python
import pathlib
import torch
from tqdm.auto import tqdm
from tali.data import (
SubModalityTypes,
TALIBaseTransform,
TALIBaseTransformConfig,
VideoFramesFormat,
default_transforms,
load_dataset_via_hub,
)
```
#### TALI with default transforms (CLIP and Whisper) and no streaming
```python
def tali_with_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
            if isinstance(value, torch.Tensor):  # tensors expose .shape
                print(key, value.shape)
            elif hasattr(value, "shape"):  # other array-likes, e.g. numpy
                print(key, value.shape)
            elif hasattr(value, "__len__"):  # strings, PIL frame lists, raw audio
                print(key, len(value))
print(key, type(value))
break
```
#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
            if isinstance(value, torch.Tensor):  # tensors expose .shape
                print(key, value.shape)
            elif hasattr(value, "shape"):  # other array-likes, e.g. numpy
                print(key, value.shape)
            elif hasattr(value, "__len__"):  # strings, PIL frame lists, raw audio
                print(key, len(value))
print(key, type(value))
break
```
#### TALI with default transforms and streaming
```python
def tali_with_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
            if isinstance(value, torch.Tensor):  # tensors expose .shape
                print(key, value.shape)
            elif hasattr(value, "shape"):  # other array-likes, e.g. numpy
                print(key, value.shape)
            elif hasattr(value, "__len__"):  # strings, PIL frame lists, raw audio
                print(key, len(value))
print(key, type(value))
break
```
#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
            if isinstance(value, torch.Tensor):  # tensors expose .shape
                print(key, value.shape)
            elif hasattr(value, "shape"):  # other array-likes, e.g. numpy
                print(key, value.shape)
            elif hasattr(value, "__len__"):  # strings, PIL frame lists, raw audio
                print(key, len(value))
print(key, type(value))
break
```
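Note that none of the four snippets above actually invokes its function. Assuming a local directory with enough free space (the path below is hypothetical), an invocation could look like:
```python
# Hypothetical entry point; each of the four example functions is
# called the same way, with a local storage path for the dataset.
if __name__ == "__main__":
    tali_with_transforms_streaming("/data/tali")
```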
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or the page_title as a source query to search YouTube for videos that were licensed under Creative Commons and not age-restricted. The top 100 result titles were returned and compared with the source query using the text embeddings of the largest CLIP model available. The top-1 title's video under this CLIP ranking was chosen and downloaded. Each video was broken into 30-second segments, and the top-10 segments for each video were chosen based on the distance between the CLIP image embedding of the first frame of each segment and the CLIP text embedding of the video's title. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second clip is chosen out of the 30-second segment. The result is 200 video frames (spread throughout the 10-second clip) and 160,000 audio frames (10 seconds at 16 kHz).
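As a rough illustration of the title-ranking step described above, here is a minimal sketch using Hugging Face `transformers`. The checkpoint name and the helper function are assumptions for illustration; the card only says "the largest CLIP model available":
```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical checkpoint; the card does not name the exact CLIP model.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def rank_titles(source_query: str, titles: list) -> str:
    """Return the candidate title closest to the query in CLIP text space."""
    inputs = processor(text=[source_query] + titles,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        emb = model.get_text_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)   # unit-normalize for cosine similarity
    sims = emb[1:] @ emb[0]                      # query vs. each candidate title
    return titles[int(sims.argmax())]
```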
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks, including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
## Dataset Curators: Antreas Antoniou
Citation Information: TBA
Contributions: Thanks to all contributors including data curators, annotators, and software developers.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Antreas/TALI
|
[
"task_categories:zero-shot-classification",
"size_categories:1M<n<10M",
"license:cc-by-4.0",
"video",
"audio",
"text",
"image",
"tetramodal",
"multimodal",
"youtube",
"wikipedia",
"region:us"
] |
2023-08-16T21:59:13+00:00
|
{"license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["zero-shot-classification"], "pretty_name": "TALI", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_url", "dtype": "string"}, {"name": "item_idx", "dtype": "int64"}, {"name": "wit_features", "struct": [{"name": "attribution_passes_lang_id", "sequence": "bool"}, {"name": "caption_alt_text_description", "sequence": "string"}, {"name": "caption_reference_description", "sequence": "string"}, {"name": "caption_title_and_reference_description", "sequence": "string"}, {"name": "context_page_description", "sequence": "string"}, {"name": "context_section_description", "sequence": "string"}, {"name": "hierarchical_section_title", "sequence": "string"}, {"name": "is_main_image", "sequence": "bool"}, {"name": "language", "sequence": "string"}, {"name": "page_changed_recently", "sequence": "bool"}, {"name": "page_title", "sequence": "string"}, {"name": "page_url", "sequence": "string"}, {"name": "section_title", "sequence": "string"}]}, {"name": "wit_idx", "dtype": "int64"}, {"name": "youtube_title_text", "dtype": "string"}, {"name": "youtube_description_text", "dtype": "string"}, {"name": "youtube_video_content", "dtype": "binary"}, {"name": "youtube_video_starting_time", "dtype": "string"}, {"name": "youtube_subtitle_text", "dtype": "string"}, {"name": "youtube_video_size", "dtype": "int64"}, {"name": "youtube_video_file_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1902638101655.625, "num_examples": 1052915}, {"name": "val", "num_bytes": 104485442867.25, "num_examples": 57958}, {"name": "test", "num_bytes": 111107332347.375, "num_examples": 61389}], "download_size": 2058391040534, "dataset_size": 2118230876870.25}, "tags": ["video", "audio", "text", "image", "tetramodal", "multimodal", "youtube", "wikipedia"]}
|
2023-12-13T09:02:28+00:00
|
[] |
[] |
TAGS
#task_categories-zero-shot-classification #size_categories-1M<n<10M #license-cc-by-4.0 #video #audio #text #image #tetramodal #multimodal #youtube #wikipedia #region-us
|
# Dataset Card for "TALI"
## Table of Contents
1. Dataset Description
1. Abstract
2. Brief Description
2. Dataset Information
1. Modalities
2. Dataset Variants
3. Dataset Statistics
4. Data Fields
5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search YouTube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT images and the YouTube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio, and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
## Usage:
To get started with TALI, you can load the dataset via Hugging Face's 'datasets' library through our helper functions. The reason we don't use 'datasets' directly is that we found huggingface_hub downloads to be much faster and more reliable. For a full set of possible configurations, look at URL. Here's a basic usage example:
First install the tali package:
### Installation
For the default install use:
For the dev install use:
Then use the dataset as follows:
### Examples
Import relevant helper functions
#### TALI with default transforms (CLIP and Whisper) and no streaming
#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
#### TALI with default transforms and streaming
#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or the page_title as a source query to search YouTube for videos that were licensed under Creative Commons and not age-restricted. The top 100 result titles were returned and compared with the source query using the text embeddings of the largest CLIP model available. The top-1 title's video under this CLIP ranking was chosen and downloaded. Each video was broken into 30-second segments, and the top-10 segments for each video were chosen based on the distance between the CLIP image embedding of the first frame of each segment and the CLIP text embedding of the video's title. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second clip is chosen out of the 30-second segment. The result is 200 video frames (spread throughout the 10-second clip) and 160,000 audio frames (10 seconds at 16 kHz).
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks, including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
## Dataset Curators: Antreas Antoniou
Citation Information: TBA
Contributions: Thanks to all contributors including data curators, annotators, and software developers.
More Information needed
|
[
"# Dataset Card for \"TALI\"",
"## Table of Contents\n1. Dataset Description\n 1. Abstract\n 2. Brief Description\n2. Dataset Information\n 1. Modalities\n 2. Dataset Variants\n 3. Dataset Statistics\n 4. Data Fields\n 5. Data Splits\n3. Dataset Creation\n4. Dataset Use\n5. Additional Information",
"## Dataset Description",
"### Abstract\nTALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.",
"### Brief Description\nTALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.",
"## Dataset Information",
"### Modalities\nThe TALI dataset consists of the following modalities:\n\n1. Image:\n 1. Wikipedia caption image\n 2. Randomly sampled image from youtube video\n2. Text\n 1. Wikipedia Caption Text\n 2. Wikipedia Title Text\n 3. Wikipedia Main Body Text\n 4. YouTube Subtitle Text\n 5. YouTube Description Text\n 6. YouTube Title Text\n3. Audio\n 1. YouTube Content Audio\n4. Video\n 1. YouTube Content Video",
"## Usage:\nTo get started with TALI, you can load the dataset via Hugging Face's 'datasets' library through our helper functions. The reason we don't use 'datasets' directly is because we found huggingface_hub downloads much faster and reliable. For a full set of possible configurations look at URL. Here's a basic usage example:\n\nFirst install the tali package:",
"### Installation\n\nFor the default install use:\n\n\n\nFor the dev install use:\n\n\n\nThen use the dataset using:",
"### Examples\nImport relevant helper functions",
"#### TALI with default transforms (CLIP and Whisper) and no streaming",
"#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats",
"#### TALI with default transforms and streaming",
"#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats",
"### Dataset Statistics\nTBA",
"## Dataset Creation\nThe TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source-query to search YouTube for video that were creative commons opted-in, and, not age restricted. The top 100 result titles were returned and compared with the source-query using the CLIP text embeddings of the largest CLIP model available. The top-1 title’s video based on the CLIP ranking was chosen and downloaded. The video was broken into 30-second segments and the top-10 segments for eachvideo were chosen based on the distance between the CLIP image embedding of the first image of each segment and the video’s title text. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second segment is chosen out of the 30-second clip. The result is 200 video frames (spread throughout the 10-second segment), and 160000 audio frames (10 seconds).",
"## Dataset Use\nTALI is designed for use in a wide range of multimodal research tasks, including but not limited to:\n\n- Multimodal understanding and reasoning\n- Self-supervised learning\n- Multimodal alignment and translation\n- Multimodal summarization\n- Multimodal question answering",
"## Dataset Curators: Antreas Antoniou\nCitation Information: TBA\nContributions: Thanks to all contributors including data curators, annotators, and software developers.\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-zero-shot-classification #size_categories-1M<n<10M #license-cc-by-4.0 #video #audio #text #image #tetramodal #multimodal #youtube #wikipedia #region-us \n",
"# Dataset Card for \"TALI\"",
"## Table of Contents\n1. Dataset Description\n 1. Abstract\n 2. Brief Description\n2. Dataset Information\n 1. Modalities\n 2. Dataset Variants\n 3. Dataset Statistics\n 4. Data Fields\n 5. Data Splits\n3. Dataset Creation\n4. Dataset Use\n5. Additional Information",
"## Dataset Description",
"### Abstract\nTALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.",
"### Brief Description\nTALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.",
"## Dataset Information",
"### Modalities\nThe TALI dataset consists of the following modalities:\n\n1. Image:\n 1. Wikipedia caption image\n 2. Randomly sampled image from youtube video\n2. Text\n 1. Wikipedia Caption Text\n 2. Wikipedia Title Text\n 3. Wikipedia Main Body Text\n 4. YouTube Subtitle Text\n 5. YouTube Description Text\n 6. YouTube Title Text\n3. Audio\n 1. YouTube Content Audio\n4. Video\n 1. YouTube Content Video",
"## Usage:\nTo get started with TALI, you can load the dataset via Hugging Face's 'datasets' library through our helper functions. The reason we don't use 'datasets' directly is because we found huggingface_hub downloads much faster and reliable. For a full set of possible configurations look at URL. Here's a basic usage example:\n\nFirst install the tali package:",
"### Installation\n\nFor the default install use:\n\n\n\nFor the dev install use:\n\n\n\nThen use the dataset using:",
"### Examples\nImport relevant helper functions",
"#### TALI with default transforms (CLIP and Whisper) and no streaming",
"#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats",
"#### TALI with default transforms and streaming",
"#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats",
"### Dataset Statistics\nTBA",
"## Dataset Creation\nThe TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source-query to search YouTube for video that were creative commons opted-in, and, not age restricted. The top 100 result titles were returned and compared with the source-query using the CLIP text embeddings of the largest CLIP model available. The top-1 title’s video based on the CLIP ranking was chosen and downloaded. The video was broken into 30-second segments and the top-10 segments for eachvideo were chosen based on the distance between the CLIP image embedding of the first image of each segment and the video’s title text. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second segment is chosen out of the 30-second clip. The result is 200 video frames (spread throughout the 10-second segment), and 160000 audio frames (10 seconds).",
"## Dataset Use\nTALI is designed for use in a wide range of multimodal research tasks, including but not limited to:\n\n- Multimodal understanding and reasoning\n- Self-supervised learning\n- Multimodal alignment and translation\n- Multimodal summarization\n- Multimodal question answering",
"## Dataset Curators: Antreas Antoniou\nCitation Information: TBA\nContributions: Thanks to all contributors including data curators, annotators, and software developers.\n\nMore Information needed"
] |
[
62,
9,
53,
4,
123,
125,
4,
75,
93,
22,
10,
19,
43,
10,
42,
8,
238,
66,
43
] |
[
"passage: TAGS\n#task_categories-zero-shot-classification #size_categories-1M<n<10M #license-cc-by-4.0 #video #audio #text #image #tetramodal #multimodal #youtube #wikipedia #region-us \n# Dataset Card for \"TALI\"## Table of Contents\n1. Dataset Description\n 1. Abstract\n 2. Brief Description\n2. Dataset Information\n 1. Modalities\n 2. Dataset Variants\n 3. Dataset Statistics\n 4. Data Fields\n 5. Data Splits\n3. Dataset Creation\n4. Dataset Use\n5. Additional Information## Dataset Description### Abstract\nTALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.### Brief Description\nTALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.## Dataset Information### Modalities\nThe TALI dataset consists of the following modalities:\n\n1. Image:\n 1. Wikipedia caption image\n 2. Randomly sampled image from youtube video\n2. Text\n 1. Wikipedia Caption Text\n 2. Wikipedia Title Text\n 3. Wikipedia Main Body Text\n 4. YouTube Subtitle Text\n 5. YouTube Description Text\n 6. YouTube Title Text\n3. Audio\n 1. YouTube Content Audio\n4. Video\n 1. YouTube Content Video"
] |
bae593ffabdb4eb1e3fbf1b4d3904e6171610a69
|
# Dataset of araragi/アララギ博士 (Pokémon)
This is the dataset of araragi/アララギ博士 (Pokémon), containing 286 images and their tags.
The core tags of this character are `breasts, earrings, green_eyes, brown_hair, large_breasts, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 286 | 193.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/araragi_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 286 | 131.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/araragi_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 574 | 245.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/araragi_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 286 | 177.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/araragi_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 574 | 311.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/araragi_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/araragi_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, jewelry, labcoat, solo, smile, cleavage, green_skirt, mature_female, pencil_skirt |
| 1 | 12 |  |  |  |  |  | 1girl, jewelry, poke_ball_(basic), smile, solo, holding_poke_ball, cleavage, looking_at_viewer, blonde_hair, blush, labcoat |
| 2 | 21 |  |  |  |  |  | 1girl, solo, jewelry, nipples, smile, pussy, female_pubic_hair, nude, looking_at_viewer, navel, blush, mature_female, simple_background |
| 3 | 6 |  |  |  |  |  | 1girl, hetero, nipples, sex, solo_focus, vaginal, 1boy, cowgirl_position, girl_on_top, jewelry, nude, open_mouth, blonde_hair, blush, cum_in_pussy, penis, uncensored |
| 4 | 10 |  |  |  |  |  | 1girl, solo_focus, 1boy, blush, hetero, jewelry, nipples, penis, shirt_lift, censored, huge_breasts, fellatio |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | jewelry | labcoat | solo | smile | cleavage | green_skirt | mature_female | pencil_skirt | poke_ball_(basic) | holding_poke_ball | looking_at_viewer | blonde_hair | blush | nipples | pussy | female_pubic_hair | nude | navel | simple_background | hetero | sex | solo_focus | vaginal | 1boy | cowgirl_position | girl_on_top | open_mouth | cum_in_pussy | penis | uncensored | shirt_lift | censored | huge_breasts | fellatio |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:----------|:-------|:--------|:-----------|:--------------|:----------------|:---------------|:--------------------|:--------------------|:--------------------|:--------------|:--------|:----------|:--------|:--------------------|:-------|:--------|:--------------------|:---------|:------|:-------------|:----------|:-------|:-------------------|:--------------|:-------------|:---------------|:--------|:-------------|:-------------|:-----------|:---------------|:-----------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 21 |  |  |  |  |  | X | X | | X | X | | | X | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | | | | | | | | | | X | X | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 4 | 10 |  |  |  |  |  | X | X | | | | | | | | | | | | X | X | | | | | | X | | X | | X | | | | | X | | X | X | X | X |
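The clusters above were computed offline, but you can approximate them locally by counting tag frequencies. A rough sketch; it assumes iterating `item.meta['tags']` yields tag names (if tags are stored as a score mapping, iterating still yields the names):

```python
from collections import Counter

from waifuc.source import LocalSource

counter = Counter()
for item in LocalSource('dataset_dir'):
    # force key iteration so a tag->score mapping is counted by name, not score
    counter.update(list(item.meta['tags']))

# the most common tags roughly reproduce the cluster headline tags
for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```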
# Dataset of haruka/ハルカ (Pokémon)
This is the dataset of haruka/ハルカ (Pokémon), containing 500 images and their tags.
The core tags of this character are `brown_hair, blue_eyes, breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 488.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/haruka_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 302.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/haruka_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1186 | 640.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/haruka_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 444.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/haruka_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1186 | 873.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/haruka_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
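Each row above is a separate archive on the Hub. If you want to see what an archive contains before unpacking it, you can list its contents directly; a small sketch using the 1200px package:

```python
import zipfile

from huggingface_hub import hf_hub_download

# fetch the 1200px IMG+TXT package from the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/haruka_pokemon',
    repo_type='dataset',
    filename='dataset-1200.zip',
)

# inspect the archive without extracting it
with zipfile.ZipFile(zip_file, 'r') as zf:
    names = zf.namelist()
    print(len(names), 'files in archive')
    print('\n'.join(names[:10]))
```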
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/haruka_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
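To mine a single outfit from the clusters below, a plain tag filter over the same `LocalSource` is usually enough. A sketch, assuming `item.meta['tags']` is an iterable of tag names (or a mapping keyed by them):

```python
from waifuc.source import LocalSource

# keep only images tagged with the red-bandana outfit seen in several clusters
wanted = {'red_bandana', '1girl'}
for item in LocalSource('dataset_dir'):
    tags = set(item.meta['tags'])
    if wanted <= tags:
        print(item.meta['filename'])
```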
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, red_hairband, red_shirt, sleeveless_shirt, solo, white_shorts, bow_hairband, collarbone, holding_poke_ball, long_hair, poke_ball_(basic), short_shorts, simple_background, standing, white_background, bracelet, cowboy_shot, hair_ribbon, open_mouth, red_ribbon, :d, fanny_pack, floating_hair, black_shorts, blush, striped_ribbon, bike_shorts_under_shorts, shiny_hair |
| 1 | 5 |  |  |  |  |  | 1girl, bike_shorts, black_shorts, bow_hairband, collarbone, hair_ribbon, open_mouth, red_hairband, red_ribbon, red_shirt, short_shorts, sleeveless_shirt, white_shorts, blush, long_hair, looking_at_viewer, simple_background, sitting, white_background, shiny_hair, solo, striped_ribbon, :d, hair_between_eyes, holding, medium_breasts, pokemon_(creature) |
| 2 | 9 |  |  |  |  |  | 1girl, bow_hairband, hair_ribbon, long_hair, red_hairband, red_shirt, sleeveless_shirt, solo, upper_body, blush, collarbone, red_ribbon, shiny_hair, simple_background, striped_ribbon, white_background, open_mouth, looking_at_viewer, smile |
| 3 | 18 |  |  |  |  |  | 1girl, bandana, gloves, smile, holding_poke_ball, poke_ball_(basic), solo, bike_shorts, open_mouth, looking_at_viewer, simple_background, white_background |
| 4 | 11 |  |  |  |  |  | 1girl, red_bandana, short_sleeves, white_skirt, fanny_pack, open_mouth, :d, eyelashes, solo, blush, collared_shirt, simple_background, tongue, white_background, holding_poke_ball, looking_at_viewer, white_gloves, poke_ball_(basic), yellow_bag, bike_shorts_under_skirt |
| 5 | 7 |  |  |  |  |  | 1girl, bike_shorts, cowboy_shot, long_hair, looking_at_viewer, miniskirt, red_jacket, red_shirt, short_sleeves, white_skirt, holding_poke_ball, poke_ball_(basic), red_bandana, solo, standing, :d, open_mouth, shiny_hair, white_background, short_shorts, black_shorts, blush, collarbone, simple_background, white_gloves |
| 6 | 5 |  |  |  |  |  | 1girl, bandana, bike_shorts, black_socks, collared_dress, shoes, sleeveless_dress, standing, :d, eyelashes, fanny_pack, full_body, open_mouth, orange_dress, orange_footwear, tongue, blush, medium_hair, pokemon_(creature), solo, white_gloves, grey_eyes, holding_poke_ball, knees, poke_ball_(basic), simple_background |
| 7 | 14 |  |  |  |  |  | 1girl, navel, solo, large_breasts, smile, cleavage, red_bikini, outdoors, alternate_breast_size, beach, day, looking_at_viewer, open_mouth, red_bandana, side-tie_bikini_bottom, sky, water |
| 8 | 5 |  |  |  |  |  | 1boy, 1girl, blush, collarbone, eyelashes, hetero, medium_hair, navel, nipples, outdoors, penis, red_bandana, sex, spread_legs, vaginal, completely_nude, day, uncensored, cloud, hair_between_eyes, on_back, open_mouth, rock, shiny_skin, sky, tongue, water, :d, arms_behind_back, bush, closed_mouth, cum_in_pussy, cum_on_body, grass, medium_breasts, pov, raised_eyebrows, tree, veins |
| 9 | 17 |  |  |  |  |  | 1girl, looking_back, 1boy, hetero, solo_focus, penis, smile, buttjob, looking_at_viewer, ejaculation, bike_shorts, blush, huge_ass, ass_focus, large_breasts, uncensored, clothed_female_nude_male, open_mouth, pov, red_bandana, red_shirt |
| 10 | 5 |  |  |  |  |  | 1girl, earrings, hair_bow, smile, bracelet, looking_at_viewer, navel, pink_bow, solo, blush, buttons, eyelashes, full_body, grey_eyes, hands_up, midriff, pink_footwear, shorts_under_skirt, clenched_hands, closed_mouth, hair_ribbon, heart, high_heels, knees, medium_hair, one_side_up, open_mouth, shirt, shoes, simple_background, standing, white_background |
| 11 | 5 |  |  |  |  |  | 1girl, blush, eyelashes, fake_animal_ears, official_alternate_costume, rabbit_ears, short_sleeves, wrist_cuffs, yellow_hairband, grey_eyes, pantyhose, pink_choker, solo, bunny_pose, hands_up, open_mouth, pink_dress, tongue, :d, closed_mouth, looking_at_viewer, pink_footwear, shoes, sparkle |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | red_hairband | red_shirt | sleeveless_shirt | solo | white_shorts | bow_hairband | collarbone | holding_poke_ball | long_hair | poke_ball_(basic) | short_shorts | simple_background | standing | white_background | bracelet | cowboy_shot | hair_ribbon | open_mouth | red_ribbon | :d | fanny_pack | floating_hair | black_shorts | blush | striped_ribbon | bike_shorts_under_shorts | shiny_hair | bike_shorts | sitting | hair_between_eyes | holding | medium_breasts | pokemon_(creature) | upper_body | smile | bandana | gloves | red_bandana | short_sleeves | white_skirt | eyelashes | collared_shirt | tongue | white_gloves | yellow_bag | bike_shorts_under_skirt | miniskirt | red_jacket | black_socks | collared_dress | shoes | sleeveless_dress | full_body | orange_dress | orange_footwear | medium_hair | grey_eyes | knees | navel | large_breasts | cleavage | red_bikini | outdoors | alternate_breast_size | beach | day | side-tie_bikini_bottom | sky | water | 1boy | hetero | nipples | penis | sex | spread_legs | vaginal | completely_nude | uncensored | cloud | on_back | rock | shiny_skin | arms_behind_back | bush | closed_mouth | cum_in_pussy | cum_on_body | grass | pov | raised_eyebrows | tree | veins | looking_back | solo_focus | buttjob | ejaculation | huge_ass | ass_focus | clothed_female_nude_male | earrings | hair_bow | pink_bow | buttons | hands_up | midriff | pink_footwear | shorts_under_skirt | clenched_hands | heart | high_heels | one_side_up | shirt | fake_animal_ears | official_alternate_costume | rabbit_ears | wrist_cuffs | yellow_hairband | pantyhose | pink_choker | bunny_pose | pink_dress | sparkle |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------------|:---------------|:------------|:-------------------|:-------|:---------------|:---------------|:-------------|:--------------------|:------------|:--------------------|:---------------|:--------------------|:-----------|:-------------------|:-----------|:--------------|:--------------|:-------------|:-------------|:-----|:-------------|:----------------|:---------------|:--------|:-----------------|:---------------------------|:-------------|:--------------|:----------|:--------------------|:----------|:-----------------|:---------------------|:-------------|:--------|:----------|:---------|:--------------|:----------------|:--------------|:------------|:-----------------|:---------|:---------------|:-------------|:--------------------------|:------------|:-------------|:--------------|:-----------------|:--------|:-------------------|:------------|:---------------|:------------------|:--------------|:------------|:--------|:--------|:----------------|:-----------|:-------------|:-----------|:------------------------|:--------|:------|:-------------------------|:------|:--------|:-------|:---------|:----------|:--------|:------|:--------------|:----------|:------------------|:-------------|:--------|:----------|:-------|:-------------|:-------------------|:-------|:---------------|:---------------|:--------------|:--------|:------|:------------------|:-------|:--------|:---------------|:-------------|:----------|:--------------|:-----------|:------------|:---------------------------|:-----------|:-----------|:-----------|:----------|:-----------|:----------|:----------------|:---------------------|:-----------------|:--------|:-------------|:--------------|:--------|:-------------------|:-----------------------------|:--------------|:--------------|:------------------|:------------|:--------------|:-------------|:-------------|:----------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | X | X | | X | | | X | X | X | X | | | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | X | X | X | | X | X | | X | | | X | | X | | | X | X | X | | | | | X | X | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | | | | X | | | | X | | X | | X | | X | | | | X | | | | | | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | X | | | | X | | | | X | | X | | X | | X | | | | X | | X | X | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | | X | | X | | | X | X | X | X | X | X | X | X | | X | | X | | X | | | X | X | | | X | X | | | | | | | | | | X | X | X | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | | | X | | X | | X | X | | | | | X | | X | X | | | X | | | | X | | | | | X | | | X | | | | | X | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 14 |  |  |  |  |  | X | X | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | | | | | | X | | | | | | | | | | | X | | X | | | | X | | | | | | X | | X | | | | | | X | | | X | | X | | | | | | | | | | | | | X | | | X | | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 17 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | | | | X | | | | | | X | | | | X | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | X | | | | | X | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | | | | X | | | | | | | | X | X | X | X | | X | X | | | | | | X | | | | | | | | | | | X | | | | | | X | | | | | | | | | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | X | | | | X | | | | | | | | | | | | | | X | | X | | | | X | | | | | | | | | | | | | | | X | | X | | X | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | X | X | X | X | X | X | X | X | X | X |
# Dataset of yellow (Pokémon)
This is the dataset of yellow (Pokémon), containing 283 images and their tags.
The core tags of this character are `blonde_hair, long_hair, ponytail, bangs, hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 283 | 183.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 283 | 137.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 450 | 236.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 283 | 173.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 450 | 297.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yellow_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
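After loading, a filtered subset can be written back to disk with plain PIL. This sketch assumes `item.image` is a `PIL.Image.Image` (consistent with waifuc's item interface) and keeps only solo images:

```python
import os

from waifuc.source import LocalSource

out_dir = 'yellow_solo'
os.makedirs(out_dir, exist_ok=True)

for item in LocalSource('dataset_dir'):
    if 'solo' in item.meta['tags']:
        # item.image is assumed to be a PIL.Image.Image; PIL infers the
        # output format from the filename extension
        item.image.save(os.path.join(out_dir, item.meta['filename']))
```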
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, :d, long_sleeves, open_mouth, pants, shirt, tongue, tunic, brown_belt, looking_at_viewer, poke_ball, pokemon_(creature), short_hair, blush, boots, character_name, green_eyes, holding_fishing_rod |
| 1 | 5 |  |  |  |  |  | 1girl, blush, long_sleeves, looking_at_viewer, shirt, simple_background, solo, :d, open_mouth, upper_body, green_eyes, white_background, belt, from_side, grey_background, yellow_eyes |
| 2 | 6 |  |  |  |  |  | 1girl, solo, blush, brown_eyes, looking_at_viewer, simple_background, smile, upper_body, white_background, closed_mouth |
| 3 | 6 |  |  |  |  |  | 1girl, flower, pokemon_(creature), smile, one_eye_closed, open_mouth, yellow_eyes |
| 4 | 8 |  |  |  |  |  | 1girl, poke_ball_(basic), smile, solo, androgynous, short_hair, straw_hat, belt, boots, holding_fishing_rod, reverse_trap, simple_background, white_background, yellow_eyes, holding_poke_ball |
| 5 | 6 |  |  |  |  |  | 1boy, 1girl, hetero, solo_focus, nipples, nude, sex, yellow_eyes, open_mouth, penis, blush, cum_in_pussy, medium_breasts, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | long_sleeves | open_mouth | pants | shirt | tongue | tunic | brown_belt | looking_at_viewer | poke_ball | pokemon_(creature) | short_hair | blush | boots | character_name | green_eyes | holding_fishing_rod | simple_background | solo | upper_body | white_background | belt | from_side | grey_background | yellow_eyes | brown_eyes | smile | closed_mouth | flower | one_eye_closed | poke_ball_(basic) | androgynous | straw_hat | reverse_trap | holding_poke_ball | 1boy | hetero | solo_focus | nipples | nude | sex | penis | cum_in_pussy | medium_breasts | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:---------------|:-------------|:--------|:--------|:---------|:--------|:-------------|:--------------------|:------------|:---------------------|:-------------|:--------|:--------|:-----------------|:-------------|:----------------------|:--------------------|:-------|:-------------|:-------------------|:-------|:------------|:------------------|:--------------|:-------------|:--------|:---------------|:---------|:-----------------|:--------------------|:--------------|:------------|:---------------|:--------------------|:-------|:---------|:-------------|:----------|:-------|:------|:--------|:---------------|:-----------------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | X | | | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | | | | | | X | | | | X | | | | | X | X | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | | | | | | | X | | | | | | | | | | | | | | X | | X | | X | X | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | | | | | | | | | | | X | | X | | | X | X | X | | X | X | | | X | | X | | | | X | X | X | X | X | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | X | | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
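Because the repository is tagged `not-for-all-audiences`, some clusters above (e.g. cluster 5) are explicit. A blocklist filter can drop such images before training; a sketch:

```python
from waifuc.source import LocalSource

# tags drawn from the explicit cluster above; extend as needed
blocklist = {'nude', 'sex', 'nipples', 'penis'}
safe_items = [
    item
    for item in LocalSource('dataset_dir')
    if blocklist.isdisjoint(item.meta['tags'])
]
print(f'{len(safe_items)} images pass the blocklist filter')
```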
# Dataset of sina (Pokémon)
This is the dataset of sina (Pokémon), containing 67 images and their tags.
The core tags of this character are `dark_skin, dark-skinned_female, purple_hair, breasts, blue_eyes, short_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 67 | 53.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sina_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 67 | 35.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sina_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 149 | 70.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sina_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 67 | 48.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sina_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 149 | 90.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sina_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sina_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
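The raw package stores tags in metadata rather than sidecar files. To recreate the `IMG+TXT` layout yourself (for example, for caption-based training), write one caption file per image; a sketch, assuming tags iterate as plain names:

```python
import os

from waifuc.source import LocalSource

src_dir = 'dataset_dir'
for item in LocalSource(src_dir):
    # one same-stem .txt caption per image, tags joined with commas
    stem = os.path.splitext(item.meta['filename'])[0]
    caption = ', '.join(item.meta['tags'])
    with open(os.path.join(src_dir, stem + '.txt'), 'w', encoding='utf-8') as f:
        f.write(caption)
```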
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, smile, sunglasses, solo, baseball_cap, looking_at_viewer, bangs, cutoffs, denim_shorts, short_shorts, collarbone, simple_background, sleeveless_shirt, blue_shorts, bracelet, closed_mouth, hair_between_eyes, tinted_eyewear, white_background, white_shirt, black_headwear, tank_top, bare_shoulders |
| 1 | 6 |  |  |  |  |  | 1boy, 1girl, blush, hetero, solo_focus, uncensored, bra, erection, cleavage, large_penis, open_mouth, shirt, upper_teeth_only, veiny_penis |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | sunglasses | solo | baseball_cap | looking_at_viewer | bangs | cutoffs | denim_shorts | short_shorts | collarbone | simple_background | sleeveless_shirt | blue_shorts | bracelet | closed_mouth | hair_between_eyes | tinted_eyewear | white_background | white_shirt | black_headwear | tank_top | bare_shoulders | 1boy | blush | hetero | solo_focus | uncensored | bra | erection | cleavage | large_penis | open_mouth | shirt | upper_teeth_only | veiny_penis |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:---------------|:--------------------|:--------|:----------|:---------------|:---------------|:-------------|:--------------------|:-------------------|:--------------|:-----------|:---------------|:--------------------|:-----------------|:-------------------|:--------------|:-----------------|:-----------|:-----------------|:-------|:--------|:---------|:-------------|:-------------|:------|:-----------|:-----------|:--------------|:-------------|:--------|:-------------------|:--------------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
# Dataset of battle_girl (Pokémon)
This is the dataset of battle_girl (Pokémon), containing 36 images and their tags.
The core tags of this character are `blue_hair, ponytail, breasts, long_hair, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 36 | 24.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battle_girl_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 36 | 17.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battle_girl_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 74 | 34.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battle_girl_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 36 | 23.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battle_girl_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 74 | 41.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battle_girl_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/battle_girl_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
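A quick sanity check that the extracted archive matches this card (36 raw images) can reuse the same source; a minimal sketch:

```python
from waifuc.source import LocalSource

items = list(LocalSource('dataset_dir'))
print(f'{len(items)} images loaded')
# the card lists 36 images for the raw package
assert len(items) == 36, 'archive content does not match the card'
```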
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, midriff, solo, fingerless_gloves, navel, sports_bra, sweat, bike_shorts, crop_top, smile, toned |
| 1 | 6 |  |  |  |  |  | 1girl, hetero, 1boy, nipples, open_mouth, penis, solo_focus, blush, medium_breasts, navel, nude, sex, pussy, spread_legs, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | midriff | solo | fingerless_gloves | navel | sports_bra | sweat | bike_shorts | crop_top | smile | toned | hetero | 1boy | nipples | open_mouth | penis | solo_focus | blush | medium_breasts | nude | sex | pussy | spread_legs | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:-------|:--------------------|:--------|:-------------|:--------|:--------------|:-----------|:--------|:--------|:---------|:-------|:----------|:-------------|:--------|:-------------|:--------|:-----------------|:-------|:------|:--------|:--------------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
# Dataset of hex_maniac (Pokémon)
This is the dataset of hex_maniac (Pokémon), containing 500 images and their tags.
The core tags of this character are `long_hair, breasts, ahoge, hairband, messy_hair, purple_eyes, purple_hairband, purple_hair, hair_between_eyes, bangs, large_breasts, huge_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 632.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hex_maniac_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 324.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hex_maniac_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1260 | 713.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hex_maniac_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 541.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hex_maniac_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1260 | 1.04 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hex_maniac_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hex_maniac_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
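The cluster table below surfaces a distinct bunny-suit outfit; mining it locally amounts to intersecting its headline tags. A sketch, assuming tags iterate as names:

```python
from waifuc.source import LocalSource

# headline tags of the playboy-bunny cluster below
outfit = {'playboy_bunny', 'rabbit_ears', 'fake_animal_ears'}
matches = [
    item.meta['filename']
    for item in LocalSource('dataset_dir')
    if outfit <= set(item.meta['tags'])
]
print(f'{len(matches)} candidate images for the bunny-suit outfit')
```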
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, @_@, alternate_breast_size, solo, very_long_hair, smile, navel, looking_at_viewer, cleavage, blush, micro_bikini, open_mouth, areola_slip, black_bikini, simple_background |
| 1 | 6 |  |  |  |  |  | 1girl, @_@, alternate_breast_size, blush, dress, long_sleeves, looking_at_viewer, open_mouth, smile, solo, sweater, heart, black_hair, simple_background, upper_body, white_background |
| 2 | 15 |  |  |  |  |  | 1girl, @_@, alternate_breast_size, cow_print, bikini, cow_horns, cow_ears, smile, solo, blush, thighhighs, fake_animal_ears, navel, neck_bell, elbow_gloves, looking_at_viewer, open_mouth, sweat, cow_girl, cowbell, collarbone, nipples, shiny, simple_background, very_long_hair |
| 3 | 5 |  |  |  |  |  | 1boy, 1girl, @_@, alternate_breast_size, blush, hetero, open_mouth, sweat, nude, penis, pokemon_(creature), solo_focus, black_hair, collarbone, pokephilia, uncensored, drooling, ejaculation, english_text, heart-shaped_pupils, interspecies, inverted_nipples, paizuri, smile |
| 4 | 19 |  |  |  |  |  | 1girl, alternate_breast_size, hetero, 1boy, nipples, solo_focus, @_@, penis, sex, blush, open_mouth, vaginal, smile, uncensored, navel, sweat, completely_nude, cum_in_pussy, girl_on_top, spread_legs, tongue, black_hair, cowgirl_position, heart |
| 5 | 6 |  |  |  |  |  | 1girl, anus, ass, looking_at_viewer, looking_back, pussy, uncensored, feet, soles, solo, toes, @_@, barefoot, blush, from_behind, nail_polish, smile, alternate_breast_size, black_hair, shiny_skin |
| 6 | 7 |  |  |  |  |  | 1girl, @_@, alternate_breast_size, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, cleavage, black_leotard, fishnet_pantyhose, looking_at_viewer, open_mouth, smile, thick_thighs, blush, bowtie, detached_collar, purple_leotard, simple_background, very_long_hair, wrist_cuffs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | @_@ | alternate_breast_size | solo | very_long_hair | smile | navel | looking_at_viewer | cleavage | blush | micro_bikini | open_mouth | areola_slip | black_bikini | simple_background | dress | long_sleeves | sweater | heart | black_hair | upper_body | white_background | cow_print | bikini | cow_horns | cow_ears | thighhighs | fake_animal_ears | neck_bell | elbow_gloves | sweat | cow_girl | cowbell | collarbone | nipples | shiny | 1boy | hetero | nude | penis | pokemon_(creature) | solo_focus | pokephilia | uncensored | drooling | ejaculation | english_text | heart-shaped_pupils | interspecies | inverted_nipples | paizuri | sex | vaginal | completely_nude | cum_in_pussy | girl_on_top | spread_legs | tongue | cowgirl_position | anus | ass | looking_back | pussy | feet | soles | toes | barefoot | from_behind | nail_polish | shiny_skin | playboy_bunny | rabbit_ears | strapless_leotard | black_leotard | fishnet_pantyhose | thick_thighs | bowtie | detached_collar | purple_leotard | wrist_cuffs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------|:------------------------|:-------|:-----------------|:--------|:--------|:--------------------|:-----------|:--------|:---------------|:-------------|:--------------|:---------------|:--------------------|:--------|:---------------|:----------|:--------|:-------------|:-------------|:-------------------|:------------|:---------|:------------|:-----------|:-------------|:-------------------|:------------|:---------------|:--------|:-----------|:----------|:-------------|:----------|:--------|:-------|:---------|:-------|:--------|:---------------------|:-------------|:-------------|:-------------|:-----------|:--------------|:---------------|:----------------------|:---------------|:-------------------|:----------|:------|:----------|:------------------|:---------------|:--------------|:--------------|:---------|:-------------------|:-------|:------|:---------------|:--------|:-------|:--------|:-------|:-----------|:--------------|:--------------|:-------------|:----------------|:--------------|:--------------------|:----------------|:--------------------|:---------------|:---------|:------------------|:-----------------|:--------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | | X | | X | | X | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | | | X | | | | X | | X | | | | | | | | X | | | | | | | | | | | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 19 |  |  |  |  |  | X | X | X | | | X | X | | | X | | X | | | | | | | X | X | | | | | | | | | | | X | | | | X | | X | X | | X | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | X | | X | | X | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | | X | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
# Dataset of flandre_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou)
This is the dataset of flandre_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, wings, red_eyes, hat, mob_cap, bangs, one_side_up, bow, ribbon, white_headwear, hair_between_eyes, red_bow, red_ribbon, hat_ribbon, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 818.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 442.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1283 | 977.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 713.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1283 | 1.39 GiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_scarlet_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/flandre_scarlet_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, crystal, puffy_short_sleeves, red_skirt, red_vest, solo, looking_at_viewer, open_mouth, simple_background, white_shirt, yellow_ascot, blush, :d, fang, wrist_cuffs, cowboy_shot, side_ponytail, white_background, frilled_shirt_collar, hat_bow, skirt_set |
| 1 | 9 |  |  |  |  |  | 1girl, crystal, looking_at_viewer, open_mouth, puffy_short_sleeves, red_skirt, red_vest, solo, white_shirt, white_socks, yellow_ascot, :d, blush, full_body, red_footwear, frills, fang, simple_background, skirt_set, white_background, petticoat, wrist_cuffs, bobby_socks, mary_janes, white_bloomers |
| 2 | 5 |  |  |  |  |  | 1girl, blush, cowboy_shot, solo, standing, alternate_costume, bare_shoulders, closed_mouth, collarbone, crystal, looking_at_viewer, sleeveless_dress, smile, white_dress, bare_arms, no_headwear, outdoors, spaghetti_strap, sundress, day, depth_of_field, flower, frilled_dress, hair_ribbon, hat_bow, medium_hair, own_hands_together, skirt_hold, sky, small_breasts, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crystal | puffy_short_sleeves | red_skirt | red_vest | solo | looking_at_viewer | open_mouth | simple_background | white_shirt | yellow_ascot | blush | :d | fang | wrist_cuffs | cowboy_shot | side_ponytail | white_background | frilled_shirt_collar | hat_bow | skirt_set | white_socks | full_body | red_footwear | frills | petticoat | bobby_socks | mary_janes | white_bloomers | standing | alternate_costume | bare_shoulders | closed_mouth | collarbone | sleeveless_dress | smile | white_dress | bare_arms | no_headwear | outdoors | spaghetti_strap | sundress | day | depth_of_field | flower | frilled_dress | hair_ribbon | medium_hair | own_hands_together | skirt_hold | sky | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:----------------------|:------------|:-----------|:-------|:--------------------|:-------------|:--------------------|:--------------|:---------------|:--------|:-----|:-------|:--------------|:--------------|:----------------|:-------------------|:-----------------------|:----------|:------------|:--------------|:------------|:---------------|:---------|:------------|:--------------|:-------------|:-----------------|:-----------|:--------------------|:-----------------|:---------------|:-------------|:-------------------|:--------|:--------------|:------------|:--------------|:-----------|:------------------|:-----------|:------|:-----------------|:---------|:----------------|:--------------|:--------------|:---------------------|:-------------|:------|:----------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | X | X | | | | | X | | | | X | | X | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/flandre_scarlet_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T00:04:05+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:25:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of flandre\_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou)
==========================================================
This is the dataset of flandre\_scarlet/フランドール・スカーレット/플랑드르스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, wings, red\_eyes, hat, mob\_cap, bangs, one\_side\_up, bow, ribbon, white\_headwear, hair\_between\_eyes, red\_bow, red\_ribbon, hat\_ribbon, short\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7e43dc4d28fa519d6fb1d129a6b51b4d4dc47ec1
|
<img alt="Monado SLAM Datasets cover image"
src="/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/extras/cover.png"
style="width: 720px;">
<a href="https://youtu.be/kIddwk1FrW8" target="_blank">
<video width="720" height="240" autoplay muted loop playsinline
preload="auto"><source
src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/overview.webm"
type="video/webm"/>Video tag not supported.</video>
</a>
# Monado SLAM Datasets
The [Monado SLAM datasets
(MSD)](https://huggingface.co/datasets/collabora/monado-slam-datasets), are
egocentric visual-inertial SLAM datasets recorded to improve the
[Basalt](https://gitlab.com/VladyslavUsenko/basalt)-based inside-out tracking
component of the [Monado](https://monado.dev) project. These have a permissive
license [CC-BY 4.0](http://creativecommons.org/licenses/by/4.0/), meaning you
can use them for any purpose you want, including commercial, and only a mention
of the original project is required. The creation of these datasets was
supported by [Collabora](https://collabora.com).
Monado is an open-source OpenXR runtime that you can use to make devices OpenXR
compatible. It also provides drivers for different existing hardware thanks to
different contributors in the community creating drivers for it. Monado provides
different XR-related modules that these drivers can use. To be more specific,
inside-out head tracking is one of those modules and, while you can use
different tracking systems, the main system is a [fork of
Basalt](https://gitlab.freedesktop.org/mateosss/basalt). Creating a good
open-source tracking solution requires a solid measurement pipeline to
understand how changes in the system affect tracking quality. For this reason,
the creation of these datasets was essential.
These datasets are very specific to the XR use case as they contain VI-SLAM
footage recorded from devices such as VR headsets, but other devices like phones
or AR glasses might be added in the future. These were made since current SLAM
datasets like EuRoC or TUM-VI were not specific enough for XR, or they didn't
have sufficiently permissive usage licenses.
For questions or comments, you can use the Hugging Face
[Community](https://huggingface.co/datasets/collabora/monado-slam-datasets/discussions),
join Monado's discord [server](https://discord.gg/8RkJgRJ) and ask in the
`#slam` channel, or send an email to <[email protected]>.
## List of sequences
- [MI_valve_index](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index)
- [MIC_calibration](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIC_calibration)
- [MIC01_camcalib1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC01_camcalib1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC01_camcalib1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC02_camcalib2](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC02_camcalib2.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC02_camcalib2.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC03_camcalib3](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC03_camcalib3.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC03_camcalib3.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC04_imucalib1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC04_imucalib1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC04_imucalib1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC05_imucalib2](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC05_imucalib2.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC05_imucalib2.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC06_imucalib3](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC06_imucalib3.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC06_imucalib3.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC07_camcalib4](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC07_camcalib4.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC07_camcalib4.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC08_camcalib5](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC08_camcalib5.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC08_camcalib5.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC09_imucalib4](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC09_imucalib4.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC09_imucalib4.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC10_imucalib5](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC10_imucalib5.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC10_imucalib5.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC11_camcalib6](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC11_camcalib6.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC11_camcalib6.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC12_imucalib6](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC12_imucalib6.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC12_imucalib6.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC13_camcalib7](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC13_camcalib7.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC13_camcalib7.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC14_camcalib8](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC14_camcalib8.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC14_camcalib8.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC15_imucalib7](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC15_imucalib7.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC15_imucalib7.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIC16_imucalib8](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC16_imucalib8.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIC16_imucalib8.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO_others](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIO_others)
- [MIO01_hand_puncher_1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO01_hand_puncher_1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO01_hand_puncher_1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO02_hand_puncher_2](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO02_hand_puncher_2.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO02_hand_puncher_2.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO03_hand_shooter_easy](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO03_hand_shooter_easy.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO03_hand_shooter_easy.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO04_hand_shooter_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO04_hand_shooter_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO04_hand_shooter_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO05_inspect_easy](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO05_inspect_easy.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO05_inspect_easy.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO06_inspect_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO06_inspect_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO06_inspect_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO07_mapping_easy](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO07_mapping_easy.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO07_mapping_easy.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO08_mapping_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO08_mapping_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO08_mapping_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO09_short_1_updown](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO09_short_1_updown.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO09_short_1_updown.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO10_short_2_panorama](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO10_short_2_panorama.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO10_short_2_panorama.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO11_short_3_backandforth](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO11_short_3_backandforth.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO11_short_3_backandforth.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO12_moving_screens](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO12_moving_screens.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO12_moving_screens.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO13_moving_person](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO13_moving_person.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO13_moving_person.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO14_moving_props](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO14_moving_props.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO14_moving_props.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO15_moving_person_props](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO15_moving_person_props.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO15_moving_person_props.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIO16_moving_screens_person_props](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIO_others/MIO16_moving_screens_person_props.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIO16_moving_screens_person_props.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIP_playing](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing)
- [MIPB_beat_saber](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber)
- [MIPB01_beatsaber_100bills_360_normal](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB01_beatsaber_100bills_360_normal.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB01_beatsaber_100bills_360_normal.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB02_beatsaber_crabrave_360_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB02_beatsaber_crabrave_360_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB02_beatsaber_crabrave_360_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB03_beatsaber_countryrounds_360_expert](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB03_beatsaber_countryrounds_360_expert.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB03_beatsaber_countryrounds_360_expert.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB04_beatsaber_fitbeat_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB04_beatsaber_fitbeat_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB04_beatsaber_fitbeat_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB05_beatsaber_fitbeat_360_expert](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB05_beatsaber_fitbeat_360_expert.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB05_beatsaber_fitbeat_360_expert.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB06_beatsaber_fitbeat_expertplus_1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB06_beatsaber_fitbeat_expertplus_1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB06_beatsaber_fitbeat_expertplus_1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB07_beatsaber_fitbeat_expertplus_2](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB07_beatsaber_fitbeat_expertplus_2.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB07_beatsaber_fitbeat_expertplus_2.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPB08_beatsaber_long_session_1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber/MIPB08_beatsaber_long_session_1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPB08_beatsaber_long_session_1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP_pistol_whip](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip)
- [MIPP01_pistolwhip_blackmagic_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP01_pistolwhip_blackmagic_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP01_pistolwhip_blackmagic_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP02_pistolwhip_lilith_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP02_pistolwhip_lilith_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP02_pistolwhip_lilith_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP03_pistolwhip_requiem_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP03_pistolwhip_requiem_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP03_pistolwhip_requiem_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP04_pistolwhip_revelations_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP04_pistolwhip_revelations_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP04_pistolwhip_revelations_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP05_pistolwhip_thefall_hard_2pistols](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP05_pistolwhip_thefall_hard_2pistols.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP05_pistolwhip_thefall_hard_2pistols.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPP06_pistolwhip_thegrave_hard](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip/MIPP06_pistolwhip_thegrave_hard.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPP06_pistolwhip_thegrave_hard.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPT_thrill_of_the_fight](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPT_thrill_of_the_fight)
- [MIPT01_thrillofthefight_setup](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPT_thrill_of_the_fight/MIPT01_thrillofthefight_setup.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPT01_thrillofthefight_setup.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPT02_thrillofthefight_fight_1](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPT_thrill_of_the_fight/MIPT02_thrillofthefight_fight_1.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPT02_thrillofthefight_fight_1.webm" type="video/webm"/>Video tag not supported.</video></details>
- [MIPT03_thrillofthefight_fight_2](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPT_thrill_of_the_fight/MIPT03_thrillofthefight_fight_2.zip): <details style="display: inline;cursor: pointer;user-select: none"><summary>Preview 5x</summary><video width="320" height="320" controls preload="none"><source src="https://huggingface.co/datasets/collabora/monado-slam-datasets/resolve/main/M_monado_datasets/MI_valve_index/extras/previews/MIPT03_thrillofthefight_fight_2.webm" type="video/webm"/>Video tag not supported.</video></details>
## Valve Index datasets
These datasets were recorded using a Valve Index with the `vive` driver in
Monado, and they have ground truth from three Lighthouse 2.0 base stations tracking the headset through
the proprietary OpenVR implementation provided by SteamVR. The exact commit used
in Monado at the time of recording is
[a4e7765d](https://gitlab.freedesktop.org/mateosss/monado/-/commit/a4e7765d7219b06a0c801c7bb33f56d3ea69229d).
The datasets are in the ASL dataset format, the same as the [EuRoC
datasets](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets).
Besides the main EuRoC format files, we provide some extra files with raw
timestamp data for exploring real-time timestamp alignment techniques.
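Since everything is plain CSV plus PNG files, loading a sequence needs no special tooling. Below is a minimal sketch in Python, assuming a sequence has been extracted to a local directory (the sequence name is just an example; pandas is used for brevity):
```python
import pandas as pd
from pathlib import Path

# Hypothetical local path to an extracted sequence.
seq = Path("MIO09_short_1_updown")

# IMU samples (~1000 Hz, device timestamps).
imu = pd.read_csv(seq / "imu0" / "data.csv")

# Per-frame timestamps; each row corresponds to a PNG under cam0/data/.
cam0 = pd.read_csv(seq / "cam0" / "data.csv")

# Ground truth poses, already expressed in the IMU frame and clock.
gt = pd.read_csv(seq / "gt" / "data.csv")

print(f"{len(imu)} IMU samples, {len(cam0)} frames, {len(gt)} GT poses")
```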
The datasets are post-processed to require as little special treatment from SLAM
systems as possible: camera-IMU and ground-truth-IMU timestamp alignment, IMU
alignment and bias calibration have been applied, the lighthouse-tracked pose has
been converted to an IMU pose, and so on. Most of the post-processing was done with
Basalt
[calibration](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md?ref_type=heads#camera-imu-mocap-calibration)
and
[alignment](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Realsense.md?ref_type=heads#generating-time-aligned-ground-truth)
tools, as well as the
[xrtslam-metrics](https://gitlab.freedesktop.org/mateosss/xrtslam-metrics)
scripts for Monado tracking. The post-processing procedure is documented in [this
video][post-processing-video], which goes through making the [MIPB08] dataset ready
for use starting from its raw version.
### Data
#### Camera samples
In the `vive` driver from Monado, we don't have direct access to the camera
device timestamps but only to V4L2 timestamps. These are not exactly hardware
timestamps and have some offset with respect to the device clock in which the
IMU samples are timestamped.
The camera frames can be found in the `camX/data` directory as PNG files with
names corresponding to their V4L2 timestamps. The `camX/data.csv` file contains
aligned timestamps of each frame. The `camX/data.extra.csv` also contains the
original V4L2 timestamp and the "host timestamp" which is the time at which the
host computer had the frame ready to use after USB transmission. By separating
arrival time from exposure time, algorithms can be made more robust for
real-time operation.
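As a sketch of what that separation enables, the per-frame delivery latency could be estimated from `data.extra.csv` roughly as follows (the column names here are assumptions; check the header of your copy first):
```python
import pandas as pd

# Column names are hypothetical; inspect the CSV header of your copy first.
extra = pd.read_csv("cam0/data.extra.csv",
                    names=["ts_ns", "ts_v4l2_ns", "ts_host_ns"], header=0)

# Time between V4L2 capture and the frame being usable on the host.
latency_ms = (extra["ts_host_ns"] - extra["ts_v4l2_ns"]) / 1e6
print(f"median frame delivery latency: {latency_ms.median():.2f} ms")
```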
The cameras of the Valve Index have global shutters with a resolution of 960×960
streaming at 54 fps, with auto exposure enabled. While the cameras of the
Index are RGB, you will find only grayscale images in these datasets: the
original images are provided in YUYV422 format, but only the luma component is
stored.
For each dataset, the camera timestamps are aligned with respect to IMU
timestamps by running visual-only odometry with Basalt on a 30-second subset of
the dataset. The resulting trajectory is then aligned with the
[`basalt_time_alignment`](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Realsense.md?ref_type=heads#generating-time-aligned-ground-truth)
tool that aligns the rotational velocities of the trajectory with the gyroscope
samples and returns the resulting offset in nanoseconds. That correction is then
applied to the dataset. Refer to the post-processing walkthrough
[video][post-processing-video] for more details.
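The principle behind this alignment can be illustrated in a few lines: resample the gyroscope's angular speed and the angular speed differentiated from the visual-only trajectory onto a common time grid, then pick the shift that maximizes their cross-correlation. This is only a sketch of the idea; the actual offsets were computed with `basalt_time_alignment`:
```python
import numpy as np

def estimate_time_offset(t_a, w_a, t_b, w_b, dt=1e-3, max_shift_s=0.1):
    """Estimate a constant clock offset between two angular-speed signals.

    t_a, t_b: timestamps in seconds; w_a, w_b: angular speeds in rad/s.
    Returns the time shift (seconds) of signal b that best matches a.
    """
    t0, t1 = max(t_a[0], t_b[0]), min(t_a[-1], t_b[-1])
    grid = np.arange(t0, t1, dt)  # common sampling grid
    a = np.interp(grid, t_a, w_a)
    b = np.interp(grid, t_b, w_b)
    a -= a.mean()
    b -= b.mean()
    n = int(max_shift_s / dt)
    shifts = np.arange(-n, n + 1)
    scores = [np.dot(a[max(0, s):len(a) + min(0, s)],
                     b[max(0, -s):len(b) + min(0, -s)]) for s in shifts]
    return float(shifts[int(np.argmax(scores))] * dt)
```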
#### IMU samples
The IMU timestamps are device timestamps; samples arrive at about 1000 Hz. We provide
an `imu0/data.raw.csv` file that contains the raw measurements without any axis
scale misalignment or bias correction. `imu0/data.csv` has the
scale misalignment and bias corrections applied so that the SLAM system can
ignore those corrections. `imu0/data.extra.csv` contains the arrival time of each
IMU sample at the host computer, for algorithms that want to adapt themselves to
work in real time.
#### Ground truth information
The ground truth setup consists of three Lighthouse 2.0 base stations and a
SteamVR session providing tracking data through the OpenVR API to Monado. While
not as precise as dedicated MoCap systems such as OptiTrack or Vicon, it
should still provide accuracy and precision close to the 1 mm range.
There are different attempts at studying the accuracy of SteamVR tracking that
you can check out like
[this](https://dl.acm.org/doi/pdf/10.1145/3463914.3463921),
[this](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7956487/pdf/sensors-21-01622.pdf),
or [this](http://doc-ok.org/?p=1478). When a tracking system gets closer to
millimeter accuracy these datasets will no longer be as useful for improving it.
The raw ground truth data is stored in `gt/data.raw.csv`. OpenVR does not provide
timestamps, so the recorded timestamps correspond to when the host asks
OpenVR for the latest pose with a call to
[`GetDeviceToAbsoluteTrackingPose`](https://github.com/ValveSoftware/openvr/wiki/IVRSystem::GetDeviceToAbsoluteTrackingPose).
The poses contained in this file are not of the IMU but of the headset origin as
interpreted by SteamVR, which usually is between the middle of the eyes and
facing towards the displays. The file `gt/data.csv` corrects each entry of the
previous file with timestamps aligned with the IMU clock and poses of the IMU
instead of this headset origin.
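Conceptually, that correction composes a fixed headset-origin-to-IMU extrinsic onto every recorded pose. A sketch of the composition in Python (the extrinsic values are placeholders; the real ones come from calibration, and SciPy is assumed for the quaternion math):
```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def origin_to_imu(p_w_origin, q_w_origin, p_origin_imu, q_origin_imu):
    """Compose T_world_origin with a fixed T_origin_imu extrinsic.

    Positions are xyz arrays; quaternions are xyzw (SciPy convention).
    Returns the pose of the IMU in the world frame.
    """
    r_wo = R.from_quat(q_w_origin)
    p_w_imu = np.asarray(p_w_origin) + r_wo.apply(p_origin_imu)
    q_w_imu = (r_wo * R.from_quat(q_origin_imu)).as_quat()
    return p_w_imu, q_w_imu
```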
#### Calibration
There are multiple calibration datasets in the
[`MIC_calibration`](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIC_calibration)
directory. There are camera-focused and IMU-focused calibration datasets. See
the
[README.md](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/README.md)
file in there for more information on what each sequence is.
In the
[`MI_valve_index/extras`](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/extras)
directory you can find the following files:
- [`calibration.json`](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/extras/calibration.json):
Calibration file produced with the
[`basalt_calibrate_imu`](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md?ref_type=heads#camera-imu-mocap-calibration)
tool from
[`MIC01_camcalib1`](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC01_camcalib1.zip)
and
[`MIC04_imucalib1`](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/MIC_calibration/MIC04_imucalib1.zip)
datasets, with the camera-IMU time offset and IMU bias/misalignment info removed so
that it works by default with all the datasets, which are fully
post-processed and don't require those fields.
- [`calibration.extra.json`](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/extras/calibration.extra.json):
Same as `calibration.json` but with the cam-IMU time offset and IMU bias and
misalignment information filled in.
- [`factory.json`](https://huggingface.co/datasets/collabora/monado-slam-datasets/blob/main/M_monado_datasets/MI_valve_index/extras/factory.json):
JSON file exposed by the headset's firmware with information about the device. It
includes camera and display calibration as well as more data that might be of
interest. It is not used but included for completeness' sake.
- [`other_calibrations/`](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/extras/other_calibrations):
Calibration results obtained from the other calibration datasets. Shown for
comparison and ensuring that all of them have similar values.
`MICXX_camcalibY` has camera-only calibration produced with the
[`basalt_calibrate`](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md?ref_type=heads#camera-calibration)
tool, while the corresponding `MICXX_imucalibY` datasets use these datasets as
a starting point and have the `basalt_calibrate_imu` calibration results.
##### Camera model
By default, the `calibration.json` file provides parameters `k1`, `k2`, `k3`,
and `k4` for the [Kannala-Brandt camera
model](https://vladyslavusenko.gitlab.io/basalt-headers/classbasalt_1_1KannalaBrandtCamera4.html#a423a4f1255e9971fe298dc6372345681)
with fish-eye distortion (also known as [OpenCV's
fish-eye](https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html#details)).
Calibrations with other camera models might be added later on, otherwise, you
can use the calibration sequences for custom calibrations.
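For reference, projecting a 3D point with this model reduces to evaluating a polynomial in the angle θ between the point and the optical axis. A sketch using the parameter names from `calibration.json` (`fx`, `fy`, `cx`, `cy`, `k1`..`k4`):
```python
import math

def kb4_project(x, y, z, fx, fy, cx, cy, k1, k2, k3, k4):
    """Kannala-Brandt 4 (fish-eye) projection of a 3D point to pixel coords."""
    r = math.hypot(x, y)
    theta = math.atan2(r, z)  # angle from the optical axis
    t2 = theta * theta
    # d(theta) = theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9
    d = theta * (1.0 + t2 * (k1 + t2 * (k2 + t2 * (k3 + t2 * k4))))
    scale = d / r if r > 1e-12 else 1.0 / z  # pinhole limit near the axis
    return fx * scale * x + cx, fy * scale * y + cy
```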
##### IMU model
For the default `calibration.json` where all parameters are zero, you can ignore
any model and just use the measurements present in `imu0/data.csv` directly. If
instead, you want to use the raw measurements from `imu0/data.raw.csv` you will
need to apply the Basalt
[accelerometer](https://vladyslavusenko.gitlab.io/basalt-headers/classbasalt_1_1CalibAccelBias.html#details)
and
[gyroscope](https://vladyslavusenko.gitlab.io/basalt-headers/classbasalt_1_1CalibGyroBias.html#details)
models that use a misalignment-scale correction matrix together with a constant
initial bias. The random walk and white noise parameters were not computed and
reasonable default values are used instead.
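In code, going from `data.raw.csv` to calibrated samples is just a matrix multiply plus a bias term. A sketch assuming the convention corrected = M @ raw - b; verify the exact sign and order against the linked Basalt headers:
```python
import numpy as np

def apply_imu_calibration(raw, M, bias):
    """Correct raw gyro or accel samples with a misalignment-scale matrix.

    raw: (N, 3) samples; M: (3, 3) lower-triangular misalignment-scale
    matrix; bias: (3,) constant bias. The convention used here is an
    assumption: corrected = M @ raw - bias.
    """
    return np.asarray(raw) @ np.asarray(M).T - np.asarray(bias)
```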
#### Post-processing walkthrough
If you are interested in understanding the step-by-step procedure of
post-processing of the dataset, below is a video detailing the procedure for the
[MIPB08] dataset.
[](https://www.youtube.com/watch?v=0PX_6PNwrvQ)
### Sequences
- [MIC_calibration](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIC_calibration):
Calibration sequences that record
[this](https://drive.google.com/file/d/1DqKWgePodCpAKJCd_Bz-hfiEQOSnn_k0)
calibration target from Kalibr with the squares of the target having sides of
  3 cm. Some sequences focus on camera calibration, covering the image planes
  of both stereo cameras, while others focus on IMU calibration, properly
  exciting all six components of the IMU.
- [MIP_playing](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing):
  Datasets in which the user is playing a particular VR game on SteamVR while
  Monado records the data.
- [MIPB_beat_saber](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber):
This contains different songs played at different speeds. The fitbeat song
    is one that requires a lot of head movement, while [MIPB08] is a long 40-minute
dataset with many levels played.
- [MIPP_pistol_whip](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPP_pistol_whip):
This is a shooting and music game, each dataset is a different level/song.
- [MIPT_thrill_of_the_fight](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPT_thrill_of_the_fight):
This is a boxing game.
- [MIO_others](https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIO_others):
  These are other datasets that might be useful. They include play-pretend
  scenarios in which the user is supposed to be playing some particular game,
  some inspection and scanning/mapping of the room, some very short and
  lightweight datasets for quick testing, and some datasets with a lot of
  movement around the environment.
### Evaluation
These are the results of running the
[current](https://gitlab.freedesktop.org/mateosss/basalt/-/commits/release-b67fa7a4?ref_type=tags)
Monado tracker that is based on
[Basalt](https://gitlab.com/VladyslavUsenko/basalt) on the dataset sequences.
| Seq. | Avg. time\* | Avg. feature count | ATE (m) | RTE 100ms (m) \*\* | SDM 0.01m (m/m) \*\*\* |
| :------ | :--------------- | :-------------------- | :---------------- | :---------------------- | :--------------------- |
| MIC01 | 12.24 ± 2.84 | [48 6] ± [72 6] | 0.076 ± 0.049 | 0.016551 ± 0.015004 | 0.7407 ± 0.5757 |
| MIC02 | 12.30 ± 2.60 | [33 7] ± [54 11] | 0.043 ± 0.028 | 0.012375 ± 0.011230 | 0.5788 ± 0.4279 |
| MIC03 | 15.89 ± 8.55 | [60 8] ± [107 13] | 0.048 ± 0.032 | 0.011344 ± 0.009992 | 0.6020 ± 0.3987 |
| MIC04 | 15.26 ± 2.84 | [65 9] ± [54 11] | 0.028 ± 0.016 | 0.005458 ± 0.003976 | 0.2808 ± 0.2033 |
| MIC05 | 16.10 ± 2.82 | [73 5] ± [69 6] | 0.023 ± 0.013 | 0.004795 ± 0.003358 | 0.2547 ± 0.1611 |
| MIC06 | 14.14 ± 2.42 | [40 7] ± [53 10] | 0.015 ± 0.005 | 0.003947 ± 0.003454 | 0.2875 ± 0.2542 |
| MIC07 | 13.42 ± 2.63 | [46 9] ± [64 12] | 0.036 ± 0.014 | 0.012776 ± 0.011853 | 0.5520 ± 0.3463 |
| MIC08 | 13.89 ± 2.86 | [53 5] ± [62 5] | 0.082 ± 0.062 | 0.022429 ± 0.020956 | 0.8559 ± 0.6402 |
| MIC09 | 12.73 ± 2.52 | [63 21] ± [37 12] | 0.008 ± 0.003 | 0.001492 ± 0.001318 | 0.2388 ± 0.3589 |
| MIC10 | 14.49 ± 2.51 | [50 5] ± [51 5] | 0.019 ± 0.012 | 0.003783 ± 0.003116 | 0.2666 ± 0.3451 |
| MIC11 | 13.72 ± 2.37 | [26 6] ± [39 7] | 0.017 ± 0.010 | 0.009898 ± 0.009069 | 0.4331 ± 0.3278 |
| MIC12 | 14.92 ± 2.56 | [38 4] ± [48 5] | 0.024 ± 0.010 | 0.005816 ± 0.004644 | 0.2932 ± 0.2500 |
| MIC13 | 13.99 ± 3.07 | [53 10] ± [79 15] | 0.029 ± 0.021 | 0.015463 ± 0.014354 | 0.8668 ± 0.9353 |
| MIC14 | 13.67 ± 2.39 | [24 5] ± [36 8] | 0.047 ± 0.012 | 0.007224 ± 0.006359 | 0.4577 ± 0.3446 |
| MIC15 | 14.17 ± 2.81 | [76 17] ± [43 9] | 0.016 ± 0.013 | 0.003837 ± 0.003543 | 0.2593 ± 0.1936 |
| MIC16 | 14.27 ± 2.43 | [48 8] ± [44 6] | 0.008 ± 0.005 | 0.003867 ± 0.003725 | 0.5167 ± 0.4840 |
| MIO01 | 10.04 ± 1.43 | [36 23] ± [28 18] | 0.605 ± 0.342 | 0.035671 ± 0.033611 | 0.4246 ± 0.5161 |
| MIO02 | 10.41 ± 1.48 | [32 18] ± [25 16] | 1.182 ± 0.623 | 0.063340 ± 0.059176 | 0.4681 ± 0.4329 |
| MIO03 | 10.24 ± 1.37 | [47 26] ± [26 16] | 0.087 ± 0.033 | 0.006293 ± 0.004259 | 0.2113 ± 0.2649 |
| MIO04 | 9.47 ± 1.08 | [27 16] ± [25 16] | 0.210 ± 0.100 | 0.013121 ± 0.010350 | 0.3086 ± 0.3715 |
| MIO05 | 9.95 ± 1.01 | [66 34] ± [33 21] | 0.040 ± 0.016 | 0.003188 ± 0.002192 | 0.1079 ± 0.1521 |
| MIO06 | 9.65 ± 1.06 | [44 28] ± [33 22] | 0.049 ± 0.019 | 0.010454 ± 0.008578 | 0.2620 ± 0.3684 |
| MIO07 | 9.63 ± 1.16 | [46 26] ± [30 19] | 0.019 ± 0.008 | 0.002442 ± 0.001355 | 0.0738 ± 0.0603 |
| MIO08 | 9.74 ± 0.87 | [29 22] ± [18 16] | 0.059 ± 0.021 | 0.007167 ± 0.004657 | 0.1644 ± 0.3433 |
| MIO09 | 9.94 ± 0.72 | [44 29] ± [14 8] | 0.006 ± 0.003 | 0.002940 ± 0.002024 | 0.0330 ± 0.0069 |
| MIO10 | 9.48 ± 0.82 | [35 21] ± [18 10] | 0.016 ± 0.009 | 0.004623 ± 0.003310 | 0.0620 ± 0.0340 |
| MIO11 | 9.34 ± 0.79 | [32 20] ± [19 10] | 0.024 ± 0.010 | 0.007255 ± 0.004821 | 0.0854 ± 0.0540 |
| MIO12 | 11.05 ± 2.20 | [43 23] ± [31 19] | 0.420 ± 0.160 | 0.005298 ± 0.003603 | 0.1546 ± 0.2641 |
| MIO13 | 10.47 ± 1.89 | [35 21] ± [24 18] | 0.665 ± 0.290 | 0.026294 ± 0.022790 | 1.0180 ± 1.0126 |
| MIO14 | 9.27 ± 1.03 | [49 31] ± [30 21] | 0.072 ± 0.028 | 0.002779 ± 0.002487 | 0.1657 ± 0.2409 |
| MIO15 | 9.75 ± 1.16 | [52 26] ± [29 16] | 0.788 ± 0.399 | 0.011558 ± 0.010541 | 0.6906 ± 0.6876 |
| MIO16 | 9.72 ± 1.26 | [33 17] ± [25 15] | 0.517 ± 0.135 | 0.013268 ± 0.011355 | 0.4397 ± 0.7167 |
| MIPB01 | 10.28 ± 1.25 | [63 46] ± [34 24] | 0.282 ± 0.109 | 0.006797 ± 0.004551 | 0.1401 ± 0.1229 |
| MIPB02 | 9.88 ± 1.08 | [55 37] ± [30 20] | 0.247 ± 0.097 | 0.005065 ± 0.003514 | 0.1358 ± 0.1389 |
| MIPB03 | 10.21 ± 1.12 | [66 44] ± [32 23] | 0.186 ± 0.103 | 0.005938 ± 0.004261 | 0.1978 ± 0.3590 |
| MIPB04 | 9.58 ± 1.02 | [51 37] ± [24 17] | 0.105 ± 0.060 | 0.004822 ± 0.003428 | 0.0652 ± 0.0555 |
| MIPB05 | 9.97 ± 0.97 | [73 48] ± [32 23] | 0.039 ± 0.017 | 0.004426 ± 0.002828 | 0.0826 ± 0.1313 |
| MIPB06 | 9.95 ± 0.85 | [58 35] ± [32 21] | 0.050 ± 0.022 | 0.004164 ± 0.002638 | 0.0549 ± 0.0720 |
| MIPB07 | 10.07 ± 1.00 | [73 47] ± [31 20] | 0.064 ± 0.038 | 0.004984 ± 0.003170 | 0.0785 ± 0.1411 |
| MIPB08 | 9.97 ± 1.08 | [71 47] ± [36 24] | 0.636 ± 0.272 | 0.004066 ± 0.002556 | 0.0740 ± 0.0897 |
| MIPP01 | 10.03 ± 1.21 | [36 22] ± [21 15] | 0.559 ± 0.241 | 0.009227 ± 0.007765 | 0.3472 ± 0.9075 |
| MIPP02 | 10.19 ± 1.20 | [42 22] ± [22 15] | 0.257 ± 0.083 | 0.011046 ± 0.010201 | 0.5014 ± 0.7665 |
| MIPP03 | 10.13 ± 1.24 | [37 20] ± [23 15] | 0.260 ± 0.101 | 0.008636 ± 0.007166 | 0.3205 ± 0.5786 |
| MIPP04 | 9.74 ± 1.09 | [38 23] ± [22 16] | 0.256 ± 0.144 | 0.007847 ± 0.006743 | 0.2586 ± 0.4557 |
| MIPP05 | 9.71 ± 0.84 | [37 24] ± [21 15] | 0.193 ± 0.086 | 0.005606 ± 0.004400 | 0.1670 ± 0.2398 |
| MIPP06 | 9.92 ± 3.11 | [37 21] ± [21 14] | 0.294 ± 0.136 | 0.009794 ± 0.008873 | 0.4016 ± 0.5648 |
| MIPT01 | 10.78 ± 2.06 | [68 44] ± [33 23] | 0.108 ± 0.060 | 0.003995 ± 0.002716 | 0.7109 ± 13.3461 |
| MIPT02 | 10.85 ± 1.27 | [79 54] ± [39 28] | 0.198 ± 0.109 | 0.003709 ± 0.002348 | 0.0839 ± 0.1175 |
| MIPT03 | 10.80 ± 1.55 | [76 52] ± [42 30] | 0.401 ± 0.206 | 0.005623 ± 0.003694 | 0.1363 ± 0.1789 |
| **AVG** | **11.33 ± 1.83** | **[49 23] ± [37 15]** | **0.192 ± 0.090** | **0.009439 ± 0.007998** | **0.3247 ± 0.6130** |
- \*: Average frame time. On an AMD Ryzen 7 5800X CPU. Run with pipeline fully
saturated. Real time operation frame times should be slightly lower.
- \*\*: RTE using delta of 6 frames (11ms)
- \*\*\*: The SDM metric is similar to RTE, it represents distance in meters
drifted for each meter of the dataset. The metric is implemented in the
[xrtslam-metrics](https://gitlab.freedesktop.org/mateosss/xrtslam-metrics)
project.
## License
This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a>
[post-processing-video]: https://youtu.be/0PX_6PNwrvQ
[MIPB08]: https://huggingface.co/datasets/collabora/monado-slam-datasets/tree/main/M_monado_datasets/MI_valve_index/MIP_playing/MIPB_beat_saber
|
collabora/monado-slam-datasets
|
[
"license:cc-by-4.0",
"doi:10.57967/hf/1081",
"region:us"
] |
2023-08-17T00:15:14+00:00
|
{"license": "cc-by-4.0"}
|
2023-09-08T14:24:43+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #doi-10.57967/hf/1081 #region-us
|

<a href="URL target="\_blank">
<source
src="URL
type="video/webm"/>Video tag not supported.
Monado SLAM Datasets
====================
The Monado SLAM datasets
(MSD), are
egocentric visual-inertial SLAM datasets recorded to improve the
Basalt-based inside-out tracking
component of the Monado project. These have a permissive
license CC-BY 4.0, meaning you
can use them for any purpose you want, including commercial, and only a mention
of the original project is required. The creation of these datasets was
supported by Collabora
Monado is an open-source OpenXR runtime that you can use to make devices OpenXR
compatible. It also provides drivers for different existing hardware thanks to
different contributors in the community creating drivers for it. Monado provides
different XR-related modules that these drivers can use. To be more specific,
inside-out head tracking is one of those modules and, while you can use
different tracking systems, the main system is a fork of
Basalt. Creating a good
open-source tracking solution requires a solid measurement pipeline to
understand how changes in the system affect tracking quality. For this reason,
the creation of these datasets was essential.
These datasets are very specific to the XR use case as they contain VI-SLAM
footage recorded from devices such as VR headsets, but other devices like phones
or AR glasses might be added in the future. These were made since current SLAM
datasets like EuRoC or TUM-VI were not specific enough for XR, or they didn't
have permissively enough usage licenses.
For questions or comments, you can use the Hugging Face
Community,
join Monado's discord server and ask in the
'#slam' channel, or send an email to [URL@URL](mailto:URL@URL).
List of sequences
-----------------
* MI\_valve\_index
+ MIC\_calibration
- MIC01\_camcalib1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC02\_camcalib2: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC03\_camcalib3: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC04\_imucalib1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC05\_imucalib2: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC06\_imucalib3: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC07\_camcalib4: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC08\_camcalib5: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC09\_imucalib4: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC10\_imucalib5: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC11\_camcalib6: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC12\_imucalib6: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC13\_camcalib7: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC14\_camcalib8: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC15\_imucalib7: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIC16\_imucalib8: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
+ MIO\_others
- MIO01\_hand\_puncher\_1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO02\_hand\_puncher\_2: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO03\_hand\_shooter\_easy: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO04\_hand\_shooter\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO05\_inspect\_easy: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO06\_inspect\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO07\_mapping\_easy: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO08\_mapping\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO09\_short\_1\_updown: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO10\_short\_2\_panorama: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO11\_short\_3\_backandforth: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO12\_moving\_screens: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO13\_moving\_person: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO14\_moving\_props: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO15\_moving\_person\_props: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIO16\_moving\_screens\_person\_props: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
+ MIP\_playing
- MIPB\_beat\_saber
* MIPB01\_beatsaber\_100bills\_360\_normal: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB02\_beatsaber\_crabrave\_360\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB03\_beatsaber\_countryrounds\_360\_expert: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB04\_beatsaber\_fitbeat\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB05\_beatsaber\_fitbeat\_360\_expert: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB06\_beatsaber\_fitbeat\_expertplus\_1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB07\_beatsaber\_fitbeat\_expertplus\_2: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPB08\_beatsaber\_long\_session\_1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIPP\_pistol\_whip
* MIPP01\_pistolwhip\_blackmagic\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPP02\_pistolwhip\_lilith\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPP03\_pistolwhip\_requiem\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPP04\_pistolwhip\_revelations\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPP05\_pistolwhip\_thefall\_hard\_2pistols: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPP06\_pistolwhip\_thegrave\_hard: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
- MIPT\_thrill\_of\_the\_fight
* MIPT01\_thrillofthefight\_setup: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPT02\_thrillofthefight\_fight\_1: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
* MIPT03\_thrillofthefight\_fight\_2: Preview 5x<source src="URL type="video/webm"/>Video tag not supported.
Valve Index datasets
--------------------
These datasets were recorded using a Valve Index with the 'vive' driver in
Monado and they have ground truth from 3 lighthouses tracking the headset through
the proprietary OpenVR implementation provided by SteamVR. The exact commit used
in Monado at the time of recording is
a4e7765d.
The datasets are in the ASL dataset format, the same as the EuRoC
datasets.
Besides the main EuRoC format files, we provide some extra files with raw
timestamp data for exploring real time timestamp alignment techniques.
The dataset is post-processed to reduce as much as possible special treatment
from SLAM systems: camera-IMU and ground truth-IMU timestamp alignment, IMU
alignment and bias calibration have been applied, lighthouse tracked pose has
been converted to IMU pose, and so on. Most of the post-processing was done with
Basalt
calibration
and
alignment
tools, as well as the
xrtslam-metrics
scripts for Monado tracking. The post-processing process is documented in [this
video](URL) which goes through making the [MIPB08](URL) dataset ready
for use starting from its raw version.
### Data
#### Camera samples
In the 'vive' driver from Monado, we don't have direct access to the camera
device timestamps but only to V4L2 timestamps. These are not exactly hardware
timestamps and have some offset with respect to the device clock in which the
IMU samples are timestamped.
The camera frames can be found in the 'camX/data' directory as PNG files with
names corresponding to their V4L2 timestamps. The 'camX/URL' file contains
aligned timestamps of each frame. The 'camX/URL' also contains the
original V4L2 timestamp and the "host timestamp" which is the time at which the
host computer had the frame ready to use after USB transmission. By separating
arrival time and exposure time algorithms can be made to be more robust for
real time operation.
The cameras of the Valve Index have global shutters with a resolution of 960×960
streaming at 54fps. They have auto exposure enabled. While the cameras of the
Index are RGB you will find only grayscale images in these datasets. The
original images are provided in YUYV422 format but only the luma component is
stored.
For each dataset, the camera timestamps are aligned with respect to IMU
timestamps by running visual-only odometry with Basalt on a 30-second subset of
the dataset. The resulting trajectory is then aligned with the
'basalt\_time\_alignment'
tool that aligns the rotational velocities of the trajectory with the gyroscope
samples and returns the resulting offset in nanoseconds. That correction is then
applied to the dataset. Refer to the post-processing walkthrough
[video](URL) for more details.
#### IMU samples
The IMU timestamps are device timestamps, they come at about 1000Hz. We provide
an 'imu0/URL' file that contains the raw measurements without any axis
scale misalignment o bias correction. 'imu0/URL' has the
scale misalignment and bias corrections applied so that the SLAM system can
ignore those corrections. 'imu0/URL' contains the arrival time of the
IMU sample to the host computer for algorithms that want to adapt themselves to
work in real time.
#### Ground truth information
The ground truth setup consists of three lighthouses 2.0 base stations and a
SteamVR session providing tracking data through the OpenVR API to Monado. While
not as precise as other MoCap tracking systems like OptiTrack or Vicon it
should still provide pretty good accuracy and precision close to the 1mm range.
There are different attempts at studying the accuracy of SteamVR tracking that
you can check out like
this,
this,
or this. When a tracking system gets closer to
millimeter accuracy these datasets will no longer be as useful for improving it.
The raw ground truth data is stored in 'gt/URL'. OpenVR does not provide
timestamps and as such, the timestamps recorded are from when the host asks
OpenVR for the latest pose with a call to
'GetDeviceToAbsoluteTrackingPose'.
The poses contained in this file are not of the IMU but of the headset origin as
interpreted by SteamVR, which usually is between the middle of the eyes and
facing towards the displays. The file 'gt/URL' corrects each entry of the
previous file with timestamps aligned with the IMU clock and poses of the IMU
instead of this headset origin.
#### Calibration
There are multiple calibration datasets in the
'MIC\_calibration'
directory. There are camera-focused and IMU-focused calibration datasets. See
the
URL
file in there for more information on what each sequence is.
In the
'MI\_valve\_index/extras'
directory you can find the following files:
* 'URL':
Calibration file produced with the
'basalt\_calibrate\_imu'
tool from
'MIC01\_camcalib1'
and
'MIC04\_imucalib1'
datasets with camera-IMU time offset and IMU bias/misalignment info removed so
that it works with the fully the all the datasets by default which are fully
post-processed and don't require those fields.
* 'URL':
Same as 'URL' but with the cam-IMU time offset and IMU bias and
misalignment information filled in.
* 'URL':
JSON file exposed by the headset's firmware with information of the device. It
includes camera and display calibration as well as more data that might be of
interest. It is not used but included for completeness' sake.
* 'other\_calibrations/':
Calibration results obtained from the other calibration datasets. Shown for
comparison and ensuring that all of them have similar values.
'MICXX\_camcalibY' has camera-only calibration produced with the
'basalt\_calibrate'
tool, while the corresponding 'MICXX\_imucalibY' datasets use these datasets as
a starting point and have the 'basalt\_calibrate\_imu' calibration results.
##### Camera model
By default, the 'URL' file provides parameters 'k1', 'k2', 'k3',
and 'k4' for the Kannala-Brandt camera
model
with fish-eye distortion (also known as OpenCV's
fish-eye).
Calibrations with other camera models might be added later on, otherwise, you
can use the calibration sequences for custom calibrations.
##### IMU model
For the default 'URL' where all parameters are zero, you can ignore
any model and just use the measurements present in 'imu0/URL' directly. If
instead, you want to use the raw measurements from 'imu0/URL' you will
need to apply the Basalt
accelerometer
and
gyroscope
models that use a misalignment-scale correction matrix together with a constant
initial bias. The random walk and white noise parameters were not computed and
default reasonable values are used instead.
#### Post-processing walkthrough
If you are interested in understanding the step-by-step procedure of
post-processing of the dataset, below is a video detailing the procedure for the
[MIPB08](URL) dataset.
 is a long 40min
dataset with many levels played.
+ MIPP\_pistol\_whip:
This is a shooting and music game, each dataset is a different level/song.
+ MIPT\_thrill\_of\_the\_fight:
This is a boxing game.
* MIO\_others:
These are other datasets that might be useful, they include play-pretend
scenarios in which the user is supposed to be playing some particular game,
then there is some inspection and scanning/mapping of the room, some very
short and lightweight datasets for quick testing, and some datasets with a lot
of movement around the environment.
### Evaluation
These are the results of running the
current
Monado tracker that is based on
Basalt on the dataset sequences.
* \*: Average frame time. On an AMD Ryzen 7 5800X CPU. Run with pipeline fully
saturated. Real time operation frame times should be slightly lower.
* \*\*: RTE using delta of 6 frames (11ms)
* \*\*\*: The SDM metric is similar to RTE, it represents distance in meters
drifted for each meter of the dataset. The metric is implemented in the
xrtslam-metrics
project.
License
-------
This work is licensed under a <a rel="license" href="URL Commons Attribution 4.0 International License.
<a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL />
|
[
"### Data",
"#### Camera samples\n\n\nIn the 'vive' driver from Monado, we don't have direct access to the camera\ndevice timestamps but only to V4L2 timestamps. These are not exactly hardware\ntimestamps and have some offset with respect to the device clock in which the\nIMU samples are timestamped.\n\n\nThe camera frames can be found in the 'camX/data' directory as PNG files with\nnames corresponding to their V4L2 timestamps. The 'camX/URL' file contains\naligned timestamps of each frame. The 'camX/URL' also contains the\noriginal V4L2 timestamp and the \"host timestamp\" which is the time at which the\nhost computer had the frame ready to use after USB transmission. By separating\narrival time and exposure time algorithms can be made to be more robust for\nreal time operation.\n\n\nThe cameras of the Valve Index have global shutters with a resolution of 960×960\nstreaming at 54fps. They have auto exposure enabled. While the cameras of the\nIndex are RGB you will find only grayscale images in these datasets. The\noriginal images are provided in YUYV422 format but only the luma component is\nstored.\n\n\nFor each dataset, the camera timestamps are aligned with respect to IMU\ntimestamps by running visual-only odometry with Basalt on a 30-second subset of\nthe dataset. The resulting trajectory is then aligned with the\n'basalt\\_time\\_alignment'\ntool that aligns the rotational velocities of the trajectory with the gyroscope\nsamples and returns the resulting offset in nanoseconds. That correction is then\napplied to the dataset. Refer to the post-processing walkthrough\n[video](URL) for more details.",
"#### IMU samples\n\n\nThe IMU timestamps are device timestamps, they come at about 1000Hz. We provide\nan 'imu0/URL' file that contains the raw measurements without any axis\nscale misalignment o bias correction. 'imu0/URL' has the\nscale misalignment and bias corrections applied so that the SLAM system can\nignore those corrections. 'imu0/URL' contains the arrival time of the\nIMU sample to the host computer for algorithms that want to adapt themselves to\nwork in real time.",
"#### Ground truth information\n\n\nThe ground truth setup consists of three lighthouses 2.0 base stations and a\nSteamVR session providing tracking data through the OpenVR API to Monado. While\nnot as precise as other MoCap tracking systems like OptiTrack or Vicon it\nshould still provide pretty good accuracy and precision close to the 1mm range.\nThere are different attempts at studying the accuracy of SteamVR tracking that\nyou can check out like\nthis,\nthis,\nor this. When a tracking system gets closer to\nmillimeter accuracy these datasets will no longer be as useful for improving it.\n\n\nThe raw ground truth data is stored in 'gt/URL'. OpenVR does not provide\ntimestamps and as such, the timestamps recorded are from when the host asks\nOpenVR for the latest pose with a call to\n'GetDeviceToAbsoluteTrackingPose'.\nThe poses contained in this file are not of the IMU but of the headset origin as\ninterpreted by SteamVR, which usually is between the middle of the eyes and\nfacing towards the displays. The file 'gt/URL' corrects each entry of the\nprevious file with timestamps aligned with the IMU clock and poses of the IMU\ninstead of this headset origin.",
"#### Calibration\n\n\nThere are multiple calibration datasets in the\n'MIC\\_calibration'\ndirectory. There are camera-focused and IMU-focused calibration datasets. See\nthe\nURL\nfile in there for more information on what each sequence is.\n\n\nIn the\n'MI\\_valve\\_index/extras'\ndirectory you can find the following files:\n\n\n* 'URL':\nCalibration file produced with the\n'basalt\\_calibrate\\_imu'\ntool from\n'MIC01\\_camcalib1'\nand\n'MIC04\\_imucalib1'\ndatasets with camera-IMU time offset and IMU bias/misalignment info removed so\nthat it works with the fully the all the datasets by default which are fully\npost-processed and don't require those fields.\n* 'URL':\nSame as 'URL' but with the cam-IMU time offset and IMU bias and\nmisalignment information filled in.\n* 'URL':\nJSON file exposed by the headset's firmware with information of the device. It\nincludes camera and display calibration as well as more data that might be of\ninterest. It is not used but included for completeness' sake.\n* 'other\\_calibrations/':\nCalibration results obtained from the other calibration datasets. Shown for\ncomparison and ensuring that all of them have similar values.\n'MICXX\\_camcalibY' has camera-only calibration produced with the\n'basalt\\_calibrate'\ntool, while the corresponding 'MICXX\\_imucalibY' datasets use these datasets as\na starting point and have the 'basalt\\_calibrate\\_imu' calibration results.",
"##### Camera model\n\n\nBy default, the 'URL' file provides parameters 'k1', 'k2', 'k3',\nand 'k4' for the Kannala-Brandt camera\nmodel\nwith fish-eye distortion (also known as OpenCV's\nfish-eye).\n\n\nCalibrations with other camera models might be added later on, otherwise, you\ncan use the calibration sequences for custom calibrations.",
"##### IMU model\n\n\nFor the default 'URL' where all parameters are zero, you can ignore\nany model and just use the measurements present in 'imu0/URL' directly. If\ninstead, you want to use the raw measurements from 'imu0/URL' you will\nneed to apply the Basalt\naccelerometer\nand\ngyroscope\nmodels that use a misalignment-scale correction matrix together with a constant\ninitial bias. The random walk and white noise parameters were not computed and\ndefault reasonable values are used instead.",
"#### Post-processing walkthrough\n\n\nIf you are interested in understanding the step-by-step procedure of\npost-processing of the dataset, below is a video detailing the procedure for the\n[MIPB08](URL) dataset.\n\n\n is a long 40min\n\tdataset with many levels played.\n\t+ MIPP\\_pistol\\_whip:\n\tThis is a shooting and music game, each dataset is a different level/song.\n\t+ MIPT\\_thrill\\_of\\_the\\_fight:\n\tThis is a boxing game.\n* MIO\\_others:\nThese are other datasets that might be useful, they include play-pretend\nscenarios in which the user is supposed to be playing some particular game,\nthen there is some inspection and scanning/mapping of the room, some very\nshort and lightweight datasets for quick testing, and some datasets with a lot\nof movement around the environment.",
"### Evaluation\n\n\nThese are the results of running the\ncurrent\nMonado tracker that is based on\nBasalt on the dataset sequences.\n\n\n\n* \\*: Average frame time. On an AMD Ryzen 7 5800X CPU. Run with pipeline fully\nsaturated. Real time operation frame times should be slightly lower.\n* \\*\\*: RTE using delta of 6 frames (11ms)\n* \\*\\*\\*: The SDM metric is similar to RTE, it represents distance in meters\ndrifted for each meter of the dataset. The metric is implemented in the\nxrtslam-metrics\nproject.\n\n\nLicense\n-------\n\n\nThis work is licensed under a <a rel=\"license\" href=\"URL Commons Attribution 4.0 International License.\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL />"
] |
[
"TAGS\n#license-cc-by-4.0 #doi-10.57967/hf/1081 #region-us \n",
"### Data",
"#### Camera samples\n\n\nIn the 'vive' driver from Monado, we don't have direct access to the camera\ndevice timestamps but only to V4L2 timestamps. These are not exactly hardware\ntimestamps and have some offset with respect to the device clock in which the\nIMU samples are timestamped.\n\n\nThe camera frames can be found in the 'camX/data' directory as PNG files with\nnames corresponding to their V4L2 timestamps. The 'camX/URL' file contains\naligned timestamps of each frame. The 'camX/URL' also contains the\noriginal V4L2 timestamp and the \"host timestamp\" which is the time at which the\nhost computer had the frame ready to use after USB transmission. By separating\narrival time and exposure time algorithms can be made to be more robust for\nreal time operation.\n\n\nThe cameras of the Valve Index have global shutters with a resolution of 960×960\nstreaming at 54fps. They have auto exposure enabled. While the cameras of the\nIndex are RGB you will find only grayscale images in these datasets. The\noriginal images are provided in YUYV422 format but only the luma component is\nstored.\n\n\nFor each dataset, the camera timestamps are aligned with respect to IMU\ntimestamps by running visual-only odometry with Basalt on a 30-second subset of\nthe dataset. The resulting trajectory is then aligned with the\n'basalt\\_time\\_alignment'\ntool that aligns the rotational velocities of the trajectory with the gyroscope\nsamples and returns the resulting offset in nanoseconds. That correction is then\napplied to the dataset. Refer to the post-processing walkthrough\n[video](URL) for more details.",
"#### IMU samples\n\n\nThe IMU timestamps are device timestamps, they come at about 1000Hz. We provide\nan 'imu0/URL' file that contains the raw measurements without any axis\nscale misalignment o bias correction. 'imu0/URL' has the\nscale misalignment and bias corrections applied so that the SLAM system can\nignore those corrections. 'imu0/URL' contains the arrival time of the\nIMU sample to the host computer for algorithms that want to adapt themselves to\nwork in real time.",
"#### Ground truth information\n\n\nThe ground truth setup consists of three lighthouses 2.0 base stations and a\nSteamVR session providing tracking data through the OpenVR API to Monado. While\nnot as precise as other MoCap tracking systems like OptiTrack or Vicon it\nshould still provide pretty good accuracy and precision close to the 1mm range.\nThere are different attempts at studying the accuracy of SteamVR tracking that\nyou can check out like\nthis,\nthis,\nor this. When a tracking system gets closer to\nmillimeter accuracy these datasets will no longer be as useful for improving it.\n\n\nThe raw ground truth data is stored in 'gt/URL'. OpenVR does not provide\ntimestamps and as such, the timestamps recorded are from when the host asks\nOpenVR for the latest pose with a call to\n'GetDeviceToAbsoluteTrackingPose'.\nThe poses contained in this file are not of the IMU but of the headset origin as\ninterpreted by SteamVR, which usually is between the middle of the eyes and\nfacing towards the displays. The file 'gt/URL' corrects each entry of the\nprevious file with timestamps aligned with the IMU clock and poses of the IMU\ninstead of this headset origin.",
"#### Calibration\n\n\nThere are multiple calibration datasets in the\n'MIC\\_calibration'\ndirectory. There are camera-focused and IMU-focused calibration datasets. See\nthe\nURL\nfile in there for more information on what each sequence is.\n\n\nIn the\n'MI\\_valve\\_index/extras'\ndirectory you can find the following files:\n\n\n* 'URL':\nCalibration file produced with the\n'basalt\\_calibrate\\_imu'\ntool from\n'MIC01\\_camcalib1'\nand\n'MIC04\\_imucalib1'\ndatasets with camera-IMU time offset and IMU bias/misalignment info removed so\nthat it works with the fully the all the datasets by default which are fully\npost-processed and don't require those fields.\n* 'URL':\nSame as 'URL' but with the cam-IMU time offset and IMU bias and\nmisalignment information filled in.\n* 'URL':\nJSON file exposed by the headset's firmware with information of the device. It\nincludes camera and display calibration as well as more data that might be of\ninterest. It is not used but included for completeness' sake.\n* 'other\\_calibrations/':\nCalibration results obtained from the other calibration datasets. Shown for\ncomparison and ensuring that all of them have similar values.\n'MICXX\\_camcalibY' has camera-only calibration produced with the\n'basalt\\_calibrate'\ntool, while the corresponding 'MICXX\\_imucalibY' datasets use these datasets as\na starting point and have the 'basalt\\_calibrate\\_imu' calibration results.",
"##### Camera model\n\n\nBy default, the 'URL' file provides parameters 'k1', 'k2', 'k3',\nand 'k4' for the Kannala-Brandt camera\nmodel\nwith fish-eye distortion (also known as OpenCV's\nfish-eye).\n\n\nCalibrations with other camera models might be added later on, otherwise, you\ncan use the calibration sequences for custom calibrations.",
"##### IMU model\n\n\nFor the default 'URL' where all parameters are zero, you can ignore\nany model and just use the measurements present in 'imu0/URL' directly. If\ninstead, you want to use the raw measurements from 'imu0/URL' you will\nneed to apply the Basalt\naccelerometer\nand\ngyroscope\nmodels that use a misalignment-scale correction matrix together with a constant\ninitial bias. The random walk and white noise parameters were not computed and\ndefault reasonable values are used instead.",
"#### Post-processing walkthrough\n\n\nIf you are interested in understanding the step-by-step procedure of\npost-processing of the dataset, below is a video detailing the procedure for the\n[MIPB08](URL) dataset.\n\n\n is a long 40min\n\tdataset with many levels played.\n\t+ MIPP\\_pistol\\_whip:\n\tThis is a shooting and music game, each dataset is a different level/song.\n\t+ MIPT\\_thrill\\_of\\_the\\_fight:\n\tThis is a boxing game.\n* MIO\\_others:\nThese are other datasets that might be useful, they include play-pretend\nscenarios in which the user is supposed to be playing some particular game,\nthen there is some inspection and scanning/mapping of the room, some very\nshort and lightweight datasets for quick testing, and some datasets with a lot\nof movement around the environment.",
"### Evaluation\n\n\nThese are the results of running the\ncurrent\nMonado tracker that is based on\nBasalt on the dataset sequences.\n\n\n\n* \\*: Average frame time. On an AMD Ryzen 7 5800X CPU. Run with pipeline fully\nsaturated. Real time operation frame times should be slightly lower.\n* \\*\\*: RTE using delta of 6 frames (11ms)\n* \\*\\*\\*: The SDM metric is similar to RTE, it represents distance in meters\ndrifted for each meter of the dataset. The metric is implemented in the\nxrtslam-metrics\nproject.\n\n\nLicense\n-------\n\n\nThis work is licensed under a <a rel=\"license\" href=\"URL Commons Attribution 4.0 International License.\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL />"
] |
[
27,
3,
403,
121,
282,
395,
96,
116,
64,
312,
204
] |
[
"passage: TAGS\n#license-cc-by-4.0 #doi-10.57967/hf/1081 #region-us \n### Data#### Camera samples\n\n\nIn the 'vive' driver from Monado, we don't have direct access to the camera\ndevice timestamps but only to V4L2 timestamps. These are not exactly hardware\ntimestamps and have some offset with respect to the device clock in which the\nIMU samples are timestamped.\n\n\nThe camera frames can be found in the 'camX/data' directory as PNG files with\nnames corresponding to their V4L2 timestamps. The 'camX/URL' file contains\naligned timestamps of each frame. The 'camX/URL' also contains the\noriginal V4L2 timestamp and the \"host timestamp\" which is the time at which the\nhost computer had the frame ready to use after USB transmission. By separating\narrival time and exposure time algorithms can be made to be more robust for\nreal time operation.\n\n\nThe cameras of the Valve Index have global shutters with a resolution of 960×960\nstreaming at 54fps. They have auto exposure enabled. While the cameras of the\nIndex are RGB you will find only grayscale images in these datasets. The\noriginal images are provided in YUYV422 format but only the luma component is\nstored.\n\n\nFor each dataset, the camera timestamps are aligned with respect to IMU\ntimestamps by running visual-only odometry with Basalt on a 30-second subset of\nthe dataset. The resulting trajectory is then aligned with the\n'basalt\\_time\\_alignment'\ntool that aligns the rotational velocities of the trajectory with the gyroscope\nsamples and returns the resulting offset in nanoseconds. That correction is then\napplied to the dataset. Refer to the post-processing walkthrough\n[video](URL) for more details.",
"passage: #### IMU samples\n\n\nThe IMU timestamps are device timestamps, they come at about 1000Hz. We provide\nan 'imu0/URL' file that contains the raw measurements without any axis\nscale misalignment o bias correction. 'imu0/URL' has the\nscale misalignment and bias corrections applied so that the SLAM system can\nignore those corrections. 'imu0/URL' contains the arrival time of the\nIMU sample to the host computer for algorithms that want to adapt themselves to\nwork in real time.#### Ground truth information\n\n\nThe ground truth setup consists of three lighthouses 2.0 base stations and a\nSteamVR session providing tracking data through the OpenVR API to Monado. While\nnot as precise as other MoCap tracking systems like OptiTrack or Vicon it\nshould still provide pretty good accuracy and precision close to the 1mm range.\nThere are different attempts at studying the accuracy of SteamVR tracking that\nyou can check out like\nthis,\nthis,\nor this. When a tracking system gets closer to\nmillimeter accuracy these datasets will no longer be as useful for improving it.\n\n\nThe raw ground truth data is stored in 'gt/URL'. OpenVR does not provide\ntimestamps and as such, the timestamps recorded are from when the host asks\nOpenVR for the latest pose with a call to\n'GetDeviceToAbsoluteTrackingPose'.\nThe poses contained in this file are not of the IMU but of the headset origin as\ninterpreted by SteamVR, which usually is between the middle of the eyes and\nfacing towards the displays. The file 'gt/URL' corrects each entry of the\nprevious file with timestamps aligned with the IMU clock and poses of the IMU\ninstead of this headset origin.",
"passage: #### Calibration\n\n\nThere are multiple calibration datasets in the\n'MIC\\_calibration'\ndirectory. There are camera-focused and IMU-focused calibration datasets. See\nthe\nURL\nfile in there for more information on what each sequence is.\n\n\nIn the\n'MI\\_valve\\_index/extras'\ndirectory you can find the following files:\n\n\n* 'URL':\nCalibration file produced with the\n'basalt\\_calibrate\\_imu'\ntool from\n'MIC01\\_camcalib1'\nand\n'MIC04\\_imucalib1'\ndatasets with camera-IMU time offset and IMU bias/misalignment info removed so\nthat it works with the fully the all the datasets by default which are fully\npost-processed and don't require those fields.\n* 'URL':\nSame as 'URL' but with the cam-IMU time offset and IMU bias and\nmisalignment information filled in.\n* 'URL':\nJSON file exposed by the headset's firmware with information of the device. It\nincludes camera and display calibration as well as more data that might be of\ninterest. It is not used but included for completeness' sake.\n* 'other\\_calibrations/':\nCalibration results obtained from the other calibration datasets. Shown for\ncomparison and ensuring that all of them have similar values.\n'MICXX\\_camcalibY' has camera-only calibration produced with the\n'basalt\\_calibrate'\ntool, while the corresponding 'MICXX\\_imucalibY' datasets use these datasets as\na starting point and have the 'basalt\\_calibrate\\_imu' calibration results.##### Camera model\n\n\nBy default, the 'URL' file provides parameters 'k1', 'k2', 'k3',\nand 'k4' for the Kannala-Brandt camera\nmodel\nwith fish-eye distortion (also known as OpenCV's\nfish-eye).\n\n\nCalibrations with other camera models might be added later on, otherwise, you\ncan use the calibration sequences for custom calibrations.##### IMU model\n\n\nFor the default 'URL' where all parameters are zero, you can ignore\nany model and just use the measurements present in 'imu0/URL' directly. If\ninstead, you want to use the raw measurements from 'imu0/URL' you will\nneed to apply the Basalt\naccelerometer\nand\ngyroscope\nmodels that use a misalignment-scale correction matrix together with a constant\ninitial bias. The random walk and white noise parameters were not computed and\ndefault reasonable values are used instead.#### Post-processing walkthrough\n\n\nIf you are interested in understanding the step-by-step procedure of\npost-processing of the dataset, below is a video detailing the procedure for the\n[MIPB08](URL) dataset.\n\n\n
|
thesistranslation/distilled-ccmatrix-de-en
|
[
"language:de",
"language:en",
"region:us"
] |
2023-08-17T00:34:16+00:00
|
{"language": ["de", "en"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "train", "num_bytes": 7314473226, "num_examples": 30000000}], "download_size": 5149999083, "dataset_size": 7314473226}}
|
2023-10-03T08:22:18+00:00
|
[] |
[
"de",
"en"
] |
TAGS
#language-German #language-English #region-us
|
# Dataset Card for "distilled-ccmatrix-de-en"
More Information needed
|
[
"# Dataset Card for \"distilled-ccmatrix-de-en\"\n\nMore Information needed"
] |
[
"TAGS\n#language-German #language-English #region-us \n",
"# Dataset Card for \"distilled-ccmatrix-de-en\"\n\nMore Information needed"
] |
[
14,
21
] |
[
"passage: TAGS\n#language-German #language-English #region-us \n# Dataset Card for \"distilled-ccmatrix-de-en\"\n\nMore Information needed"
] |
19bb44dbc78a6dc08a4bae4f2c1b54563f99cde5
|
**数据集格式说明: [[glaiveai/glaive-function-calling · Datasets at Hugging Face](https://huggingface.co/datasets/glaiveai/glaive-function-calling)](glaiveai/glaive-function-calling) 的 SFT 格式**
我们高兴地宣布,数据集 "glaiveai/glaive-function-calling" 已经根据 SFT(Supervised Fine-Tuning)的需求进行了格式转换,以支持大型语言模型的训练。以下是有关这一新格式的简要说明:
1. **数据集概述:**
数据集 "glaiveai/glaive-function-calling" 基于 CC-BY-4.0 协议发布,原始数据集包含标识符和对话信息,这些数据已被转换为适应 SFT 训练的结构。
2. **数据格式:**
转换后的数据集格式包含以下关键信息:
- `id`: 整数类型的标识符,用于唯一标识每个数据样本。
- `conversations`: 一个数组,其中包含对话信息。每个对话可以由多个句子组成,以更好地呈现函数调用的上下文。
3. **数据集用途:**
转换后的数据集适用于 SFT 的训练,主要用途包括但不限于:
- 函数调用理解: 通过分析对话中的函数调用信息,让语言模型更好地理解函数之间的关系,从而提高其代码理解能力。
- 上下文感知性: 对话信息能够为模型提供更丰富的上下文,使其更准确地推断和生成代码片段。
- 代码生成与推荐: 基于对话中的函数调用上下文,模型可以更精确地生成代码,并提供更合适的函数建议。
通过将数据集 "glaiveai/glaive-function-calling" 转换为 SFT 格式,我们旨在为大型语言模型的训练提供更适合sft的函数调用数据,以提升其代码理解和生成的性能。
如有任何问题或需要进一步帮助,请随时联系我们。感谢您对函数调用数据集及其应用的兴趣与支持!
|
Deepexi/glaive-function-calling-vicuna
|
[
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-08-17T00:36:45+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"]}
|
2023-08-17T02:15:19+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
|
数据集格式说明: [glaiveai/glaive-function-calling · Datasets at Hugging Face](glaiveai/glaive-function-calling) 的 SFT 格式
我们高兴地宣布,数据集 "glaiveai/glaive-function-calling" 已经根据 SFT(Supervised Fine-Tuning)的需求进行了格式转换,以支持大型语言模型的训练。以下是有关这一新格式的简要说明:
1. 数据集概述:
数据集 "glaiveai/glaive-function-calling" 基于 CC-BY-4.0 协议发布,原始数据集包含标识符和对话信息,这些数据已被转换为适应 SFT 训练的结构。
2. 数据格式:
转换后的数据集格式包含以下关键信息:
- 'id': 整数类型的标识符,用于唯一标识每个数据样本。
- 'conversations': 一个数组,其中包含对话信息。每个对话可以由多个句子组成,以更好地呈现函数调用的上下文。
3. 数据集用途:
转换后的数据集适用于 SFT 的训练,主要用途包括但不限于:
- 函数调用理解: 通过分析对话中的函数调用信息,让语言模型更好地理解函数之间的关系,从而提高其代码理解能力。
- 上下文感知性: 对话信息能够为模型提供更丰富的上下文,使其更准确地推断和生成代码片段。
- 代码生成与推荐: 基于对话中的函数调用上下文,模型可以更精确地生成代码,并提供更合适的函数建议。
通过将数据集 "glaiveai/glaive-function-calling" 转换为 SFT 格式,我们旨在为大型语言模型的训练提供更适合sft的函数调用数据,以提升其代码理解和生成的性能。
如有任何问题或需要进一步帮助,请随时联系我们。感谢您对函数调用数据集及其应用的兴趣与支持!
|
[] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n"
] |
[
31
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n"
] |
021f1521dd3d4921ec35261bd7eb80cd914ed7d0
|
# Dataset Card for "Biorxiv_abstracts_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/Biorxiv_abstracts_large
|
[
"region:us"
] |
2023-08-17T00:46:49+00:00
|
{"dataset_info": {"features": [{"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33615443, "num_examples": 21078}], "download_size": 18750994, "dataset_size": 33615443}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T00:46:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Biorxiv_abstracts_large"
More Information needed
|
[
"# Dataset Card for \"Biorxiv_abstracts_large\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Biorxiv_abstracts_large\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Biorxiv_abstracts_large\"\n\nMore Information needed"
] |
ed5a99b04dd813ebcc000e87db24c300faae3dbf
|
<h1>
<img alt="Alt text" src="./rh-moustouche-hat.jpg" style="display:inline-block; vertical-align:middle" />
Only Connect Wall (OCW) Dataset
</h1>
The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the [Round 3: Connecting Wall](https://en.wikipedia.org/wiki/Only_Connect#Round_3:_Connecting_Wall) segment of the [Only Connect quiz show](https://en.wikipedia.org/wiki/Only_Connect), collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see [our paper](https://arxiv.org/abs/2306.11167) and [GitHub repo](https://github.com/TaatiTeam/OCW) for more details about the dataset and its motivations.
## Usage
```python
# pip install datasets
from datasets import load_dataset
dataset = load_dataset("TaatiTeam/OCW")
# The dataset can be used like any other HuggingFace dataset
# E.g. get the wall_id of the first example in the train set
dataset["train"]["wall_id"][0]
# or get the words of the first 10 examples in the test set
dataset["test"]["words"][0:10]
```
We also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced (`ocw_randomized`) or removed altogether (`ocw_wordnet`) which can be loaded like:
```python
# pip install datasets
from datasets import load_dataset
ocw_randomized = load_dataset("TaatiTeam/OCW", "ocw_randomized")
ocw_wordnet = load_dataset("TaatiTeam/OCW", "ocw_wordnet")
```
See [our paper](https://arxiv.org/abs/2306.11167) for more details.
## 📝 Citing
If you use the Only Connect dataset in your work, please consider citing our paper:
```
@article{Naeini2023LargeLM,
title = {Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset},
author = {Saeid Alavi Naeini and Raeid Saqur and Mozhgan Saeidi and John Giorgi and Babak Taati},
year = 2023,
journal = {ArXiv},
volume = {abs/2306.11167},
url = {https://api.semanticscholar.org/CorpusID:259203717}
}
```
## 🙏 Acknowledgements
We would like the thank the maintainers and contributors of the fan-made and run website [https://ocdb.cc/](https://ocdb.cc/) for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show.
|
TaatiTeam/OCW
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"creative problem solving",
"puzzles",
"fixation effect",
"large language models",
"only connect",
"quiz show",
"connecting walls",
"arxiv:2306.11167",
"region:us"
] |
2023-08-17T00:47:00+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "Only Connect Wall Dataset", "tags": ["creative problem solving", "puzzles", "fixation effect", "large language models", "only connect", "quiz show", "connecting walls"]}
|
2023-11-07T18:59:55+00:00
|
[
"2306.11167"
] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-n<1K #language-English #license-mit #creative problem solving #puzzles #fixation effect #large language models #only connect #quiz show #connecting walls #arxiv-2306.11167 #region-us
|
<h1>
<img alt="Alt text" src="./URL" style="display:inline-block; vertical-align:middle" />
Only Connect Wall (OCW) Dataset
</h1>
The Only Connect Wall (OCW) dataset contains 618 _"Connecting Walls"_ from the Round 3: Connecting Wall segment of the Only Connect quiz show, collected from 15 seasons' worth of episodes. Each wall contains the ground-truth __groups__ and __connections__ as well as recorded human performance. Please see our paper and GitHub repo for more details about the dataset and its motivations.
## Usage
We also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced ('ocw_randomized') or removed altogether ('ocw_wordnet') which can be loaded like:
See our paper for more details.
## Citing
If you use the Only Connect dataset in your work, please consider citing our paper:
## Acknowledgements
We would like the thank the maintainers and contributors of the fan-made and run website URL for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show.
|
[
"## Usage\n\n\n\nWe also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced ('ocw_randomized') or removed altogether ('ocw_wordnet') which can be loaded like:\n\n\n\nSee our paper for more details.",
"## Citing\n\nIf you use the Only Connect dataset in your work, please consider citing our paper:",
"## Acknowledgements\n\nWe would like the thank the maintainers and contributors of the fan-made and run website URL for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show."
] |
[
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-mit #creative problem solving #puzzles #fixation effect #large language models #only connect #quiz show #connecting walls #arxiv-2306.11167 #region-us \n",
"## Usage\n\n\n\nWe also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced ('ocw_randomized') or removed altogether ('ocw_wordnet') which can be loaded like:\n\n\n\nSee our paper for more details.",
"## Citing\n\nIf you use the Only Connect dataset in your work, please consider citing our paper:",
"## Acknowledgements\n\nWe would like the thank the maintainers and contributors of the fan-made and run website URL for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show."
] |
[
76,
64,
22,
63
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-mit #creative problem solving #puzzles #fixation effect #large language models #only connect #quiz show #connecting walls #arxiv-2306.11167 #region-us \n## Usage\n\n\n\nWe also provide two different versions of the dataset where the red herrings in each wall have been significantly reduced ('ocw_randomized') or removed altogether ('ocw_wordnet') which can be loaded like:\n\n\n\nSee our paper for more details.## Citing\n\nIf you use the Only Connect dataset in your work, please consider citing our paper:## Acknowledgements\n\nWe would like the thank the maintainers and contributors of the fan-made and run website URL for providing the data for this dataset. We would also like to thank the creators of the Only Connect quiz show for producing such an entertaining and thought-provoking show."
] |
d4174b1a1712e7d9d6c1a45e00f05fab66ac412e
|
# Dataset of remilia_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou)
This is the dataset of remilia_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are `short_hair, red_eyes, bat_wings, wings, hat, ribbon, mob_cap, blue_hair, hat_ribbon, bangs, hair_between_eyes, red_ribbon, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 779.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 443.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1251 | 935.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 692.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1251 | 1.29 GiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/remilia_scarlet_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, closed_mouth, frilled_shirt_collar, looking_at_viewer, puffy_short_sleeves, red_ascot, solo, wrist_cuffs, brooch, smile, pink_dress, pink_headwear, pink_shirt, purple_hair, red_bow, skirt, blush, cowboy_shot, sash, simple_background, white_background |
| 1 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, puffy_short_sleeves, solo, white_dress, red_ascot, simple_background, upper_body, white_background, white_headwear, wrist_cuffs, brooch, red_bow, blush, frilled_shirt_collar, nail_polish, smile |
| 2 | 8 |  |  |  |  |  | 1girl, brooch, looking_at_viewer, open_mouth, pink_dress, puffy_short_sleeves, solo, fang, pink_headwear, wrist_cuffs, blush, simple_background, smile, upper_body, white_background, frills, grey_background, hands_up, red_ascot, red_bow, v-shaped_eyebrows |
| 3 | 6 |  |  |  |  |  | 1girl, blush, frilled_shirt_collar, pink_dress, red_ascot, solo, brooch, looking_at_viewer, simple_background, white_background, no_headwear, artist_name, puffy_short_sleeves, upper_body |
| 4 | 8 |  |  |  |  |  | 1girl, full_moon, looking_at_viewer, solo, wrist_cuffs, red_moon, bat_(animal), brooch, puffy_short_sleeves, red_ascot, spear_the_gungnir, frilled_shirt_collar, red_bow, dress, skirt_set, frilled_sleeves, holding_weapon, night_sky, open_mouth, pointy_ears, polearm, purple_hair, red_nails, smile, star_(sky), white_headwear, white_skirt |
| 5 | 10 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, wrist_cuffs, ascot, dress, puffy_short_sleeves, skirt_set, moon, sash, spear_the_gungnir |
| 6 | 6 |  |  |  |  |  | 1girl, dress, solo, wrist_cuffs, smile, ascot |
| 7 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, pointy_ears, simple_background, smile, solo, white_background, cowboy_shot, dress, standing, juliet_sleeves, neck_ribbon, shirt, skirt, black_thighhighs, center_frills, closed_mouth, hat_bow, medium_breasts, zettai_ryouiki |
| 8 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, sitting, solo, bare_shoulders, small_breasts, pillow, white_panties, black_thighhighs, canopy_bed, corset, flower, lips, pantyshot, pointy_ears, underwear_only, white_gloves, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | closed_mouth | frilled_shirt_collar | looking_at_viewer | puffy_short_sleeves | red_ascot | solo | wrist_cuffs | brooch | smile | pink_dress | pink_headwear | pink_shirt | purple_hair | red_bow | skirt | blush | cowboy_shot | sash | simple_background | white_background | white_dress | upper_body | white_headwear | nail_polish | open_mouth | fang | frills | grey_background | hands_up | v-shaped_eyebrows | no_headwear | artist_name | full_moon | red_moon | bat_(animal) | spear_the_gungnir | dress | skirt_set | frilled_sleeves | holding_weapon | night_sky | pointy_ears | polearm | red_nails | star_(sky) | white_skirt | ascot | moon | standing | juliet_sleeves | neck_ribbon | shirt | black_thighhighs | center_frills | hat_bow | medium_breasts | zettai_ryouiki | sitting | bare_shoulders | small_breasts | pillow | white_panties | canopy_bed | corset | flower | lips | pantyshot | underwear_only | white_gloves | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------------|:--------------------|:----------------------|:------------|:-------|:--------------|:---------|:--------|:-------------|:----------------|:-------------|:--------------|:----------|:--------|:--------|:--------------|:-------|:--------------------|:-------------------|:--------------|:-------------|:-----------------|:--------------|:-------------|:-------|:---------|:------------------|:-----------|:--------------------|:--------------|:--------------|:------------|:-----------|:---------------|:--------------------|:--------|:------------|:------------------|:-----------------|:------------|:--------------|:----------|:------------|:-------------|:--------------|:--------|:-------|:-----------|:-----------------|:--------------|:--------|:-------------------|:----------------|:----------|:-----------------|:-----------------|:----------|:-----------------|:----------------|:---------|:----------------|:-------------|:---------|:---------|:-------|:------------|:-----------------|:---------------|:-------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | | X | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | X | X | X | X | X | X | X | X | X | | | X | | X | | | X | X | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | X | X | X | | X | | X | | | | | | X | | | X | X | | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | X | X | | | | | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | | | X | X | | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | | X | | | X | | | X | | | | | | X | X | X | | X | X | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/remilia_scarlet_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T00:51:21+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:25:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of remilia\_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou)
========================================================
This is the dataset of remilia\_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are 'short\_hair, red\_eyes, bat\_wings, wings, hat, ribbon, mob\_cap, blue\_hair, hat\_ribbon, bangs, hair\_between\_eyes, red\_ribbon, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
983b10af2f61d9efc76b89e2558c74ae381ae9c7
|
A Chinese dataset of 10k food-delivery reviews.
|
ttxy/sentiment
|
[
"task_categories:text-classification",
"language:code",
"license:bsd",
"sentiment",
"region:us"
] |
2023-08-17T01:05:09+00:00
|
{"language": ["code"], "license": "bsd", "task_categories": ["text-classification"], "pretty_name": "Chinese sentiment analysis dataseet", "tags": ["sentiment"]}
|
2023-08-17T01:15:03+00:00
|
[] |
[
"code"
] |
TAGS
#task_categories-text-classification #language-code #license-bsd #sentiment #region-us
|
A Chinese dataset of 10k food-delivery reviews.
|
[] |
[
"TAGS\n#task_categories-text-classification #language-code #license-bsd #sentiment #region-us \n"
] |
[
30
] |
[
"passage: TAGS\n#task_categories-text-classification #language-code #license-bsd #sentiment #region-us \n"
] |
b4f602fff80b45f3d4cda55898e6e37695c79acf
|
# Dataset of iris (Pokémon)
This is the dataset of iris (Pokémon), containing 500 images and their tags.
The core tags of this character are `dark-skinned_female, dark_skin, long_hair, purple_hair, bangs, big_hair, brown_eyes, very_long_hair, two_side_up, breasts, eyelashes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 426.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 272.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 997 | 509.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 387.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 997 | 679.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iris_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
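The table above lists several pre-packaged variants, but the loading example in the next section only covers the raw package. As a minimal sketch of using one of the other variants (assuming the conventional IMG+TXT layout, i.e. each image sits next to a same-stem `.txt` tag file), the 800px package can be fetched and iterated directly:
```python
import os
import zipfile
from glob import glob
from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/iris_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-stem .txt tag file (assumed IMG+TXT layout)
for ext in ('*.png', '*.jpg'):
    for image_path in sorted(glob(os.path.join(dataset_dir, '**', ext), recursive=True)):
        tag_path = os.path.splitext(image_path)[0] + '.txt'
        if os.path.exists(tag_path):
            with open(tag_path, 'r', encoding='utf-8') as f:
                print(image_path, f.read().strip())
```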
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/iris_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
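Building on the snippet above, a small follow-up sketch (assuming `item.meta['tags']` is a mapping or collection of tag names, as the `print` call suggests) that filters the source down to a subset of interest:
```python
from waifuc.source import LocalSource

# keep only the images whose tags include 'solo'
# (assumption: item.meta['tags'] supports membership tests on tag names)
solo_items = [item for item in LocalSource('dataset_dir') if 'solo' in item.meta['tags']]
print(f'{len(solo_items)} solo images found')
```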
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, solo, open_mouth, waist_bow, dress, smile, blush, crown |
| 1 | 16 |  |  |  |  |  | 1girl, :d, open_mouth, tongue, dress, hair_rings, long_sleeves, tiara, upper_teeth_only, wide_sleeves, looking_at_viewer, bow, blush, sandals, solo, pokemon_(creature), red_eyes, toes, white_footwear, collarbone, spread_fingers |
| 2 | 6 |  |  |  |  |  | 1girl, :d, armlet, black_dress, fake_horns, official_alternate_costume, open_mouth, tongue, twintails, upper_teeth_only, wrist_cuffs, bare_shoulders, black_hairband, claw_pose, hair_rings, hands_up, looking_at_viewer, red_eyes, sleeveless_dress, solo, blush, fake_wings, halloween |
| 3 | 6 |  |  |  |  |  | 1girl, :d, collarbone, fake_horns, fangs, hair_rings, official_alternate_costume, open_mouth, tongue, black_hairband, twintails, wrist_cuffs, armlet, bare_shoulders, black_dress, blush, looking_at_viewer, solo, wings, hands_up, upper_teeth_only |
| 4 | 22 |  |  |  |  |  | 1girl, nipples, nude, blush, solo, open_mouth, collarbone, looking_at_viewer, navel, pussy, small_breasts, tongue, :d, barefoot, light_areolae, shiny_skin, censored, simple_background, white_background |
| 5 | 11 |  |  |  |  |  | 1girl, hetero, nipples, nude, blush, 1boy, sex, small_breasts, vaginal, penis, pussy, solo_focus, pokemon_(creature), red_eyes, uncensored, bestiality, navel, spread_legs, open_mouth, pokephilia, :q, closed_mouth, collarbone, hair_tie, loli, looking_down, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | open_mouth | waist_bow | dress | smile | blush | crown | :d | tongue | hair_rings | long_sleeves | tiara | upper_teeth_only | wide_sleeves | looking_at_viewer | bow | sandals | pokemon_(creature) | red_eyes | toes | white_footwear | collarbone | spread_fingers | armlet | black_dress | fake_horns | official_alternate_costume | twintails | wrist_cuffs | bare_shoulders | black_hairband | claw_pose | hands_up | sleeveless_dress | fake_wings | halloween | fangs | wings | nipples | nude | navel | pussy | small_breasts | barefoot | light_areolae | shiny_skin | censored | simple_background | white_background | hetero | 1boy | sex | vaginal | penis | solo_focus | uncensored | bestiality | spread_legs | pokephilia | :q | closed_mouth | hair_tie | loli | looking_down |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:------------|:--------|:--------|:--------|:--------|:-----|:---------|:-------------|:---------------|:--------|:-------------------|:---------------|:--------------------|:------|:----------|:---------------------|:-----------|:-------|:-----------------|:-------------|:-----------------|:---------|:--------------|:-------------|:-----------------------------|:------------|:--------------|:-----------------|:-----------------|:------------|:-----------|:-------------------|:-------------|:------------|:--------|:--------|:----------|:-------|:--------|:--------|:----------------|:-----------|:----------------|:-------------|:-----------|:--------------------|:-------------------|:---------|:-------|:------|:----------|:--------|:-------------|:-------------|:-------------|:--------------|:-------------|:-----|:---------------|:-----------|:-------|:---------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | X | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | | | | X | | X | X | X | | | X | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | | | X | | X | X | X | | | X | | X | | | | | | | X | | X | X | X | X | X | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 22 |  |  |  |  |  | X | X | X | | | | X | | X | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | X | | | X | X | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/iris_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T01:14:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T15:10:33+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of iris (Pokémon)
=========================
This is the dataset of iris (Pokémon), containing 500 images and their tags.
The core tags of this character are 'dark-skinned\_female, dark\_skin, long\_hair, purple\_hair, bangs, big\_hair, brown\_eyes, very\_long\_hair, two\_side\_up, breasts, eyelashes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
87b466e4337172f896401b84e3c3825941eff076
|
# Dataset Card for "Biorxiv_abstracts_large_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/Biorxiv_abstracts_large_text
|
[
"region:us"
] |
2023-08-17T01:24:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33615443, "num_examples": 21078}], "download_size": 18750798, "dataset_size": 33615443}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T01:24:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Biorxiv_abstracts_large_text"
More Information needed
|
[
"# Dataset Card for \"Biorxiv_abstracts_large_text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Biorxiv_abstracts_large_text\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Biorxiv_abstracts_large_text\"\n\nMore Information needed"
] |
8523e4b2ccc71581007b1973c3e9ec58ae80598a
|
# Dataset of konpaku_youmu/妖夢/콘파쿠요무 (Touhou)
This is the dataset of konpaku_youmu/妖夢/콘파쿠요무 (Touhou), containing 500 images and their tags.
The core tags of this character are `short_hair, hairband, ribbon, black_hairband, hair_ribbon, white_hair, bangs, black_ribbon, blue_eyes, bow, grey_hair, black_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 795.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konpaku_youmu_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 435.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konpaku_youmu_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1281 | 955.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konpaku_youmu_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 721.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konpaku_youmu_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1281 | 1.34 GiB | [Download](https://huggingface.co/datasets/CyberHarem/konpaku_youmu_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
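Before picking one of the packages above for training, it can help to see which tags dominate it. A minimal sketch (assuming the IMG+TXT packages store comma-separated tags in per-image `.txt` files):
```python
import os
import zipfile
from collections import Counter
from glob import glob
from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/konpaku_youmu_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'youmu_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# count tag frequencies across the sidecar .txt files
# (assumption: every .txt file is a comma-separated tag list)
counter = Counter()
for tag_path in glob(os.path.join(dataset_dir, '**', '*.txt'), recursive=True):
    with open(tag_path, 'r', encoding='utf-8') as f:
        counter.update(t.strip() for t in f.read().split(',') if t.strip())
print(counter.most_common(20))
```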
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/konpaku_youmu_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, green_skirt, green_vest, holding_sword, katana, simple_background, solo, white_background, white_shirt, sheath, skirt_set, looking_at_viewer, puffy_short_sleeves, full_body, hitodama, shoes, black_footwear, bowtie, white_socks, closed_mouth |
| 1 | 27 |  |  |  |  |  | 1girl, green_skirt, green_vest, katana, solo, white_shirt, holding_sword, looking_at_viewer, puffy_short_sleeves, cherry_blossoms, collared_shirt, petals, skirt_set, hitodama, black_bowtie, sheath, closed_mouth, frilled_skirt, flower |
| 2 | 5 |  |  |  |  |  | 1girl, black_bowtie, collared_shirt, green_skirt, green_vest, hitodama, holding_sword, katana, looking_at_viewer, simple_background, solo, white_background, white_shirt, closed_mouth, puffy_short_sleeves, blush, blue_nails, nail_polish, unsheathing |
| 3 | 6 |  |  |  |  |  | 1girl, blush, colored_eyelashes, cowboy_shot, green_skirt, green_vest, hitodama, katana, looking_at_viewer, miniskirt, scabbard, solo, white_shirt, black_belt, closed_mouth, collared_shirt, hair_between_eyes, puffy_short_sleeves, standing, holding_sword, open_vest, sheathed, skirt_set, thighs, black_bowtie |
| 4 | 17 |  |  |  |  |  | 1girl, simple_background, solo, white_shirt, collared_shirt, green_vest, looking_at_viewer, puffy_short_sleeves, white_background, blush, black_bowtie, green_skirt, closed_mouth, hitodama, upper_body, smile, open_mouth |
| 5 | 7 |  |  |  |  |  | 1girl, katana, solo, hitodama, ghost, skirt, cherry_blossoms, scabbard, vest |
| 6 | 6 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, obi, solo, alternate_costume, green_kimono, wide_sleeves, floral_print, blush, cowboy_shot, hitodama |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | green_skirt | green_vest | holding_sword | katana | simple_background | solo | white_background | white_shirt | sheath | skirt_set | looking_at_viewer | puffy_short_sleeves | full_body | hitodama | shoes | black_footwear | bowtie | white_socks | closed_mouth | cherry_blossoms | collared_shirt | petals | black_bowtie | frilled_skirt | flower | blush | blue_nails | nail_polish | unsheathing | colored_eyelashes | cowboy_shot | miniskirt | scabbard | black_belt | hair_between_eyes | standing | open_vest | sheathed | thighs | upper_body | smile | open_mouth | ghost | skirt | vest | long_sleeves | obi | alternate_costume | green_kimono | wide_sleeves | floral_print |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:----------------|:---------|:--------------------|:-------|:-------------------|:--------------|:---------|:------------|:--------------------|:----------------------|:------------|:-----------|:--------|:-----------------|:---------|:--------------|:---------------|:------------------|:-----------------|:---------|:---------------|:----------------|:---------|:--------|:-------------|:--------------|:--------------|:--------------------|:--------------|:------------|:-----------|:-------------|:--------------------|:-----------|:------------|:-----------|:---------|:-------------|:--------|:-------------|:--------|:--------|:-------|:---------------|:------|:--------------------|:---------------|:---------------|:---------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 27 |  |  |  |  |  | X | X | X | X | X | | X | | X | X | X | X | X | | X | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | X | X | | X | | | | | X | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | X | | X | | X | | X | X | X | | X | | | | | X | | X | | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | X | X | | | X | X | X | X | | | X | X | | X | | | | | X | | X | | X | | | X | | | | | | | | | | | | | | X | X | X | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | X | | X | | | | | | | | X | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | X | | | | | X | | | X | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
CyberHarem/konpaku_youmu_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T01:31:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:11:38+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of konpaku\_youmu/妖夢/콘파쿠요무 (Touhou)
===========================================
This is the dataset of konpaku\_youmu/妖夢/콘파쿠요무 (Touhou), containing 500 images and their tags.
The core tags of this character are 'short\_hair, hairband, ribbon, black\_hairband, hair\_ribbon, white\_hair, bangs, black\_ribbon, blue\_eyes, bow, grey\_hair, black\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
3c52309663f24d3475a78b20244b0930c39514ae
|
# EverythingLM V2 Dataset
**EverythingLM V2** is a diverse instruct dataset consisting of 1k human-assistant conversations. The conversations were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences from V1:
- All data in V2 is generated by GPT4
- Higher quality dataset generation pipeline:
- More humanlike seed prompts
- Fixed some bugs in the script
- More diverse creative writing
- More diverse seed prompts in general
- Attempt not to overfit the model on complex instructions by occasionally skipping evol
### Cost:
Reproducing this dataset would cost roughly $40.
### Instruction Categories:
- Reasoning
- Creative Writing
- General Knowledge
- Brainstorming
- Search Query
- Coding
- Basic Instruct
We also leverage various system prompts for evol-instruct and for responding to prompts.
This dataset has also been filtered to remove OpenAI alignment.
### How it stands out:
- Long, detailed outputs
- Humanlike creativity
- CoT reasoning
- Complex & challenging tasks
### Plans:
- Train Llama 7b & 13b models (13b model V1 trained)
- Train Llama 70b QLoRA
- Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✓
Included in this repo is the script to generate the dataset.
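A minimal loading sketch via the `datasets` library (the card does not document the split or column names, so both are assumptions to verify after loading):
```python
from datasets import load_dataset

# load straight from the Hub; the 'train' split and column layout are assumptions
ds = load_dataset("totally-not-an-llm/EverythingLM-data-V2", split="train")
print(ds)     # inspect the features and row count
print(ds[0])  # look at one human-assistant conversation
```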
|
totally-not-an-llm/EverythingLM-data-V2
|
[
"license:mit",
"region:us"
] |
2023-08-17T01:44:42+00:00
|
{"license": "mit"}
|
2023-08-18T15:45:39+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# EverythingLM V2 Dataset
EverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. The conversations were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences from V1:
- All data in V2 is generated by GPT4
- Higher quality dataset generation pipeline:
- More humanlike seed prompts
- Fixed some bugs in the script
- More diverse creative writing
- More diverse seed prompts in general
- Attempt not to overfit the model on complex instructions by occasionally skipping evol
### Cost:
Reproducing this dataset would cost roughly $40.
### Instruction Categories:
- Reasoning
- Creative Writing
- General Knowledge
- Brainstorming
- Search Query
- Coding
- Basic Instruct
We also leverage various system prompts for evol-instruct and for responding to prompts.
This dataset has also been filtered to remove OpenAI alignment.
### How it stands out:
- Long, detailed outputs
- Humanlike creativity
- CoT reasoning
- Complex & challenging tasks
### Plans:
- Train Llama 7b & 13b models (13b model V1 trained)
- Train Llama 70b QLoRA
- Generate V2 of the dataset, with more categories and GPT-4 (DONE)
Included in this repo is the script to generate the dataset.
|
[
"# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k of human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.",
"### Differences for V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humalike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol",
"### Cost:\nReproducing this dataset would cost roughly $40.",
"### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.",
"### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks",
"### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset."
] |
[
"TAGS\n#license-mit #region-us \n",
"# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k of human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.",
"### Differences for V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humalike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol",
"### Cost:\nReproducing this dataset would cost roughly $40.",
"### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.",
"### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks",
"### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset."
] |
[
11,
69,
83,
16,
69,
31,
69
] |
[
"passage: TAGS\n#license-mit #region-us \n# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k of human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.### Differences for V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humalike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol### Cost:\nReproducing this dataset would cost roughly $40.### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset."
] |
8bdf68f6f6c06041cd00d3d7de6284df5332dd5e
|
# Dataset of komeiji_koishi/古明地こいし/코메이지코이시 (Touhou)
This is the dataset of komeiji_koishi/古明地こいし/코메이지코이시 (Touhou), containing 500 images and their tags.
The core tags of this character are `third_eye, hat, green_eyes, short_hair, green_hair, ribbon, black_headwear, bow, hat_ribbon, hat_bow, bangs, hair_between_eyes, yellow_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 807.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_koishi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 443.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_koishi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1258 | 949.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_koishi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 706.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_koishi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1258 | 1.34 GiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_koishi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/komeiji_koishi_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
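Beyond waifuc, an extracted IMG+TXT package (e.g. `dataset-800.zip`) can be bridged into the `datasets` library. A sketch (the directory name is hypothetical, and the images are assumed to sit at the top level with same-stem `.txt` tag files) that writes a `metadata.jsonl` so the folder loads as an `imagefolder`:
```python
import json
import os
from glob import glob
from datasets import load_dataset

dataset_dir = 'koishi_800'  # hypothetical dir holding an extracted IMG+TXT package

# write a metadata.jsonl mapping each image file to its tag string
with open(os.path.join(dataset_dir, 'metadata.jsonl'), 'w', encoding='utf-8') as meta:
    for image_path in sorted(glob(os.path.join(dataset_dir, '*.png'))):
        tag_path = os.path.splitext(image_path)[0] + '.txt'
        with open(tag_path, 'r', encoding='utf-8') as f:
            tags = f.read().strip()
        record = {'file_name': os.path.basename(image_path), 'text': tags}
        meta.write(json.dumps(record) + '\n')

# imagefolder picks up metadata.jsonl and exposes 'image' and 'text' columns
ds = load_dataset('imagefolder', data_dir=dataset_dir, split='train')
print(ds[0]['text'])
```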
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, :d, frilled_sleeves, green_skirt, long_sleeves, looking_at_viewer, open_mouth, solo, wide_sleeves, yellow_shirt, blouse, eyeball, frilled_shirt_collar, heart_of_string, white_background, simple_background, yellow_ribbon, blue_eyes, holding, upper_body |
| 1 | 9 |  |  |  |  |  | 1girl, eyeball, green_skirt, heart_of_string, long_sleeves, looking_at_viewer, solo, wide_sleeves, :d, blush, heart-shaped_pupils, open_mouth, yellow_shirt, floral_print, frilled_sleeves |
| 2 | 5 |  |  |  |  |  | 1girl, frilled_shirt_collar, frilled_sleeves, green_skirt, long_sleeves, looking_at_viewer, solo, wide_sleeves, yellow_shirt, blouse, eyeball, open_mouth, rose_print, yellow_ribbon, heart_of_string, :d, boots, brown_footwear, frilled_skirt, looking_back, medium_hair |
| 3 | 13 |  |  |  |  |  | 1girl, green_skirt, heart_of_string, long_sleeves, solo, wide_sleeves, floral_print, looking_at_viewer, smile, eyeball, simple_background, white_background, frills, yellow_shirt, blush, full_body |
| 4 | 16 |  |  |  |  |  | 1girl, solo, long_sleeves, skirt, smile, wide_sleeves, eyeball, shirt, heart_of_string, open_mouth, looking_at_viewer |
| 5 | 5 |  |  |  |  |  | 1girl, frilled_shirt_collar, long_sleeves, looking_at_viewer, solo, upper_body, yellow_shirt, closed_mouth, frilled_sleeves, simple_background, smile, wide_sleeves, blush, collared_shirt, red_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | frilled_sleeves | green_skirt | long_sleeves | looking_at_viewer | open_mouth | solo | wide_sleeves | yellow_shirt | blouse | eyeball | frilled_shirt_collar | heart_of_string | white_background | simple_background | yellow_ribbon | blue_eyes | holding | upper_body | blush | heart-shaped_pupils | floral_print | rose_print | boots | brown_footwear | frilled_skirt | looking_back | medium_hair | smile | frills | full_body | skirt | shirt | closed_mouth | collared_shirt | red_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:------------------|:--------------|:---------------|:--------------------|:-------------|:-------|:---------------|:---------------|:---------|:----------|:-----------------------|:------------------|:-------------------|:--------------------|:----------------|:------------|:----------|:-------------|:--------|:----------------------|:---------------|:-------------|:--------|:-----------------|:----------------|:---------------|:--------------|:--------|:---------|:------------|:--------|:--------|:---------------|:-----------------|:-----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | | X | | | | | | | X | X | X | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | | | X | X | X | | X | X | X | | X | | X | X | X | | | | | X | | X | | | | | | | X | X | X | | | | | |
| 4 | 16 |  |  |  |  |  | X | | | | X | X | X | X | X | | | X | | X | | | | | | | | | | | | | | | | X | | | X | X | | | |
| 5 | 5 |  |  |  |  |  | X | | X | | X | X | | X | X | X | | | X | | X | X | | | | X | X | | | | | | | | | X | | | | | X | X | X |
|
CyberHarem/komeiji_koishi_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T02:18:22+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:21:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of komeiji\_koishi/古明地こいし/코메이지코이시 (Touhou)
==================================================
This is the dataset of komeiji\_koishi/古明地こいし/코메이지코이시 (Touhou), containing 500 images and their tags.
The core tags of this character are 'third\_eye, hat, green\_eyes, short\_hair, green\_hair, ribbon, black\_headwear, bow, hat\_ribbon, hat\_bow, bangs, hair\_between\_eyes, yellow\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
06b57b1ea87591475e28201211cdffea2226dc62
|
# Dataset of lass/ミニスカート (Pokémon)
This is the dataset of lass/ミニスカート (Pokémon), containing 83 images and their tags.
The core tags of this character are `blonde_hair, long_hair, blue_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 83 | 71.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lass_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 83 | 42.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lass_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 194 | 91.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lass_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 83 | 64.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lass_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 194 | 122.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lass_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/lass_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, open_mouth, school_uniform, solo, white_shirt, collared_shirt, long_sleeves, open_jacket, looking_at_viewer, black_pantyhose, blazer, holding_poke_ball, pleated_skirt, red_jacket, black_skirt, poke_ball_(basic), standing, simple_background, teeth, :d, black_necktie, miniskirt, white_background, blush, shoes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | school_uniform | solo | white_shirt | collared_shirt | long_sleeves | open_jacket | looking_at_viewer | black_pantyhose | blazer | holding_poke_ball | pleated_skirt | red_jacket | black_skirt | poke_ball_(basic) | standing | simple_background | teeth | :d | black_necktie | miniskirt | white_background | blush | shoes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-----------------|:-------|:--------------|:-----------------|:---------------|:--------------|:--------------------|:------------------|:---------|:--------------------|:----------------|:-------------|:--------------|:--------------------|:-----------|:--------------------|:--------|:-----|:----------------|:------------|:-------------------|:--------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/lass_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T02:21:38+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T12:45:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of lass/ミニスカート (Pokémon)
================================
This is the dataset of lass/ミニスカート (Pokémon), containing 83 images and their tags.
The core tags of this character are 'blonde\_hair, long\_hair, blue\_eyes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4b983cfe8566fdc22a7e102d097c59b9d48af108
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-50000-100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rounak28/bengaliAI-preprocessed-whisper-medium-50000-100000
|
[
"region:us"
] |
2023-08-17T02:26:01+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 48065668591, "num_examples": 50000}], "download_size": 6859604636, "dataset_size": 48065668591}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T02:36:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-50000-100000"
More Information needed
|
[
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-50000-100000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-50000-100000\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-50000-100000\"\n\nMore Information needed"
] |
4a991ddc9482ad93bc676789b0da7e5941ee0b26
|
# Dataset of pachira/パキラ (Pokémon)
This is the dataset of pachira/パキラ (Pokémon), containing 107 images and their tags.
The core tags of this character are `pink_hair, long_hair, breasts, sunglasses, tinted_eyewear, red-tinted_eyewear, sidelocks, glasses`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 107 | 79.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pachira_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 107 | 55.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pachira_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 217 | 99.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pachira_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 107 | 74.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pachira_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 217 | 124.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pachira_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
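To preview what one of the packages above contains before extracting it, a small sketch using only `huggingface_hub` and the standard library:
```python
import zipfile
from huggingface_hub import hf_hub_download

# download the stage3 cropped package and peek at its file listing
zip_file = hf_hub_download(
    repo_id='CyberHarem/pachira_pokemon',
    repo_type='dataset',
    filename='dataset-stage3-p480-800.zip',
)
with zipfile.ZipFile(zip_file, 'r') as zf:
    names = zf.namelist()
    print(f'{len(names)} files in archive')
    print(names[:10])  # first few entries
```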
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/pachira_pokemon',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, black_shirt, crop_top, midriff, sleeveless_shirt, solo, bare_arms, navel, eyelashes, looking_at_viewer, smile, bangs, orange-tinted_eyewear, red_pants, simple_background, closed_mouth, white_background, hand_up, holding, orange_eyes, shiny, upper_body |
| 1 | 5 |  |  |  |  |  | 1girl, crop_top, midriff, sleeveless, smile, solo, navel, pants, lipstick, nail_polish, turtleneck |
| 2 | 10 |  |  |  |  |  | 1girl, hetero, 1boy, penis, blush, nipples, nude, solo_focus, uncensored, cum_in_pussy, eyelashes, large_breasts, navel, testicles, fellatio, pubic_hair, sex, spread_legs |
| 3 | 5 |  |  |  |  |  | 1girl, all_fours, bestiality, doggystyle, hetero, pokemon_(creature), pokephilia, sex_from_behind, cum, tongue, eyelashes, large_breasts, nipples, nude, red_eyes, sweat, bottomless, bouncing_breasts, clenched_teeth, half-closed_eyes, open_mouth, rolling_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_shirt | crop_top | midriff | sleeveless_shirt | solo | bare_arms | navel | eyelashes | looking_at_viewer | smile | bangs | orange-tinted_eyewear | red_pants | simple_background | closed_mouth | white_background | hand_up | holding | orange_eyes | shiny | upper_body | sleeveless | pants | lipstick | nail_polish | turtleneck | hetero | 1boy | penis | blush | nipples | nude | solo_focus | uncensored | cum_in_pussy | large_breasts | testicles | fellatio | pubic_hair | sex | spread_legs | all_fours | bestiality | doggystyle | pokemon_(creature) | pokephilia | sex_from_behind | cum | tongue | red_eyes | sweat | bottomless | bouncing_breasts | clenched_teeth | half-closed_eyes | open_mouth | rolling_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-----------|:----------|:-------------------|:-------|:------------|:--------|:------------|:--------------------|:--------|:--------|:------------------------|:------------|:--------------------|:---------------|:-------------------|:----------|:----------|:--------------|:--------|:-------------|:-------------|:--------|:-----------|:--------------|:-------------|:---------|:-------|:--------|:--------|:----------|:-------|:-------------|:-------------|:---------------|:----------------|:------------|:-----------|:-------------|:------|:--------------|:------------|:-------------|:-------------|:---------------------|:-------------|:------------------|:------|:---------|:-----------|:--------|:-------------|:-------------------|:-----------------|:-------------------|:-------------|:---------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | | X | | X | | | X | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | X | X | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
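If you only want the images matching one of the clusters above, you can filter the items streamed by `LocalSource` on their tags. Below is a minimal sketch, assuming `item.meta['tags']` is keyed by tag name (as in the loading snippet above); the tag set used here is just the head of cluster 0 and can be swapped for any other cluster's tags.

```python
from waifuc.source import LocalSource

# a few key tags of cluster 0 above; any subset of a cluster's tags works
wanted = {'black_shirt', 'crop_top', 'midriff', 'sleeveless_shirt'}

source = LocalSource('dataset_dir')
for item in source:
    tags = set(item.meta.get('tags', {}))  # assumes tags keyed by tag name
    if wanted <= tags:
        print(item.meta['filename'])  # this image belongs to cluster 0
```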
|
CyberHarem/pachira_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T02:53:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:29:02+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of pachira/パキラ (Pokémon)
================================
This is the dataset of pachira/パキラ (Pokémon), containing 107 images and their tags.
The core tags of this character are 'pink\_hair, long\_hair, breasts, sunglasses, tinted\_eyewear, red-tinted\_eyewear, sidelocks, glasses', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0a8d294cc92afdbb29f25b668d2112294e0a43d5
|
# Dataset of patchouli_knowledge/パチュリー・ノーレッジ/파츄리널릿지 (Touhou)
This is the dataset of patchouli_knowledge/パチュリー・ノーレッジ/파츄리널릿지 (Touhou), containing 500 images and their tags.
The core tags of this character are `long_hair, purple_hair, purple_eyes, hat, bow, ribbon, hat_ornament, crescent_hat_ornament, hair_bow, mob_cap, bangs, very_long_hair, blue_bow, red_bow, red_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 749.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/patchouli_knowledge_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 430.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/patchouli_knowledge_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1172 | 870.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/patchouli_knowledge_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 664.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/patchouli_knowledge_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1172 | 1.20 GiB | [Download](https://huggingface.co/datasets/CyberHarem/patchouli_knowledge_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/patchouli_knowledge_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, crescent, looking_at_viewer, solo, striped_dress, long_sleeves, capelet, simple_background, white_background, book, tress_ribbon, open_mouth |
| 1 | 5 |  |  |  |  |  | 1girl, crescent, dress, looking_at_viewer, simple_background, solo, striped, blush, capelet, closed_mouth, hat_ribbon, long_sleeves, white_background, upper_body, blue_ribbon |
| 2 | 16 |  |  |  |  |  | 1girl, crescent, solo, book, dress |
| 3 | 6 |  |  |  |  |  | 1girl, capelet, closed_mouth, crescent, holding_book, long_sleeves, looking_at_viewer, pink_dress, solo, striped_dress, blue_ribbon, blush, hat_ribbon, pink_headwear, vertical_stripes, blunt_bangs, open_book, upper_body, pink_bow, purple_dress, purple_headwear, simple_background, wide_sleeves |
| 4 | 5 |  |  |  |  |  | 1girl, capelet, closed_mouth, crescent, hat_ribbon, long_sleeves, pink_headwear, solo, vertical-striped_dress, white_background, blue_ribbon, looking_at_viewer, pink_dress, simple_background, bowtie, full_body, holding_book, open_book, purple_dress, wide_sleeves, blunt_bangs, footwear_bow, yellow_bow |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crescent | looking_at_viewer | solo | striped_dress | long_sleeves | capelet | simple_background | white_background | book | tress_ribbon | open_mouth | dress | striped | blush | closed_mouth | hat_ribbon | upper_body | blue_ribbon | holding_book | pink_dress | pink_headwear | vertical_stripes | blunt_bangs | open_book | pink_bow | purple_dress | purple_headwear | wide_sleeves | vertical-striped_dress | bowtie | full_body | footwear_bow | yellow_bow |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:-------|:----------------|:---------------|:----------|:--------------------|:-------------------|:-------|:---------------|:-------------|:--------|:----------|:--------|:---------------|:-------------|:-------------|:--------------|:---------------|:-------------|:----------------|:-------------------|:--------------|:------------|:-----------|:---------------|:------------------|:---------------|:-------------------------|:---------|:------------|:---------------|:-------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | | | | | | X | X | | X | X | X | X | | X | X | | X | | X | X | X | X | X | X |
|
CyberHarem/patchouli_knowledge_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T02:57:14+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:22:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of patchouli\_knowledge/パチュリー・ノーレッジ/파츄리널릿지 (Touhou)
===========================================================
This is the dataset of patchouli\_knowledge/パチュリー・ノーレッジ/파츄리널릿지 (Touhou), containing 500 images and their tags.
The core tags of this character are 'long\_hair, purple\_hair, purple\_eyes, hat, bow, ribbon, hat\_ornament, crescent\_hat\_ornament, hair\_bow, mob\_cap, bangs, very\_long\_hair, blue\_bow, red\_bow, red\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
e3b2977b43a7987f995a9fd260befccc33205fcd
|
# Dataset Card for "issste-tori"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ittailup/issste-tori
|
[
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:1M<n<10M",
"license:apache-2.0",
"region:us"
] |
2023-08-17T03:18:50+00:00
|
{"license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "B-forenames", "1": "I-forenames", "2": "B-surnames", "3": "I-surnames"}}}}], "splits": [{"name": "train", "num_bytes": 396964996.1323354, "num_examples": 5311611}, {"name": "test", "num_bytes": 20892933.86766455, "num_examples": 279559}], "download_size": 94253178, "dataset_size": 417857930}}
|
2023-08-17T03:54:33+00:00
|
[] |
[] |
TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #size_categories-1M<n<10M #license-apache-2.0 #region-us
|
# Dataset Card for "issste-tori"
More Information needed
|
[
"# Dataset Card for \"issste-tori\"\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #size_categories-1M<n<10M #license-apache-2.0 #region-us \n",
"# Dataset Card for \"issste-tori\"\n\nMore Information needed"
] |
[
53,
15
] |
[
"passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #size_categories-1M<n<10M #license-apache-2.0 #region-us \n# Dataset Card for \"issste-tori\"\n\nMore Information needed"
] |
ec2249646a029787f0998d09a1bd67672c264674
|
# Dataset of furisode_girl (Pokémon)
This is the dataset of furisode_girl (Pokémon), containing 29 images and their tags.
The core tags of this character are `long_hair, brown_hair, breasts, blue_eyes, earrings, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 29 | 21.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furisode_girl_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 29 | 17.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furisode_girl_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 55 | 28.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furisode_girl_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 29 | 20.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furisode_girl_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 55 | 33.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furisode_girl_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/furisode_girl_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
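The pre-scaled packages can also be used without waifuc. Below is a minimal sketch for the 800px `IMG+TXT` package, under the assumption that the archive pairs every image with a same-named `.txt` file holding its comma-separated tags (the exact archive layout is not documented on this card):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/furisode_girl_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
img_dir = 'dataset_800'
os.makedirs(img_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(img_dir)

# pair each image with its same-named .txt tag file (assumed layout)
for name in sorted(os.listdir(img_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in {'.png', '.jpg', '.jpeg', '.webp'}:
        txt_path = os.path.join(img_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```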
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, smile, collarbone, eyelashes, jewelry, kimono, nail_polish, poke_ball_(basic), wide_sleeves, holding_poke_ball, multiple_girls, orange_hair, sash, socks, blue_nails, closed_mouth, green_eyes, standing |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, smile, brown_eyes, dark-skinned_female, hair_flower, nipples, barefoot, medium_breasts, nude, bare_shoulders, blush, navel, sitting, solo_focus, toes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | looking_at_viewer | smile | collarbone | eyelashes | jewelry | kimono | nail_polish | poke_ball_(basic) | wide_sleeves | holding_poke_ball | multiple_girls | orange_hair | sash | socks | blue_nails | closed_mouth | green_eyes | standing | brown_eyes | dark-skinned_female | hair_flower | nipples | barefoot | medium_breasts | nude | bare_shoulders | blush | navel | sitting | solo_focus | toes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:--------|:-------------|:------------|:----------|:---------|:--------------|:--------------------|:---------------|:--------------------|:-----------------|:--------------|:-------|:--------|:-------------|:---------------|:-------------|:-----------|:-------------|:----------------------|:--------------|:----------|:-----------|:-----------------|:-------|:-----------------|:--------|:--------|:----------|:-------------|:-------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/furisode_girl_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T03:23:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T20:37:02+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of furisode\_girl (Pokémon)
===================================
This is the dataset of furisode\_girl (Pokémon), containing 29 images and their tags.
The core tags of this character are 'long\_hair, brown\_hair, breasts, blue\_eyes, earrings, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
25f2c32a9bd3c19727581568480532e45b3c5d01
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-100000-150000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rounak28/bengaliAI-preprocessed-whisper-medium-100000-150000
|
[
"region:us"
] |
2023-08-17T03:25:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 48065723126, "num_examples": 50000}], "download_size": 6849736919, "dataset_size": 48065723126}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T03:32:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-100000-150000"
More Information needed
|
[
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-100000-150000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-100000-150000\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-100000-150000\"\n\nMore Information needed"
] |
d948db52932343c25d8abef4643f6a50fca2d3bf
|
# Dataset Card for "cours_medecine"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/cours_medecine
|
[
"region:us"
] |
2023-08-17T03:48:46+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4918302, "num_examples": 313}], "download_size": 2424246, "dataset_size": 4918302}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T03:48:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cours_medecine"
More Information needed
|
[
"# Dataset Card for \"cours_medecine\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cours_medecine\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cours_medecine\"\n\nMore Information needed"
] |
a637bc6684aeb7a7d86c8075f1e7d4e61a50f788
|
# Dataset of alice_margatroid/アリス・マーガトロイド/앨리스마가트로이드 (Touhou)
This is the dataset of alice_margatroid/アリス・マーガトロイド/앨리스마가트로이드 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, short_hair, hairband, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 612.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/alice_margatroid_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 393.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/alice_margatroid_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1102 | 756.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/alice_margatroid_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 558.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/alice_margatroid_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1102 | 995.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/alice_margatroid_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/alice_margatroid_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
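To get a quick overview of which tags dominate the set (and hence which outfits the clusters below are likely to capture), you can tally tag frequencies over the source. A minimal sketch, assuming `item.meta['tags']` iterates over tag names:

```python
from collections import Counter

from waifuc.source import LocalSource

counter = Counter()
for item in LocalSource('dataset_dir'):
    counter.update(item.meta.get('tags', {}))  # assumes tags keyed by name

# the most frequent tags roughly mirror the cluster tables below
for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```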
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, solo, looking_at_viewer, blue_dress, red_hairband, white_capelet, bangs, closed_mouth, hair_between_eyes, simple_background, white_background, frills, blush, lolita_hairband, smile, upper_body, breasts, red_necktie, puffy_short_sleeves |
| 1 | 9 |  |  |  |  |  | 1girl, capelet, sash, solo, simple_background, smile, looking_at_viewer, white_background, blue_dress, open_mouth |
| 2 | 7 |  |  |  |  |  | 1girl, capelet, dress, open_mouth, smile, solo, sash |
| 3 | 13 |  |  |  |  |  | 1girl, capelet, dress, solo, book, sash, ribbon, petals |
| 4 | 5 |  |  |  |  |  | 1girl, blue_dress, capelet, looking_at_viewer, puppet_strings, sash, solo, ribbon, bow, jewelry, lolita_hairband, red_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | blue_dress | red_hairband | white_capelet | bangs | closed_mouth | hair_between_eyes | simple_background | white_background | frills | blush | lolita_hairband | smile | upper_body | breasts | red_necktie | puffy_short_sleeves | capelet | sash | open_mouth | dress | book | ribbon | petals | puppet_strings | bow | jewelry | red_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:-------------|:---------------|:----------------|:--------|:---------------|:--------------------|:--------------------|:-------------------|:---------|:--------|:------------------|:--------|:-------------|:----------|:--------------|:----------------------|:----------|:-------|:-------------|:--------|:-------|:---------|:---------|:-----------------|:------|:----------|:-----------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | | | | | | X | X | | | | X | | | | | X | X | X | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | | | | | | | | | | | | X | | | | | X | X | X | X | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | X | X | | X | X | X | X | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | | | | | | | | | X | | | | | | X | X | | | | X | | X | X | X | X |
|
CyberHarem/alice_margatroid_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T03:50:35+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:19:25+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of alice\_margatroid/アリス・マーガトロイド/앨리스마가트로이드 (Touhou)
===========================================================
This is the dataset of alice\_margatroid/アリス・マーガトロイド/앨리스마가트로이드 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, short\_hair, hairband, blue\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ec0177cb8aab64a10f8a17f5e0be87dd590f85b0
|
# Dataset Card for "ToxicContent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vietgpt-archive/ToxicContent
|
[
"region:us"
] |
2023-08-17T04:02:52+00:00
|
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13575089.0, "num_examples": 48009}], "download_size": 7797242, "dataset_size": 13575089.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T04:08:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ToxicContent"
More Information needed
|
[
"# Dataset Card for \"ToxicContent\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ToxicContent\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ToxicContent\"\n\nMore Information needed"
] |
faee3720130b209d26d97ce1ee23c0ca129008f3
|
# Dataset of yakumo_yukari/八雲紫/야쿠모유카리 (Touhou)
This is the dataset of yakumo_yukari/八雲紫/야쿠모유카리 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hat, ribbon, long_hair, hat_ribbon, bow, mob_cap, hair_bow, purple_eyes, breasts, red_ribbon, very_long_hair, white_headwear, bangs, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 751.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yakumo_yukari_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 440.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yakumo_yukari_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1126 | 851.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yakumo_yukari_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 673.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yakumo_yukari_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1126 | 1.15 GiB | [Download](https://huggingface.co/datasets/CyberHarem/yakumo_yukari_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yakumo_yukari_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
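Beyond iterating over items, a loaded source can be re-exported to disk. A short sketch, assuming waifuc's `SaveExporter` (which re-saves every item together with its meta information into a new directory):

```python
from waifuc.export import SaveExporter
from waifuc.source import LocalSource

source = LocalSource('dataset_dir')
# re-save all items (images plus meta information) into a new directory
source.export(SaveExporter('exported_dataset'))
```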
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, dress, gap_(touhou), solo, tabard, smile, yellow_eyes, umbrella |
| 1 | 31 |  |  |  |  |  | 1girl, long_sleeves, solo, tabard, white_dress, looking_at_viewer, wide_sleeves, smile, gap_(touhou), umbrella, puffy_sleeves |
| 2 | 5 |  |  |  |  |  | 1girl, closed_mouth, folding_fan, holding_fan, long_sleeves, looking_at_viewer, red_bow, smile, solo, tabard, white_dress, upper_body, wide_sleeves, blush, sidelocks, simple_background |
| 3 | 15 |  |  |  |  |  | 1girl, solo, dress, smile, gap_(touhou), red_eyes |
| 4 | 5 |  |  |  |  |  | 1girl, dress, elbow_gloves, gap_(touhou), solo, white_gloves, smile, umbrella, folding_fan |
| 5 | 12 |  |  |  |  |  | 1girl, solo, white_gloves, dress, elbow_gloves, parasol, smile, gap_(touhou), butterfly |
| 6 | 12 |  |  |  |  |  | 1girl, looking_at_viewer, purple_dress, solo, white_gloves, elbow_gloves, smile, puffy_short_sleeves, blush, gap_(touhou), simple_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | gap_(touhou) | solo | tabard | smile | yellow_eyes | umbrella | long_sleeves | white_dress | looking_at_viewer | wide_sleeves | puffy_sleeves | closed_mouth | folding_fan | holding_fan | red_bow | upper_body | blush | sidelocks | simple_background | red_eyes | elbow_gloves | white_gloves | parasol | butterfly | purple_dress | puffy_short_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------------|:-------|:---------|:--------|:--------------|:-----------|:---------------|:--------------|:--------------------|:---------------|:----------------|:---------------|:--------------|:--------------|:----------|:-------------|:--------|:------------|:--------------------|:-----------|:---------------|:---------------|:----------|:------------|:---------------|:----------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 31 |  |  |  |  |  | X | | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | X | X | X | | | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | | | X | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | | X | | | | | | | X | | | | | | | | X | X | | | | |
| 5 | 12 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | | | | X | X | X | X | | |
| 6 | 12 |  |  |  |  |  | X | | X | X | | X | | | | | X | | | | | | | | X | | X | | X | X | | | X | X |
|
CyberHarem/yakumo_yukari_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T04:27:30+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:01:54+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yakumo\_yukari/八雲紫/야쿠모유카리 (Touhou)
=============================================
This is the dataset of yakumo\_yukari/八雲紫/야쿠모유카리 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, hat, ribbon, long\_hair, hat\_ribbon, bow, mob\_cap, hair\_bow, purple\_eyes, breasts, red\_ribbon, very\_long\_hair, white\_headwear, bangs, hair\_between\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0abaadb64ecb74ac66af7bf254a8dd3dcc12b1b0
|
# Dataset of joy (Pokémon)
This is the dataset of joy (Pokémon), containing 230 images and their tags.
The core tags of this character are `pink_hair, hat, nurse_cap, blue_eyes, breasts, hair_rings, long_hair, white_headwear, eyelashes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 230 | 186.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joy_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 230 | 121.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joy_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 458 | 223.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joy_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 230 | 171.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joy_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 458 | 296.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joy_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/joy_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, nurse, smile, solo, looking_at_viewer, short_sleeves, apron, pink_dress, blush, open_mouth, bangs, full_body, standing, closed_mouth, own_hands_together, shoes, white_background |
| 1 | 6 |  |  |  |  |  | 1girl, nurse, pink_dress, bangs, collared_dress, open_mouth, solo, white_apron, simple_background, white_background, :d, puffy_short_sleeves, upper_body |
| 2 | 15 |  |  |  |  |  | 1girl, blush, navel, nipples, collarbone, nude, solo, large_breasts, looking_at_viewer, open_mouth, simple_background, nurse, pussy, white_background, :d, shiny, tongue |
| 3 | 7 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, pussy, vaginal, large_breasts, nurse, open_mouth, sex, uncensored, spread_legs, blush, nude, solo_focus, thighhighs, clitoris, navel, veiny_penis |
| 4 | 6 |  |  |  |  |  | 1girl, barefoot, shiny_hair, shiny_skin, toes, blue_bikini, collarbone, dark-skinned_female, bangs, cleavage, closed_mouth, looking_at_viewer, navel, smile, solo, bare_arms, full_body, knees, sitting, tan, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | nurse | smile | solo | looking_at_viewer | short_sleeves | apron | pink_dress | blush | open_mouth | bangs | full_body | standing | closed_mouth | own_hands_together | shoes | white_background | collared_dress | white_apron | simple_background | :d | puffy_short_sleeves | upper_body | navel | nipples | collarbone | nude | large_breasts | pussy | shiny | tongue | 1boy | hetero | vaginal | sex | uncensored | spread_legs | solo_focus | thighhighs | clitoris | veiny_penis | barefoot | shiny_hair | shiny_skin | toes | blue_bikini | dark-skinned_female | cleavage | bare_arms | knees | sitting | tan |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------|:--------------------|:----------------|:--------|:-------------|:--------|:-------------|:--------|:------------|:-----------|:---------------|:---------------------|:--------|:-------------------|:-----------------|:--------------|:--------------------|:-----|:----------------------|:-------------|:--------|:----------|:-------------|:-------|:----------------|:--------|:--------|:---------|:-------|:---------|:----------|:------|:-------------|:--------------|:-------------|:-------------|:-----------|:--------------|:-----------|:-------------|:-------------|:-------|:--------------|:----------------------|:-----------|:------------|:--------|:----------|:------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | | X | | | | X | | X | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | | X | X | | | | X | X | | | | | | | X | | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | | | | | | X | X | | | | | | | | | | | | | | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | X | X | | | | | | X | X | | X | | | X | | | | | | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/joy_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T04:40:16+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:31:29+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of joy (Pokémon)
========================
This is the dataset of joy (Pokémon), containing 230 images and their tags.
The core tags of this character are 'pink\_hair, hat, nurse\_cap, blue\_eyes, breasts, hair\_rings, long\_hair, white\_headwear, eyelashes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d09f05a3f981c402cab99df1efedf97a106e2d23
|
# Dataset Card for "cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LahiruLowe/cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2
|
[
"region:us"
] |
2023-08-17T04:45:37+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}, {"name": "explained_targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42818, "num_examples": 35}], "download_size": 38965, "dataset_size": 42818}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T06:33:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2"
More Information needed
|
[
"# Dataset Card for \"cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
[
6,
43
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cot_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
46ff160eed6747516f7952dddbc0dfd766f05e60
|
arxiv.org/abs/2308.07891
|
ISEKAI-Portal/ISEKAI-10
|
[
"license:cc-by-nc-2.0",
"arxiv:2308.07891",
"region:us"
] |
2023-08-17T04:59:23+00:00
|
{"license": "cc-by-nc-2.0"}
|
2023-08-17T14:04:44+00:00
|
[
"2308.07891"
] |
[] |
TAGS
#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us
|
URL
|
[] |
[
"TAGS\n#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us \n"
] |
[
26
] |
[
"passage: TAGS\n#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us \n"
] |
28b7916f28ada2237d75b94b7b09f25e9a153031
|
arxiv.org/abs/2308.07891
|
ISEKAI-Portal/ISEKAI-pair
|
[
"license:cc-by-nc-2.0",
"arxiv:2308.07891",
"region:us"
] |
2023-08-17T05:00:07+00:00
|
{"license": "cc-by-nc-2.0"}
|
2023-08-17T14:03:24+00:00
|
[
"2308.07891"
] |
[] |
TAGS
#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us
|
URL
|
[] |
[
"TAGS\n#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us \n"
] |
[
26
] |
[
"passage: TAGS\n#license-cc-by-nc-2.0 #arxiv-2308.07891 #region-us \n"
] |
7c0519a8118aeb8c8e634e5c554f8dcc49ea024f
|
# Dataset Card for "stocks_one_nvda_v3_weekly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
achang/stocks_one_nvda_v3_weekly
|
[
"region:us"
] |
2023-08-17T05:01:12+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2471045, "num_examples": 1539}], "download_size": 148768, "dataset_size": 2471045}}
|
2023-08-17T05:01:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "stocks_one_nvda_v3_weekly"
More Information needed
|
[
"# Dataset Card for \"stocks_one_nvda_v3_weekly\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"stocks_one_nvda_v3_weekly\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"stocks_one_nvda_v3_weekly\"\n\nMore Information needed"
] |
611f74ccfece5a37b36d32d8bedbe8ac5d3e56c1
|
# Dataset Card for DUVEL
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/cnachteg/DUVEL/
- **Repository:** https://github.com/cnachteg/DUVEL
- **Paper:** TBA
- **Point of Contact:** TBA
### Dataset Summary
This dataset was created to identify oligogenic variant combinations, i.e. relations between several genes and their mutations that cause genetic diseases, in scientific articles written in English. At the moment, it contains only digenic variant combinations, i.e. relations between two genes and at least two variants. The dataset is intended for binary relation extraction where the entities are masked within the text.
### Supported Task
The dataset can be used to train a model for ``text-classification`` (the relation extraction task is treated here as a classification task). Success on this task is typically measured by achieving a high F1-score.
The BioLinkBERT model (https://huggingface.co/michiyasunaga/BioLinkBERT-large) currently achieves an F1-score of 0.8207, with a precision of 0.7941 and a recall of 0.8491.
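For reference, the reported numbers are internally consistent, since the F1-score is the harmonic mean of precision and recall:

```python
# F1 is the harmonic mean of precision and recall
p, r = 0.7941, 0.8491
f1 = 2 * p * r / (p + r)
print(round(f1, 4))  # 0.8207, matching the reported F1-score
```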
### Languages
The dataset consists of text extracted from scientific articles written in English (en).
## Dataset Structure
### Data Instances
Each instance describes the two genes and two variants composing the potential digenic variant combination, as well as the fragment of text with the masked entities, the PubMed Central identifier of the article, and the label of the instance (i.e., whether the fragment of text contains a valid digenic variant combination or not, respectively 1 and 0).
```json
{
  "sentence": "Two unrelated KS patients had heterozygous NELF mutations and mutation in a second gene: NELF/@GENE$ (@VARIANT$; p.Ala253Thr of @GENE$ and c.488_490delGTT; p.Cys163del of KAL1) and NELF/TACR3 (c. 1160-13C>T of NELF and c.824G>A; @VARIANT$ of TACR3).",
  "pmcid": 3888818,
  "gene1": "KAL1;55445",
  "gene2": "NELF;10648",
  "variant1": "c.757G>A;tmVar:c|SUB|G|757|A;HGVS:c.757G>A;VariantGroup:3;CorrespondingGene:26012;RS#:142726563;CA#:5370407",
  "variant2": "p.Trp275X;tmVar:p|SUB|W|275|X;HGVS:p.W275X;VariantGroup:1;CorrespondingGene:6870;RS#:144292455;CA#:144871",
  "label": 0
}
```
### Data Fields
- `sentence`: *string*, text containing the entities masked with either @GENE$ for the gene type or @VARIANT$ for the mutation type. The text can be a single sentence or cross-sentence, but no longer than 256 tokens according to the BiomedBERT tokenizer (see [BiomedBERT](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)).
- `pmcid`: *int*, PubMed Central identifier of the article from which the text was extracted (https://www.ncbi.nlm.nih.gov/pmc/).
- `gene1`: *string*, first gene mention as it appears in the text, followed by its internal identifier.
- `gene2`: *string*, second gene mention as it appears in the text, followed by its internal identifier.
- `variant1`: *string*, first variant mention as it appears in the text, with its normalized form, HGVS form (https://varnomen.hgvs.org/), the gene where it occurs, and, when available, a variant identifier (see the parsing sketch after this list).
- `variant2`: *string*, second variant mention as it appears in the text, with its normalized form, HGVS form (https://varnomen.hgvs.org/), the gene where it occurs, and, when available, a variant identifier.
- `label`: *int*, class of the instance, 0 if there is no relation between the entities, 1 if there is.
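As an illustration of these formats, here is a small sketch (the helper names are ours, not part of the dataset) that splits the semicolon-separated gene and variant fields into their components:
```python
def parse_gene(field: str) -> dict:
    """Split a gene field such as 'KAL1;55445' into mention and identifier."""
    mention, gene_id = field.split(";", 1)
    return {"mention": mention, "id": gene_id}

def parse_variant(field: str) -> dict:
    """Split a variant field into its mention plus key:value annotations,
    e.g. 'c.757G>A;tmVar:...;HGVS:c.757G>A;VariantGroup:3;...'."""
    mention, *annotations = field.split(";")
    meta = {}
    for annotation in annotations:
        key, _, value = annotation.partition(":")
        meta[key] = value
    return {"mention": mention, **meta}

print(parse_gene("KAL1;55445"))
print(parse_variant("c.757G>A;tmVar:c|SUB|G|757|A;HGVS:c.757G>A;VariantGroup:3"))
```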
### Data Splits
The dataset is split into train, dev and test sets. Splitting was done with a stratified split based on the labels in order to maintain a similar distribution (around 9.4% positive class) across splits; a quick check is sketched after the table below.
| | train | test | dev |
|--------------------------------------------|------:|-----:|-----:|
| Total number of instances | 6553 | 1689 | 200 |
| Number of positive instances | 616 | 159 | 19 |
| Total number of articles | 79 | 75 | 51 |
| Number of articles with positive instances | 61 | 51 | 12 |
| Number of articles with negative instances | 78 | 73 | 50 |
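The stratification can be verified directly on the loaded splits; a minimal sketch, assuming the `default` configuration with `train`/`validation`/`test` splits:
```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("cnachteg/duvel")
for split in ("train", "validation", "test"):
    counts = Counter(dataset[split]["label"])
    total = sum(counts.values())
    print(f"{split}: {100 * counts[1] / total:.1f}% positive")  # ~9.4% each
```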
## Dataset Creation
### Curation Rationale
The curation of oligogenic variant combinations requires high expertise and time, while the number of genetic studies has increased over the years, especially with the advent of next-generation sequencing technologies. This dataset aims to support such curation by extracting potential candidates directly from the text.
### Source Data
#### Initial Data Collection and Normalization
Scientific articles containing oligogenic variant combinations potentially causing genetic diseases were retrieved from [OLIDA](https://olida.ibsquare.be), the OLIgogenic diseases DAtabase. Articles were filtered to keep only those containing at least one digenic variant combination, i.e. a combination of two genes with at least one variant in each gene. The articles were then pre-annotated with the help of the PubTator API (https://www.ncbi.nlm.nih.gov/research/pubtator/api.html) to obtain the full text of the articles with the genes and variants identified.
Fragments of text to annotate were created by extracting all the texts (both single and cross-sentence) containing two different gene mentions and two different variant mentions, with a maximum length of 256 tokens as tokenized by the BiomedBERT tokenizer (see [BiomedBERT](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)). Texts containing tables or incomplete sentences were excluded during the annotation process.
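A hedged sketch of this length filter (whether special tokens were counted toward the 256-token limit is an assumption on our part):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext"
)

def within_limit(text: str, max_tokens: int = 256) -> bool:
    # Count tokens as the BiomedBERT tokenizer produces them
    return len(tokenizer(text)["input_ids"]) <= max_tokens
```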
#### Who are the source language producers?
The dataset is machine-generated: the full annotated text of each article is retrieved from the PubTator API, and the relevant texts containing two genes and two variants are then generated through Python scripts.
### Annotations
The annotation was done with the ALAMBIC platform, with an Active Learning (AL) setting (see [Nachtegael 2023](https://aclanthology.org/2023.eacl-demo.14)).
#### Annotation process
1500 samples were randomly selected to be labelled, with 1000 samples for the test set and 500 as seed for the AL process. 9 iterations of AL selection of 500 samples with the Margin Sampling strategy were conducted, with BiomedBERT as the model used for the selection (see [BiomedBERT](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)) and the selected samples subsequently annotated. The annotation limit was initially set at 6000 samples, but was exceeded after several restarts of the process caused by the exclusion of invalid instances.
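Margin Sampling selects the unlabelled candidates whose two most probable classes are closest in probability; a minimal, model-agnostic sketch (our own illustration, not the ALAMBIC implementation):
```python
import numpy as np

def margin_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k samples with the smallest gap between
    the two most probable classes, i.e. the most ambiguous ones."""
    ordered = np.sort(probs, axis=1)           # ascending per row
    margins = ordered[:, -1] - ordered[:, -2]  # top-1 minus top-2 probability
    return np.argsort(margins)[:k]

# Binary class probabilities for five unlabelled candidates
probs = np.array([[0.90, 0.10], [0.55, 0.45], [0.70, 0.30],
                  [0.51, 0.49], [0.80, 0.20]])
print(margin_sampling(probs, k=2))  # -> [3 1], the two most uncertain
```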
The annotator had access to the genes and variants, the PMCID of the article the text was extracted from, and the text with the masked entities. One of three possible classes is given to each fragment of text:
- *0* for the absence of a digenic variant combination relation in the text.
- *1* for the presence of a digenic variant combination relation. The genes and the variants need to relate to each other for there to be a valid relation. If the entities are involved in an alleged digenic relation according to OLIDA, but the syntactic aspects of the text show no clear relation between the entities, then the text contains no relation. The combination needs to be carried by at least one individual.
- *-1* if the fragment of text is not valid. The text can be deemed invalid if one of the entities is not a valid entity, i.e. not a valid gene name or mutation, or if the text contains an unfinished or invalid sentence, e.g. with part of the text being a table. Invalid gene names and mutations comprised: (a) errors in the annotation, e.g. P05, a patient denomination, which was annotated as a gene name, or the cell line HEK293, which was annotated as a variant; (b) genes in non-human species; (c) isoform denominations of proteins; and (d) gene products. Tables were excluded as they are not considered comprehensible text without the notion of their structure. To be used, they would need to be parsed in order to convey this structure, which is not rendered in free text.
Only instances from the positive and the negative classes (labels of *0* and *1*) are included in the final dataset; all the invalid instances are excluded from further use as they do not meet our quality standards.
It must be noted that while the articles were filtered for those containing digenic variant combinations, it is also possible to find oligogenic variant combinations involving more than two genes and/or two variants. In that case, a subset of those variant combinations, i.e. two gene-variant pairs which are connected in the text and are part of the variant combination, was considered a valid digenic variant combination and classified as class *1*.
#### Who are the annotators?
Annotation was done by Charlotte Nachtegael, one of the authors and a curator of OLIDA, with a substantial background in genetics and molecular biology.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset should help with the curation of complex genetic diseases, contributing to research on such medical problems. At the moment, it should be used exclusively to support the curation of oligogenic/digenic variant combinations, not as the curation itself.
### Discussion of Biases
Some diseases are more studied/known as oligogenic, so the variants and genes could be biased towards the better-known gene panels. Moreover, some articles are more represented in the dataset than others because they had more genes and/or variants in their text.
The named entity recognition step was also done automatically, so it is possible that some entities were not recognized and were thus ignored when creating the candidates. When errors were encountered during the annotation process, the corresponding candidates were excluded from the dataset.
### Other Known Limitations
None.
## Additional Information
### Dataset Curators
This work was supported by the Service Public de Wallonie Recherche by DIGITALWALLONIA4.AI [2010235—ARIAC]
- Charlotte Nachtegael, Université Libre de Bruxelles, Belgium
### Licensing Information
This dataset is under the Creative Commons Attribution Non Commercial Share Alike 4.0 license.
### Citation Information
TBA
```bibtex
@article{DUVEL_2023,
author = {},
title = {},
journal = {},
year = {2023}
}
```
### Contributions
Thanks to Barbara Gravel and Sofia Papadimitriou for their initial work with OLIDA.
Thanks to Jacopo De Stefani, Anthony Cnudde and Tom Lenaerts for their help with the experimental design and writing of the paper for DUVEL.
|
cnachteg/duvel
|
[
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"biology",
"medical",
"genetics",
"doi:10.57967/hf/1571",
"region:us"
] |
2023-08-17T05:01:26+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Detection of Unlimited Variant Ensemble in Literature (DUVEL)", "dataset_info": [{"config_name": "DUVEL", "features": [{"name": "sentence", "dtype": "string"}, {"name": "pmcid", "dtype": "int64"}, {"name": "gene1", "dtype": "string"}, {"name": "gene2", "dtype": "string"}, {"name": "variant1", "dtype": "string"}, {"name": "variant2", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5160622, "num_examples": 6553}, {"name": "validation", "num_bytes": 156567, "num_examples": 200}, {"name": "test", "num_bytes": 1317156, "num_examples": 1689}], "download_size": 6473496, "dataset_size": 6634345}, {"config_name": "default", "features": [{"name": "sentence", "dtype": "string"}, {"name": "pmcid", "dtype": "int32"}, {"name": "gene1", "dtype": "string"}, {"name": "gene2", "dtype": "string"}, {"name": "variant1", "dtype": "string"}, {"name": "variant2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 0, "1": 1}}}}], "splits": [{"name": "train", "num_bytes": 5134410, "num_examples": 6553}, {"name": "test", "num_bytes": 1310400, "num_examples": 1689}, {"name": "validation", "num_bytes": 155767, "num_examples": 200}], "download_size": 6473496, "dataset_size": 6600577}], "tags": ["biology", "medical", "genetics"]}
|
2023-12-20T13:46:21+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #biology #medical #genetics #doi-10.57967/hf/1571 #region-us
|
Dataset Card for DUVEL
======================
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: TBA
* Point of Contact: TBA
### Dataset Summary
This dataset was created to identify oligogenic variant combinations, i.e. relations between several genes and their mutations that cause genetic diseases, in scientific articles written in English. At the moment, it contains only digenic variant combinations, i.e. relations between two genes and at least two variants. The dataset is intended for binary relation extraction where the entities are masked within the text.
### Supported Task
The dataset can be used to train a model for ''text-classification'' (the relation extraction task is framed here as a classification task). Success on this task is typically measured by achieving a high F1-score.
The BioLinkBERT model (URL) currently achieves an F1-score of 0.8207, with a precision of 0.7941 and a recall of 0.8491.
### Languages
The dataset consists of text extracted from scientific articles written in English (en).
Dataset Structure
-----------------
### Data Instances
Each instance describes the two genes and two variants composing the potential digenic variant combination, as well as the fragment of text with the masked entities, the PubMed Central identifier of the article, and the label of the instance (i.e., whether the fragment of text contains a valid digenic variant combination or not, labelled 1 and 0 respectively).
### Data Fields
* 'sentence': *string*, text containing the entities masked with either @GENE$ for the gene type or @VARIANT$ for the mutation type. The text can be a single sentence or cross-sentence, but no longer than 256 tokens according to the BiomedBERT tokenizer (see BiomedBERT).
* 'pmcid': *int*, PubMed Central identifier of the article from which the text was extracted (URL).
* 'gene1': *string*, first gene mention as it appears in the text, followed by its internal identifier.
* 'gene2': *string*, second gene mention as it appears in the text, followed by its internal identifier.
* 'variant1': *string*, first variant mention as it appears in the text, with its normalized form, HGVS form (URL), the gene where it occurs, and, when available, a variant identifier.
* 'variant2': *string*, second variant mention as it appears in the text, with its normalized form, HGVS form (URL), the gene where it occurs, and, when available, a variant identifier.
* 'label': *int*, class of the instance, 0 if there is no relation between the entities, 1 if there is.
### Data Splits
The dataset is split into train, dev and test sets. Splitting was done with a stratified split based on the labels in order to maintain a similar distribution (around 9.4% positive class) across splits.
Dataset Creation
----------------
### Curation Rationale
The curation of oligogenic variant combinations requires high expertise and time, while the number of genetic studies has increased over the years, especially with the advent of next-generation sequencing technologies. This dataset aims to support such curation by extracting potential candidates directly from the text.
### Source Data
#### Initial Data Collection and Normalization
Scientific articles containing oligogenic variant combinations potentially causing genetic diseases were retrieved from OLIDA, the OLIgogenic diseases DAtabase. Articles were filtered to keep only those containing at least one digenic variant combination, i.e. a combination of two genes with at least one variant in each gene. The articles were then pre-annotated with the help of the PubTator API (URL) to obtain the full text of the articles with the genes and variants identified.
Fragments of text to annotate were created by extracting all the texts (both single and cross-sentence) containing two different gene mentions and two different variant mentions, with a maximum length of 256 tokens as tokenized by the BiomedBERT tokenizer (see BiomedBERT). Texts containing tables or incomplete sentences were excluded during the annotation process.
#### Who are the source language producers?
The dataset is machine-generated: the full annotated text of each article is retrieved from the PubTator API, and the relevant texts containing two genes and two variants are then generated through Python scripts.
### Annotations
The annotation was done with the ALAMBIC platform, with an Active Learning (AL) setting (see Nachtegael 2023).
#### Annotation process
1500 samples were randomly selected to be labelled, with 1000 samples for the test set and 500 as seed for the AL process. 9 iterations of AL selection of 500 samples with the Margin Sampling strategy were conducted, with BiomedBERT as the model used for the selection (see BiomedBERT) and the selected samples subsequently annotated. The annotation limit was initially set at 6000 samples, but was exceeded after several restarts of the process caused by the exclusion of invalid instances.
The annotator had access to the genes and variants, the PMCID of the article the text was extracted from, and the text with the masked entities. One of three possible classes is given to each fragment of text:
* *0* for the absence of a digenic variant combination relation in the text.
* *1* for the presence of a digenic variant combination relation. The genes and the variants need to relate to each other for there to be a valid relation. If the entities are involved in an alleged digenic relation according to OLIDA, but the syntactic aspects of the text show no clear relation between the entities, then the text contains no relation. The combination needs to be carried by at least one individual.
* *-1* if the fragment of text is not valid. The text can be deemed invalid if one of the entities is not a valid entity, i.e. not a valid gene name or mutation, or if the text contains an unfinished or invalid sentence, e.g. with part of the text being a table. Invalid gene names and mutations comprised: (a) errors in the annotation, e.g. P05, a patient denomination, which was annotated as a gene name, or the cell line HEK293, which was annotated as a variant; (b) genes in non-human species; (c) isoform denominations of proteins; and (d) gene products. Tables were excluded as they are not considered comprehensible text without the notion of their structure. To be used, they would need to be parsed in order to convey this structure, which is not rendered in free text.
Only instances from the positive and the negative classes (labels of *0* and *1*) are included in the final dataset; all the invalid instances are excluded from further use as they do not meet our quality standards.
It must be noted that while the articles were filtered for those containing digenic variant combinations, it is also possible to find oligogenic variant combinations involving more than two genes and/or two variants. In that case, a subset of those variant combinations, i.e. two gene-variant pairs which are connected in the text and are part of the variant combination, was considered a valid digenic variant combination and classified as class *1*.
#### Who are the annotators?
Annotation was done by Charlotte Nachtegael, one of the authors and a curator of OLIDA, with a substantial background in genetics and molecular biology.
### Personal and Sensitive Information
None.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The dataset should help with the curation of complex genetic diseases, contributing to research on such medical problems. At the moment, it should be used exclusively to support the curation of oligogenic/digenic variant combinations, not as the curation itself.
### Discussion of Biases
Some diseases are more studied/known as oligogenic, so the variants and genes could be biased towards the better-known gene panels. Moreover, some articles are more represented in the dataset than others because they had more genes and/or variants in their text.
The named entity recognition step was also done automatically, so it is possible that some entities were not recognized and were thus ignored when creating the candidates. When errors were encountered during the annotation process, the corresponding candidates were excluded from the dataset.
### Other Known Limitations
None.
Additional Information
----------------------
### Dataset Curators
This work was supported by the Service Public de Wallonie Recherche by DIGITALWALLONIA4.AI [2010235—ARIAC]
* Charlotte Nachtegael, Université Libre de Bruxelles, Belgium
### Licensing Information
This dataset is under the Creative Commons Attribution Non Commercial Share Alike 4.0 license.
TBA
### Contributions
Thanks to Barbara Gravel and Sofia Papadimitriou for their initial work with OLIDA.
Thanks to Jacopo De Stefani, Anthony Cnudde and Tom Lenaerts for their help with the experimental design and writing of the paper for DUVEL.
|
[
"### Dataset Summary\n\n\nThis dataset was created to identity oligogenic variant combinations, i.e. relation between several genes and their mutations, causing genetic diseases in scientific articles written in english. At the moment, it contains only digenic variant combinations, i.e. relations between two genes and at least two variants. The dataset is intended for binary relation extraction where the entities are masked within the text.",
"### Supported Task\n\n\nThe dataset can be used to train a model for ''text-classification'' (as the relation extraction task is here considered as a classification task). Success on this task is typically measured by achieving a high F1-score.\n\n\nThe BioLinkBERT model (URL currently achieves the following score of 0.8207 F1-score, with a precision of 0.7941 and a recall of 0.8491.",
"### Languages\n\n\nThe dataset consists in text extracted from scientific articles written in english (en).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach instance describes the two genes and two variants composing the potential digenic variant combination, as well as the fragment of text with the masked entities, the PubMed Central identifier of the article and the label of the instance (i.e., if the fragment of text contains a valid digenic variant combination or not, respectively 1 and 0).",
"### Data Fields\n\n\n* 'sentence': *string*, text containing the entities masked with either @GENE$ for the gene type or @VARIANT$ for the mutation type. The text can be either single or cross-sentence, but no longer than 256 tokens according to the BiomedBERT tokenizer (see BiomedBERT).\n* 'pmcid': *int*, PubMed Central identifier of the article from which the text was extracted (URL\n* 'gene1': *string*, first gene mention as it appears in the text and internal identifier.\n* 'gene2': *string*, second gene mention as it appears in the text and internal identifier.\n* 'variant1': *string*, first variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'variant2': *string*, second variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'label': *int*, class of the instance, 0 if there is no relation between the entities, 1 if there is.",
"### Data Splits\n\n\nDataset is split between train, dev and test sets. Splitting has been done with a stratified split based on the labels in order to maintain a similar distribution (around 9.4% of positive class).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe curation of oligogenic variant combinations requires high expertise and time, while the number of genetic studies have increased across the years, especially with the apparition of the next-generation sequencing technologies. This dataset aims to support such curation by extracting potential candidates directly from the text.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nScientific articles containing oligogenic variant combinations potentially causing genetic diseases were retrieved from OLIDA, the OLIgogenic diseases DAtabase. Articles were filtered to keep only those containing at least one digenic variant combination, i.e. combination between two genes and at least one variant in each gene. The articles were then pre-annotated with the help of PubTator API (URL to obtain the full text of the articles with the genes and variants identified.\n\n\nFragment of texts to annotate were created by extracting all the text (both single and cross-sentence) containing two different gene and two different variant mentions with a maximum length of 256 tokens, as tokenized by the BiomedBERT tokenizer (see BiomedBERT). Text containing tables or incomplete sentences were excluded during the annotation process.",
"#### Who are the source language producers?\n\n\nThe dataset is machine-generated, as the full annotated text of the article is retrieved from the PubTator API and then the relevant text containing two genes and two variants are generated through python scripts.",
"### Annotations\n\n\nThe annotation was done with the ALAMBIC platform, with an Active Learning (AL) setting (see Nachtegael 2023).",
"#### Annotation process\n\n\n1500 samples were randomly selected to be labelled, with 1000 samples for the test set and 500 as seed for the AL process. 9 iterations of AL selection of 500 samples with the Margin Sampling strategy was conducted with BiomedBERT as the model used for the selection (see BiomedBERT), samples subsequently annotated. The annotation limit was initially set at 6000 samples, but was exceeded due to several restarts of the process due to exclusion of invalid instances.\n\n\nThe annotator had access to the genes and variants, the PMCID of the article the text was extracted from and the text with the masked entities. One out of three possible classes is given to each fragment of text :\n\n\n* *0* for the absence of a digenic variant combination relation in the text.\n* *1* for the presence of a digenic variant combination relation. The genes and the variants need to be relating to each other for there to be a valid relation. If the entities are involved in an alleged digenic relation according to OLIDA, but the syntactic aspects of the text showed no clear relation between the entities, then the text contains no relation. The combination needs to be carried by at least one individual.\n* *-1* if the fragment of text is not valid. The text can be deemed as invalid if one of the entities is not a valid entity, i.e. not a valid gene name or mutation, or the text contains an unfinished sentence or invalid sentence, i.e. with part of the text being a table. Invalid gene name and mutation comprised : (a) error in the annotation, e.g. P05, a patient denomination, which was annotated as a gene name or the cell line HEK293 which was annotated as variant; (b) genes in species not human; (c) Isoforms denominations of proteins and (d) gene products. Tables were excluded as it is not considered as comprehensive text without the notion of their structure. To be used, they would need to be parsed in order to convey this structure, which is not rendered in free text.\n\n\nOnly instances from the positive and the negative classes (labels of *0* and *1*) are included in the final data set, all the invalid instances are excluded from further use as they do not fill our quality standards.\n\n\nIt must be noted that while the articles were filtered for those containing digenic variant combinations, it is possible to also find oligogenic variant combinations involving more than two genes and/or two variants. In that case, a subset of those variant combinations, i.e. two gene-variant pairs which are connected in the text and are part of the variant combination, were considered as a valid digenic variant combinations and classified them as class *1*.",
"#### Who are the annotators?\n\n\nAnnotation was done by Charlotte Nachtegael, one of the author and curator of OLIDA, with a substantial background in genetics and molecular biology.",
"### Personal and Sensitive Information\n\n\nNone.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe dataset should help to the curation of complex genetic diseases, contributing to the research of such medical problems. It should not, at the moment, but used exclusively for support of the curation and not as the curation iteself of oligogenic/digenic variant combinations.",
"### Discussion of Biases\n\n\nSome diseases are more studied/known as oligogenic, thus the variants and genes could be biased towards those gene panels more well-known. Moreover, some articles are more represented in the dataset than others because they had more genes and/or variants in the text than others.\n\n\nThe named entity recognition step was also done automatically, so it could be possible that some entities were not recognized and thus ignored when creating the candidates. When errors were encountered during the annotation process, the candidates were excluded from the dataset.",
"### Other Known Limitations\n\n\nNone.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThis work was supported by the Service Public de Wallonie Recherche by DIGITALWALLONIA4.AI [2010235—ARIAC]\n\n\n* Charlotte Nachtegael, Université Libre de Bruxelles, Belgium",
"### Licensing Information\n\n\nThis dataset is under the Creative Commons Attribution Non Commercial Share Alike 4.0 license.\n\n\nTBA",
"### Contributions\n\n\nThanks to Barbara Gravel and Sofia Papadimitriou for their initial work with OLIDA.\nThanks to Jacopo De Stefani, Anthony Cnudde and Tom Lenaerts for their help with the experimental design and writing of the paper for DUVEL."
] |
[
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #biology #medical #genetics #doi-10.57967/hf/1571 #region-us \n",
"### Dataset Summary\n\n\nThis dataset was created to identity oligogenic variant combinations, i.e. relation between several genes and their mutations, causing genetic diseases in scientific articles written in english. At the moment, it contains only digenic variant combinations, i.e. relations between two genes and at least two variants. The dataset is intended for binary relation extraction where the entities are masked within the text.",
"### Supported Task\n\n\nThe dataset can be used to train a model for ''text-classification'' (as the relation extraction task is here considered as a classification task). Success on this task is typically measured by achieving a high F1-score.\n\n\nThe BioLinkBERT model (URL currently achieves the following score of 0.8207 F1-score, with a precision of 0.7941 and a recall of 0.8491.",
"### Languages\n\n\nThe dataset consists in text extracted from scientific articles written in english (en).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach instance describes the two genes and two variants composing the potential digenic variant combination, as well as the fragment of text with the masked entities, the PubMed Central identifier of the article and the label of the instance (i.e., if the fragment of text contains a valid digenic variant combination or not, respectively 1 and 0).",
"### Data Fields\n\n\n* 'sentence': *string*, text containing the entities masked with either @GENE$ for the gene type or @VARIANT$ for the mutation type. The text can be either single or cross-sentence, but no longer than 256 tokens according to the BiomedBERT tokenizer (see BiomedBERT).\n* 'pmcid': *int*, PubMed Central identifier of the article from which the text was extracted (URL\n* 'gene1': *string*, first gene mention as it appears in the text and internal identifier.\n* 'gene2': *string*, second gene mention as it appears in the text and internal identifier.\n* 'variant1': *string*, first variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'variant2': *string*, second variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'label': *int*, class of the instance, 0 if there is no relation between the entities, 1 if there is.",
"### Data Splits\n\n\nDataset is split between train, dev and test sets. Splitting has been done with a stratified split based on the labels in order to maintain a similar distribution (around 9.4% of positive class).\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe curation of oligogenic variant combinations requires high expertise and time, while the number of genetic studies have increased across the years, especially with the apparition of the next-generation sequencing technologies. This dataset aims to support such curation by extracting potential candidates directly from the text.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nScientific articles containing oligogenic variant combinations potentially causing genetic diseases were retrieved from OLIDA, the OLIgogenic diseases DAtabase. Articles were filtered to keep only those containing at least one digenic variant combination, i.e. combination between two genes and at least one variant in each gene. The articles were then pre-annotated with the help of PubTator API (URL to obtain the full text of the articles with the genes and variants identified.\n\n\nFragment of texts to annotate were created by extracting all the text (both single and cross-sentence) containing two different gene and two different variant mentions with a maximum length of 256 tokens, as tokenized by the BiomedBERT tokenizer (see BiomedBERT). Text containing tables or incomplete sentences were excluded during the annotation process.",
"#### Who are the source language producers?\n\n\nThe dataset is machine-generated, as the full annotated text of the article is retrieved from the PubTator API and then the relevant text containing two genes and two variants are generated through python scripts.",
"### Annotations\n\n\nThe annotation was done with the ALAMBIC platform, with an Active Learning (AL) setting (see Nachtegael 2023).",
"#### Annotation process\n\n\n1500 samples were randomly selected to be labelled, with 1000 samples for the test set and 500 as seed for the AL process. 9 iterations of AL selection of 500 samples with the Margin Sampling strategy was conducted with BiomedBERT as the model used for the selection (see BiomedBERT), samples subsequently annotated. The annotation limit was initially set at 6000 samples, but was exceeded due to several restarts of the process due to exclusion of invalid instances.\n\n\nThe annotator had access to the genes and variants, the PMCID of the article the text was extracted from and the text with the masked entities. One out of three possible classes is given to each fragment of text :\n\n\n* *0* for the absence of a digenic variant combination relation in the text.\n* *1* for the presence of a digenic variant combination relation. The genes and the variants need to be relating to each other for there to be a valid relation. If the entities are involved in an alleged digenic relation according to OLIDA, but the syntactic aspects of the text showed no clear relation between the entities, then the text contains no relation. The combination needs to be carried by at least one individual.\n* *-1* if the fragment of text is not valid. The text can be deemed as invalid if one of the entities is not a valid entity, i.e. not a valid gene name or mutation, or the text contains an unfinished sentence or invalid sentence, i.e. with part of the text being a table. Invalid gene name and mutation comprised : (a) error in the annotation, e.g. P05, a patient denomination, which was annotated as a gene name or the cell line HEK293 which was annotated as variant; (b) genes in species not human; (c) Isoforms denominations of proteins and (d) gene products. Tables were excluded as it is not considered as comprehensive text without the notion of their structure. To be used, they would need to be parsed in order to convey this structure, which is not rendered in free text.\n\n\nOnly instances from the positive and the negative classes (labels of *0* and *1*) are included in the final data set, all the invalid instances are excluded from further use as they do not fill our quality standards.\n\n\nIt must be noted that while the articles were filtered for those containing digenic variant combinations, it is possible to also find oligogenic variant combinations involving more than two genes and/or two variants. In that case, a subset of those variant combinations, i.e. two gene-variant pairs which are connected in the text and are part of the variant combination, were considered as a valid digenic variant combinations and classified them as class *1*.",
"#### Who are the annotators?\n\n\nAnnotation was done by Charlotte Nachtegael, one of the author and curator of OLIDA, with a substantial background in genetics and molecular biology.",
"### Personal and Sensitive Information\n\n\nNone.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe dataset should help to the curation of complex genetic diseases, contributing to the research of such medical problems. It should not, at the moment, but used exclusively for support of the curation and not as the curation iteself of oligogenic/digenic variant combinations.",
"### Discussion of Biases\n\n\nSome diseases are more studied/known as oligogenic, thus the variants and genes could be biased towards those gene panels more well-known. Moreover, some articles are more represented in the dataset than others because they had more genes and/or variants in the text than others.\n\n\nThe named entity recognition step was also done automatically, so it could be possible that some entities were not recognized and thus ignored when creating the candidates. When errors were encountered during the annotation process, the candidates were excluded from the dataset.",
"### Other Known Limitations\n\n\nNone.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThis work was supported by the Service Public de Wallonie Recherche by DIGITALWALLONIA4.AI [2010235—ARIAC]\n\n\n* Charlotte Nachtegael, Université Libre de Bruxelles, Belgium",
"### Licensing Information\n\n\nThis dataset is under the Creative Commons Attribution Non Commercial Share Alike 4.0 license.\n\n\nTBA",
"### Contributions\n\n\nThanks to Barbara Gravel and Sofia Papadimitriou for their initial work with OLIDA.\nThanks to Jacopo De Stefani, Anthony Cnudde and Tom Lenaerts for their help with the experimental design and writing of the paper for DUVEL."
] |
[
107,
99,
99,
29,
88,
282,
55,
72,
4,
205,
62,
32,
650,
42,
21,
71,
138,
17,
48,
27,
59
] |
[
"passage: TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #biology #medical #genetics #doi-10.57967/hf/1571 #region-us \n### Dataset Summary\n\n\nThis dataset was created to identity oligogenic variant combinations, i.e. relation between several genes and their mutations, causing genetic diseases in scientific articles written in english. At the moment, it contains only digenic variant combinations, i.e. relations between two genes and at least two variants. The dataset is intended for binary relation extraction where the entities are masked within the text.### Supported Task\n\n\nThe dataset can be used to train a model for ''text-classification'' (as the relation extraction task is here considered as a classification task). Success on this task is typically measured by achieving a high F1-score.\n\n\nThe BioLinkBERT model (URL currently achieves the following score of 0.8207 F1-score, with a precision of 0.7941 and a recall of 0.8491.### Languages\n\n\nThe dataset consists in text extracted from scientific articles written in english (en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach instance describes the two genes and two variants composing the potential digenic variant combination, as well as the fragment of text with the masked entities, the PubMed Central identifier of the article and the label of the instance (i.e., if the fragment of text contains a valid digenic variant combination or not, respectively 1 and 0).",
"passage: ### Data Fields\n\n\n* 'sentence': *string*, text containing the entities masked with either @GENE$ for the gene type or @VARIANT$ for the mutation type. The text can be either single or cross-sentence, but no longer than 256 tokens according to the BiomedBERT tokenizer (see BiomedBERT).\n* 'pmcid': *int*, PubMed Central identifier of the article from which the text was extracted (URL\n* 'gene1': *string*, first gene mention as it appears in the text and internal identifier.\n* 'gene2': *string*, second gene mention as it appears in the text and internal identifier.\n* 'variant1': *string*, first variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'variant2': *string*, second variant mention as it appears in the text, with its normalized form, HGVS form (URL gene where it occurs, and eventually variation identifier is available.\n* 'label': *int*, class of the instance, 0 if there is no relation between the entities, 1 if there is.### Data Splits\n\n\nDataset is split between train, dev and test sets. Splitting has been done with a stratified split based on the labels in order to maintain a similar distribution (around 9.4% of positive class).\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe curation of oligogenic variant combinations requires high expertise and time, while the number of genetic studies have increased across the years, especially with the apparition of the next-generation sequencing technologies. This dataset aims to support such curation by extracting potential candidates directly from the text.### Source Data#### Initial Data Collection and Normalization\n\n\nScientific articles containing oligogenic variant combinations potentially causing genetic diseases were retrieved from OLIDA, the OLIgogenic diseases DAtabase. Articles were filtered to keep only those containing at least one digenic variant combination, i.e. combination between two genes and at least one variant in each gene. The articles were then pre-annotated with the help of PubTator API (URL to obtain the full text of the articles with the genes and variants identified.\n\n\nFragment of texts to annotate were created by extracting all the text (both single and cross-sentence) containing two different gene and two different variant mentions with a maximum length of 256 tokens, as tokenized by the BiomedBERT tokenizer (see BiomedBERT). Text containing tables or incomplete sentences were excluded during the annotation process.#### Who are the source language producers?\n\n\nThe dataset is machine-generated, as the full annotated text of the article is retrieved from the PubTator API and then the relevant text containing two genes and two variants are generated through python scripts.### Annotations\n\n\nThe annotation was done with the ALAMBIC platform, with an Active Learning (AL) setting (see Nachtegael 2023)."
] |
f633a434c9465389b0938059b97d6c0fc90611dd
|
# Dataset of reisen_udongein_inaba/鈴仙・優曇華院・イナバ/레이센우동게인이나바 (Touhou)
This is the dataset of reisen_udongein_inaba/鈴仙・優曇華院・イナバ/레이센우동게인이나바 (Touhou), containing 500 images and their tags.
The core tags of this character are `animal_ears, long_hair, rabbit_ears, purple_hair, red_eyes, very_long_hair, breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 657.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_udongein_inaba_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 393.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_udongein_inaba_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1285 | 843.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_udongein_inaba_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 591.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_udongein_inaba_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1285 | 1.12 GiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_udongein_inaba_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/reisen_udongein_inaba_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, blazer, solo, red_necktie, skirt, smile, black_thighhighs, blush, crescent, zettai_ryouiki |
| 1 | 5 |  |  |  |  |  | 1girl, blazer, red_necktie, simple_background, skirt, solo, shirt, smile, white_background, blush, crescent, socks, finger_gun, long_sleeves |
| 2 | 14 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, red_necktie, solo, white_shirt, blazer, blush, collared_shirt, pleated_skirt, pink_skirt, simple_background, black_jacket, hair_between_eyes, cowboy_shot, white_background, crescent_pin, open_mouth, smile, standing, thighhighs, zettai_ryouiki |
| 3 | 5 |  |  |  |  |  | 1girl, blazer, rabbit_tail, skirt, solo, black_thighhighs, rabbit_girl, red_necktie, one_eye_closed, pointing, smile, zettai_ryouiki |
| 4 | 5 |  |  |  |  |  | 1girl, black_jacket, blazer, closed_mouth, collared_shirt, long_sleeves, pleated_skirt, shoes, solo, white_shirt, white_socks, pink_skirt, standing, black_footwear, buttons, finger_gun, full_body, looking_at_viewer, brown_footwear, crescent_pin, danmaku, hair_between_eyes, red_necktie, simple_background, smile |
| 5 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, red_necktie, solo, white_shirt, white_background, simple_background, blush, collared_shirt, puffy_short_sleeves, smile, cowboy_shot, open_mouth, red_skirt |
| 6 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, puffy_short_sleeves, red_necktie, solo, white_shirt, collared_shirt, loafers, pink_skirt, white_socks, closed_mouth, hair_between_eyes, brown_footwear, carrot, blush, full_body, red_skirt, standing, full_moon, holding_gun, kneehighs, night_sky, rabbit_tail, smile, starry_sky |
| 7 | 5 |  |  |  |  |  | 1girl, blush, cleavage, large_breasts, looking_at_viewer, pink_panties, solo, navel, open_shirt, pink_bra, collarbone, bare_shoulders, black_thighhighs, dress_shirt, long_sleeves, no_pants, open_mouth, red_necktie |
| 8 | 6 |  |  |  |  |  | 1girl, alternate_costume, blush, looking_at_viewer, outdoors, pleated_skirt, serafuku, smile, solo, day, sailor_collar, blue_skirt, cloud, red_neckerchief, short_sleeves, standing, white_shirt, blue_sky, closed_mouth, holding_bag, pink_hair, school_bag |
| 9 | 16 |  |  |  |  |  | 1girl, solo, blush, rabbit_girl, rabbit_tail, bare_shoulders, large_breasts, playboy_bunny, looking_at_viewer, wrist_cuffs, cleavage, detached_collar, leotard, ass, simple_background, black_pantyhose, white_background |
| 10 | 5 |  |  |  |  |  | 1girl, medium_breasts, solo, blush, cleavage, looking_at_viewer, smile, frilled_bikini, front-tie_top, navel, open_mouth, barefoot, collarbone, side-tie_bikini_bottom, wariza, water |
| 11 | 9 |  |  |  |  |  | 1girl, solo, blush, enmaided, looking_at_viewer, maid_headdress, white_apron, hair_between_eyes, maid_apron, black_dress, open_mouth, short_sleeves, bowtie, frills, long_sleeves, simple_background, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blazer | solo | red_necktie | skirt | smile | black_thighhighs | blush | crescent | zettai_ryouiki | simple_background | shirt | white_background | socks | finger_gun | long_sleeves | looking_at_viewer | white_shirt | collared_shirt | pleated_skirt | pink_skirt | black_jacket | hair_between_eyes | cowboy_shot | crescent_pin | open_mouth | standing | thighhighs | rabbit_tail | rabbit_girl | one_eye_closed | pointing | closed_mouth | shoes | white_socks | black_footwear | buttons | full_body | brown_footwear | danmaku | puffy_short_sleeves | red_skirt | loafers | carrot | full_moon | holding_gun | kneehighs | night_sky | starry_sky | cleavage | large_breasts | pink_panties | navel | open_shirt | pink_bra | collarbone | bare_shoulders | dress_shirt | no_pants | alternate_costume | outdoors | serafuku | day | sailor_collar | blue_skirt | cloud | red_neckerchief | short_sleeves | blue_sky | holding_bag | pink_hair | school_bag | playboy_bunny | wrist_cuffs | detached_collar | leotard | ass | black_pantyhose | medium_breasts | frilled_bikini | front-tie_top | barefoot | side-tie_bikini_bottom | wariza | water | enmaided | maid_headdress | white_apron | maid_apron | black_dress | bowtie | frills |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------|:-------|:--------------|:--------|:--------|:-------------------|:--------|:-----------|:-----------------|:--------------------|:--------|:-------------------|:--------|:-------------|:---------------|:--------------------|:--------------|:-----------------|:----------------|:-------------|:---------------|:--------------------|:--------------|:---------------|:-------------|:-----------|:-------------|:--------------|:--------------|:-----------------|:-----------|:---------------|:--------|:--------------|:-----------------|:----------|:------------|:-----------------|:----------|:----------------------|:------------|:----------|:---------|:------------|:--------------|:------------|:------------|:-------------|:-----------|:----------------|:---------------|:--------|:-------------|:-----------|:-------------|:-----------------|:--------------|:-----------|:--------------------|:-----------|:-----------|:------|:----------------|:-------------|:--------|:------------------|:----------------|:-----------|:--------------|:------------|:-------------|:----------------|:--------------|:------------------|:----------|:------|:------------------|:-----------------|:-----------------|:----------------|:-----------|:-------------------------|:---------|:--------|:-----------|:-----------------|:--------------|:-------------|:--------------|:---------|:---------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | X | | X | | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | | X | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 13 |  |  |  |  |  | X | | X | X | | X | | X | | | X | | X | | | | X | X | X | | | | | X | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | X | X | | X | | X | | | | | | | | | X | X | X | | X | | X | | | | X | | X | | | | X | | X | | | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | X | X | | | X | X | | | | | | | | X | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | X | | | X | | X | | | | | | | | | X | X | | X | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 9 | 16 |  |  |  |  |  | X | | X | | | | | X | | | X | | X | | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | X | X | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | X | | | X | | X | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | |
| 11 | 9 |  |  |  |  |  | X | | X | | | | | X | | | X | | | | | X | X | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/reisen_udongein_inaba_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T05:08:56+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:08:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of reisen\_udongein\_inaba/鈴仙・優曇華院・イナバ/레이센우동게인이나바 (Touhou)
==================================================================
This is the dataset of reisen\_udongein\_inaba/鈴仙・優曇華院・イナバ/레이센우동게인이나바 (Touhou), containing 500 images and their tags.
The core tags of this character are 'animal\_ears, long\_hair, rabbit\_ears, purple\_hair, red\_eyes, very\_long\_hair, breasts, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7a2dfddf88d136626cace526018d2ad90eedccba
|
# Dataset Card for "direct_tv_vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jganzabalseenka/direct_tv_vectors
|
[
"region:us"
] |
2023-08-17T05:12:31+00:00
|
{"dataset_info": {"features": [{"name": "cluster_frames", "sequence": {"sequence": "int64"}}, {"name": "cluster_vectors", "sequence": {"sequence": "float64"}}, {"name": "cluster_predictions", "sequence": "int64"}, {"name": "distances_between_clusters", "sequence": {"sequence": "float64"}}, {"name": "video_path", "dtype": "string"}, {"name": "different_rows", "list": [{"name": "black_image", "dtype": "bool"}, {"name": "frame_number", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "horizontal_check", "dtype": "bool"}, {"name": "horizontal_xmax", "dtype": "int64"}, {"name": "horizontal_xmin", "dtype": "int64"}, {"name": "horizontal_ymax", "dtype": "int64"}, {"name": "horizontal_ymin", "dtype": "int64"}, {"name": "is_L_shape", "dtype": "bool"}, {"name": "vertical_check", "dtype": "bool"}, {"name": "vertical_xmax", "dtype": "int64"}, {"name": "vertical_xmin", "dtype": "int64"}, {"name": "vertical_ymax", "dtype": "int64"}, {"name": "vertical_ymin", "dtype": "int64"}, {"name": "width", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 50750, "num_examples": 2}], "download_size": 0, "dataset_size": 50750}}
|
2023-08-17T05:15:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "direct_tv_vectors"
More Information needed
|
[
"# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
8843848702c494514c57cbaba6a34fa39dc1fc19
|
# Dataset of prim (Pokémon)
This is the dataset of prim (Pokémon), containing 112 images and their tags.
The core tags of this character are `blonde_hair, breasts, blue_eyes, long_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 112 | 94.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/prim_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 112 | 61.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/prim_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 220 | 116.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/prim_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 112 | 86.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/prim_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 220 | 152.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/prim_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/prim_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
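If you only want the plain `IMG+TXT` packages from the table above rather than the waifuc-loadable raw archive, a minimal sketch along the same lines works too. It assumes (as the `IMG+TXT` type suggests, though this layout is an assumption rather than documented behaviour) that each image in the archive is paired with a same-named `.txt` file of tags:
```python
import os
import zipfile
from pathlib import Path
from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT archive instead of the waifuc raw one
zip_file = hf_hub_download(
    repo_id='CyberHarem/prim_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each tag file with its same-named image (assumed layout)
for txt_path in Path(dataset_dir).rglob('*.txt'):
    tags = txt_path.read_text(encoding='utf-8').strip()
    images = [p for p in txt_path.parent.glob(txt_path.stem + '.*')
              if p.suffix.lower() in {'.png', '.jpg', '.jpeg', '.webp'}]
    if images:
        print(images[0].name, tags)
```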
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, collarbone, smile, eyelashes, long_sleeves, purple_dress, solo, white_gloves, cleavage, closed_mouth, looking_at_viewer, makeup, upper_body, bangs, hand_up, own_hands_together |
| 1 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, purple_dress, smile, solo, white_gloves, closed_mouth, eyelashes, full_body, holding_poke_ball, poke_ball_(basic), standing, collarbone, grey_footwear, shoes, sleeves_past_elbows, white_background, green_eyes, grey_gloves, long_dress, long_sleeves, makeup, puffy_sleeves, simple_background |
| 2 | 13 |  |  |  |  |  | 1boy, hetero, nipples, 1girl, blush, solo_focus, penis, paizuri, smile, gloves, nude, cum_on_body, huge_breasts, makeup, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | collarbone | smile | eyelashes | long_sleeves | purple_dress | solo | white_gloves | cleavage | closed_mouth | looking_at_viewer | makeup | upper_body | bangs | hand_up | own_hands_together | full_body | holding_poke_ball | poke_ball_(basic) | standing | grey_footwear | shoes | sleeves_past_elbows | white_background | green_eyes | grey_gloves | long_dress | puffy_sleeves | simple_background | 1boy | hetero | nipples | blush | solo_focus | penis | paizuri | gloves | nude | cum_on_body | huge_breasts | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------|:------------|:---------------|:---------------|:-------|:---------------|:-----------|:---------------|:--------------------|:---------|:-------------|:--------|:----------|:---------------------|:------------|:--------------------|:--------------------|:-----------|:----------------|:--------|:----------------------|:-------------------|:-------------|:--------------|:-------------|:----------------|:--------------------|:-------|:---------|:----------|:--------|:-------------|:--------|:----------|:---------|:-------|:--------------|:---------------|:-------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/prim_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T05:13:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:29:43+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of prim (Pokémon)
=========================
This is the dataset of prim (Pokémon), containing 112 images and their tags.
The core tags of this character are 'blonde\_hair, breasts, blue\_eyes, long\_hair, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6f099259b367f12d7ffa900a58b05380e393141d
|
# Dataset Card for "direct_tv_vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Seenka/direct_tv_vectors
|
[
"region:us"
] |
2023-08-17T05:16:31+00:00
|
{"dataset_info": {"features": [{"name": "cluster_frames", "sequence": {"sequence": "int64"}}, {"name": "cluster_vectors", "sequence": {"sequence": "float64"}}, {"name": "cluster_predictions", "sequence": "int64"}, {"name": "distances_between_clusters", "sequence": {"sequence": "float64"}}, {"name": "video_path", "dtype": "string"}, {"name": "different_rows", "list": [{"name": "black_image", "dtype": "bool"}, {"name": "frame_number", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "horizontal_check", "dtype": "bool"}, {"name": "horizontal_xmax", "dtype": "int64"}, {"name": "horizontal_xmin", "dtype": "int64"}, {"name": "horizontal_ymax", "dtype": "int64"}, {"name": "horizontal_ymin", "dtype": "int64"}, {"name": "is_L_shape", "dtype": "bool"}, {"name": "vertical_check", "dtype": "bool"}, {"name": "vertical_xmax", "dtype": "int64"}, {"name": "vertical_xmin", "dtype": "int64"}, {"name": "vertical_ymax", "dtype": "int64"}, {"name": "vertical_ymin", "dtype": "int64"}, {"name": "width", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 50750, "num_examples": 2}], "download_size": 42175, "dataset_size": 50750}}
|
2023-08-17T05:16:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "direct_tv_vectors"
More Information needed
|
[
"# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"direct_tv_vectors\"\n\nMore Information needed"
] |
998698fc883c9885d966481e4a2dc7eaf7a85738
|
# Dataset Card for "code_exercises"
# Code exercise
This dataset is composed of a diverse set of \~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the [related GitHub repository](https://github.com/jina-ai/textbook).
The generated exercises follow the format of the [Human Eval benchmark](https://github.com/openai/human-eval). Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.
This approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ [(Gunasekar et al. 2023)](https://doi.org/10.48550/arXiv.2306.11644).
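For a quick look at the record format, here is a minimal loading sketch; the `problem` and `solution` column names follow the feature schema in this card's metadata, and streaming avoids pulling the full archive up front:
```python
from datasets import load_dataset

# each row pairs a Human-Eval-style prompt with its completion
ds = load_dataset('jinaai/code_exercises', split='train', streaming=True)

sample = next(iter(ds))
print(sample['problem'])   # function signature + descriptive docstring
print(sample['solution'])  # body that solves the exercise
```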
## Disclaimer
* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the [OpenAI Terms of Use](https://openai.com/policies/terms-of-use), in so far as legally applicable.
* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.
## Synthetic exercise creation
Model distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in [Textbooks Are All You Need](https://doi.org/10.48550/arXiv.2306.11644).
Key to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.
For fuller details and implementation code, see the [related GitHub repository](https://github.com/jina-ai/textbook).
### Diversity
The main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.
And even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.
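To make that difficulty concrete before describing the scheme: the easy half of deduplication, dropping verbatim repeats after normalization, is only a few lines (a toy sketch, not part of the published pipeline); the non-trivial half is everything this misses, such as exercises that differ only by renamed variables.
```python
import hashlib
import re

def normalized_fingerprint(code: str) -> str:
    """Exact-duplicate fingerprint: strip comments and whitespace, then hash.

    This only catches verbatim repeats; two exercises that differ by a
    renamed variable slip through, which is where near-duplicate
    filtering becomes the hard (and expensive) part.
    """
    stripped = re.sub(r'#.*', '', code)       # drop comments (roughly)
    stripped = re.sub(r'\s+', ' ', stripped)  # collapse whitespace
    return hashlib.sha256(stripped.encode('utf-8')).hexdigest()

seen: set[str] = set()

def is_new(code: str) -> bool:
    fp = normalized_fingerprint(code)
    if fp in seen:
        return False
    seen.add(fp)
    return True
```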
### Using a topic tree to build diverse prompts
We constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.
We generated prompts by randomly selecting two of those roughly two thousand topics and combining them:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
```
To increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
Write it for a {profession}.
```
In principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, this means our method can generate an 8 billion token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 exercises published here are a small random subset of what is possible.
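A sketch of the sampling described above is below; the topic names and the truncated profession list are placeholders for illustration, while the real tree has roughly 2000 leaves and 40 professions:
```python
import random

# placeholder stand-ins for the ~2000 fine-grained topics and 40 professions
topics = [f'topic_{i:04d}' for i in range(2000)]
professions = ['economist', 'engineer', 'social worker']  # 40 in the real list

def make_prompt(rng: random.Random) -> str:
    topic_1, topic_2 = rng.sample(topics, 2)   # unordered pair of topics
    profession = rng.choice(professions)
    return (f'Create a code completion exercise on the intersection of '
            f'{topic_1} and {topic_2}. Write it for a {profession}.')

# combinatorics from the paragraph above
n_pairs = len(topics) * (len(topics) - 1) // 2  # 1,999,000 topic pairs
print(n_pairs * len(professions))               # ~80 million with 40 professions

print(make_prompt(random.Random(0)))
```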
## Credits
This dataset was developed at [Jina.ai](https://jina.ai/)
|
jinaai/code_exercises
|
[
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
2023-08-17T05:38:59+00:00
|
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1121418005, "num_examples": 1468146}], "download_size": 486193162, "dataset_size": 1121418005}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-07T07:18:18+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100M<n<1B #language-English #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for "code_exercises"
# Code exercise
This dataset is composed of a diverse set of \~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the related GitHub repository.
The generated exercises follow the format of the Human Eval benchmark. Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.
This approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ (Gunasekar et al. 2023).
## Disclaimer
* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the OpenAI Terms of Use, in so far as legally applicable.
* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.
## Synthetic exercise creation
Model distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in Textbooks Are All You Need.
Key to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.
For fuller details and implementation code, see the related GitHub repository.
### Diversity
The main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.
And even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.
### Using a topic tree to build diverse prompts
We constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.
We generated prompts by randomly selecting two of those roughly two thousand topics and combining them:
To increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:
In principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, this means our method can generate an 8 billion token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 exercises published here are a small random subset of what is possible.
## Credits
This dataset was developed at URL
|
[
"# Dataset Card for \"code_exercises\"",
"# Code exercise\nThis dataset is composed of a diverse set of \\~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the related GitHub repository.\n\nThe generated exercises follow the format of the Human Eval benchmark. Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.\n\nThis approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ (Gunasekar et al. 2023).",
"## Disclaimer\n\n* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the OpenAI Terms of Use, in so far as legally applicable.\n\n* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.",
"## Synthetic exercise creation\n\nModel distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in Textbooks Are All You Need.\n\nKey to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.\n\nFor fuller details and implementation code, see the related GitHub repository.",
"### Diversity\n\nThe main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.\n\nAnd even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.",
"### Using a topic tree to build diverse prompts\n\nWe constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.\n\nWe generated prompts by randomly selecting two of those roughly two thousand topics and combining them:\n\n\n\nTo increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:\n\n\n\nIn principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, this means our method can generate an 8 billion token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 published here is a small random subset of what is possible.",
"## Credits\n\nThis dataset was developed at URL"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100M<n<1B #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"code_exercises\"",
"# Code exercise\nThis dataset is composed of a diverse set of \\~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the related GitHub repository.\n\nThe generated exercises follow the format of the Human Eval benchmark. Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.\n\nThis approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ (Gunasekar et al. 2023).",
"## Disclaimer\n\n* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the OpenAI Terms of Use, in so far as legally applicable.\n\n* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.",
"## Synthetic exercise creation\n\nModel distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in Textbooks Are All You Need.\n\nKey to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.\n\nFor fuller details and implementation code, see the related GitHub repository.",
"### Diversity\n\nThe main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.\n\nAnd even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.",
"### Using a topic tree to build diverse prompts\n\nWe constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.\n\nWe generated prompts by randomly selecting two of those roughly two thousand topics and combining them:\n\n\n\nTo increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:\n\n\n\nIn principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, this means our method can generate an 8 billion token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 published here is a small random subset of what is possible.",
"## Credits\n\nThis dataset was developed at URL"
] |
[
46,
12,
163,
180,
166,
171,
290,
10
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100M<n<1B #language-English #license-cc-by-nc-sa-4.0 #region-us \n# Dataset Card for \"code_exercises\"# Code exercise\nThis dataset is composed of a diverse set of \\~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5 knowledge about Python coding tasks into other (potentially smaller) models. The exercises have been generated by following the steps described in the related GitHub repository.\n\nThe generated exercises follow the format of the Human Eval benchmark. Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.\n\nThis approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ (Gunasekar et al. 2023).## Disclaimer\n\n* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the OpenAI Terms of Use, in so far as legally applicable.\n\n* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method."
] |
e242ae3bbca4e859ac237d66de1d24921e3cb0e2
|
# Dataset Card for "voxpopuli-enl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MikhailT/voxpopuli-enl
|
[
"region:us"
] |
2023-08-17T05:40:53+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": {"sequence": "float32"}}, {"name": "speaker_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 3314283701.0464153, "num_examples": 23742}, {"name": "test", "num_bytes": 367678030.7921715, "num_examples": 2640}], "download_size": 3608967113, "dataset_size": 3681961731.838587}}
|
2023-08-17T06:18:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "voxpopuli-enl"
More Information needed
|
[
"# Dataset Card for \"voxpopuli-enl\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"voxpopuli-enl\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"voxpopuli-enl\"\n\nMore Information needed"
] |
7c907f19a91af9b2be063f6cf23bfc3c9db4ca34
|
# Dataset Card for "duped-num-duplicates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/duped-num-duplicates
|
[
"region:us"
] |
2023-08-17T05:47:18+00:00
|
{"dataset_info": {"features": [{"name": "Index", "dtype": "int64"}, {"name": "Counts", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2342912000, "num_examples": 146432000}], "download_size": 982426113, "dataset_size": 2342912000}}
|
2023-08-25T12:25:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "duped-num-duplicates"
More Information needed
|
[
"# Dataset Card for \"duped-num-duplicates\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"duped-num-duplicates\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"duped-num-duplicates\"\n\nMore Information needed"
] |
67d1a1e44127fd8261889b61fcfdbd11c09cd74d
|
# Dataset of fuyou (Pokémon)
This is the dataset of fuyou (Pokémon), containing 200 images and their tags.
The core tags of this character are `short_hair, hair_ornament, dark_skin, hair_flower, dark-skinned_female, blue_eyes, breasts, black_hair, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 200 | 168.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 200 | 108.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 426 | 206.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 200 | 154.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 426 | 270.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyou_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/fuyou_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bandeau, blue_sarong, flower, navel, smile, bare_shoulders, solo, cleavage, midriff, print_sarong, blush, holding_poke_ball, tube_top, poke_ball_(basic), anklet, barefoot, hand_on_hip, large_breasts, open_mouth |
| 1 | 7 |  |  |  |  |  | 1girl, bare_shoulders, blue_sarong, flower, pokemon_(creature), print_sarong, smile, bandeau, navel, anklet, barefoot, midriff, open_mouth, tube_top |
| 2 | 6 |  |  |  |  |  | 1girl, eyelashes, open_mouth, :d, blue_sarong, pink_flower, pokemon_(creature), tongue, bangs, bare_shoulders, looking_at_viewer, strapless, blush, collarbone, navel, petals, spiked_hair, swimsuit |
| 3 | 8 |  |  |  |  |  | 1girl, bangs, detached_sleeves, dress, eyelashes, hairband, official_alternate_costume, pokemon_(creature), looking_at_viewer, open_mouth, tongue, blush, :d, hand_up |
| 4 | 14 |  |  |  |  |  | 1girl, flower, hetero, penis, sex, solo_focus, nipples, vaginal, 1boy, blue_sarong, blush, navel, open_mouth, spread_legs, smile, uncensored, cum_in_pussy, girl_on_top, medium_breasts, no_panties, print_sarong, bandeau, small_breasts, cowgirl_position, large_breasts, sweat |
| 5 | 5 |  |  |  |  |  | 1girl, flower, nipples, nude, smile, blush, solo, looking_at_viewer, closed_mouth, huge_breasts, large_breasts, medium_breasts, open_mouth, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bandeau | blue_sarong | flower | navel | smile | bare_shoulders | solo | cleavage | midriff | print_sarong | blush | holding_poke_ball | tube_top | poke_ball_(basic) | anklet | barefoot | hand_on_hip | large_breasts | open_mouth | pokemon_(creature) | eyelashes | :d | pink_flower | tongue | bangs | looking_at_viewer | strapless | collarbone | petals | spiked_hair | swimsuit | detached_sleeves | dress | hairband | official_alternate_costume | hand_up | hetero | penis | sex | solo_focus | nipples | vaginal | 1boy | spread_legs | uncensored | cum_in_pussy | girl_on_top | medium_breasts | no_panties | small_breasts | cowgirl_position | sweat | nude | closed_mouth | huge_breasts | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:--------------|:---------|:--------|:--------|:-----------------|:-------|:-----------|:----------|:---------------|:--------|:--------------------|:-----------|:--------------------|:---------|:-----------|:--------------|:----------------|:-------------|:---------------------|:------------|:-----|:--------------|:---------|:--------|:--------------------|:------------|:-------------|:---------|:--------------|:-----------|:-------------------|:--------|:-----------|:-----------------------------|:----------|:---------|:--------|:------|:-------------|:----------|:----------|:-------|:--------------|:-------------|:---------------|:--------------|:-----------------|:-------------|:----------------|:-------------------|:--------|:-------|:---------------|:---------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | | | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | | X | | X | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | | | | | | | | | X | | | | | | | | X | X | X | X | | X | X | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 5 | 5 |  |  |  |  |  | X | | | X | | X | | X | | | | X | | | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | X | | | | | X | X | X | X |
|
CyberHarem/fuyou_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T05:51:36+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T15:08:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of fuyou (Pokémon)
==========================
This is the dataset of fuyou (Pokémon), containing 200 images and their tags.
The core tags of this character are 'short\_hair, hair\_ornament, dark\_skin, hair\_flower, dark-skinned\_female, blue\_eyes, breasts, black\_hair, brown\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
997adceffc2a497d719612d9fc2447c93bce7fb3
|
# Dataset Card for "SecondTest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JorangHorse/SecondTest
|
[
"region:us"
] |
2023-08-17T05:52:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1008242.0, "num_examples": 2}], "download_size": 556516, "dataset_size": 1008242.0}}
|
2023-08-19T03:05:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SecondTest"
More Information needed
|
[
"# Dataset Card for \"SecondTest\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SecondTest\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SecondTest\"\n\nMore Information needed"
] |
37a824d061dd521823bfcf8e8f23a325a4b0e356
|
# Dataset of fujiwara_no_mokou/藤原妹紅/후지와라노모코 (Touhou)
This is the dataset of fujiwara_no_mokou/藤原妹紅/후지와라노모코 (Touhou), containing 500 images and their tags.
The core tags of this character are `long_hair, bow, hair_bow, red_eyes, very_long_hair, white_hair, ribbon, hair_ribbon, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 789.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fujiwara_no_mokou_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 450.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fujiwara_no_mokou_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1163 | 895.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fujiwara_no_mokou_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 700.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fujiwara_no_mokou_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1163 | 1.23 GiB | [Download](https://huggingface.co/datasets/CyberHarem/fujiwara_no_mokou_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/fujiwara_no_mokou_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, solo, suspenders, fire, pants, grey_hair |
| 1 | 9 |  |  |  |  |  | 1girl, fire, solo, suspenders, pants, shirt, grin |
| 2 | 20 |  |  |  |  |  | 1girl, solo, suspenders, white_bow, white_shirt, looking_at_viewer, collared_shirt, red_pants, simple_background, closed_mouth, grey_hair, white_background, hair_between_eyes, fire, juliet_sleeves, breasts, upper_body, buttons, ofuda_on_clothes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | suspenders | fire | pants | grey_hair | shirt | grin | white_bow | white_shirt | looking_at_viewer | collared_shirt | red_pants | simple_background | closed_mouth | white_background | hair_between_eyes | juliet_sleeves | breasts | upper_body | buttons | ofuda_on_clothes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:-------|:--------|:------------|:--------|:-------|:------------|:--------------|:--------------------|:-----------------|:------------|:--------------------|:---------------|:-------------------|:--------------------|:-----------------|:----------|:-------------|:----------|:-------------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | | X | X | | | | | | | | | | | | | | |
| 2 | 20 |  |  |  |  |  | X | X | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/fujiwara_no_mokou_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T05:54:04+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:18:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of fujiwara\_no\_mokou/藤原妹紅/후지와라노모코 (Touhou)
====================================================
This is the dataset of fujiwara\_no\_mokou/藤原妹紅/후지와라노모코 (Touhou), containing 500 images and their tags.
The core tags of this character are 'long\_hair, bow, hair\_bow, red\_eyes, very\_long\_hair, white\_hair, ribbon, hair\_ribbon, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
b905c09d5e1e0ec81ff5318d66e81436dce50bc0
|
# Dataset Card for "deduped-num-duplicates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/deduped-num-duplicates
|
[
"region:us"
] |
2023-08-17T06:01:10+00:00
|
{"dataset_info": {"features": [{"name": "Index", "dtype": "int64"}, {"name": "Counts", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2342912000, "num_examples": 146432000}], "download_size": 963525905, "dataset_size": 2342912000}}
|
2023-08-25T12:34:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "deduped-num-duplicates"
More Information needed
|
[
"# Dataset Card for \"deduped-num-duplicates\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"deduped-num-duplicates\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"deduped-num-duplicates\"\n\nMore Information needed"
] |
0a2538aec38cd51c387e944a2d31b4590ea71a77
|
# Dataset of mache/マーシュ (Pokémon)
This is the dataset of mache/マーシュ (Pokémon), containing 102 images and their tags.
The core tags of this character are `long_hair, black_hair, hair_ornament, purple_eyes, breasts, bangs, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 102 | 97.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mache_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 102 | 61.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mache_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 210 | 110.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mache_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 102 | 90.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mache_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 210 | 145.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mache_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mache_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
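If you then want to materialize the loaded items as one image+text pair per sample (matching the IMG+TXT packages above), waifuc ships exporters for that. A minimal sketch, assuming waifuc's `TextualInversionExporter` API and a hypothetical output directory:
```python
from waifuc.export import TextualInversionExporter
from waifuc.source import LocalSource

# re-export the raw dataset as <image>.png + <image>.txt tag files
source = LocalSource('dataset_dir')
source.export(TextualInversionExporter('exported_dir'))
```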
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, kimono, long_sleeves, smile, solo, wide_sleeves, choker, looking_at_viewer, pantyhose, simple_background, standing, white_background, closed_mouth, full_body, sidelocks, platform_footwear, purple_ribbon, thighhighs |
| 1 | 6 |  |  |  |  |  | 1girl, black_eyes, smile, choker, looking_at_viewer, parted_bangs, pink_kimono, solo, blush, closed_mouth, simple_background, upper_body, wide_sleeves |
| 2 | 21 |  |  |  |  |  | 1girl, hetero, nipples, 1boy, penis, blush, sex, vaginal, open_mouth, pussy, solo_focus, cum, medium_breasts, nude, uncensored, large_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | kimono | long_sleeves | smile | solo | wide_sleeves | choker | looking_at_viewer | pantyhose | simple_background | standing | white_background | closed_mouth | full_body | sidelocks | platform_footwear | purple_ribbon | thighhighs | black_eyes | parted_bangs | pink_kimono | blush | upper_body | hetero | nipples | 1boy | penis | sex | vaginal | open_mouth | pussy | solo_focus | cum | medium_breasts | nude | uncensored | large_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------------|:--------|:-------|:---------------|:---------|:--------------------|:------------|:--------------------|:-----------|:-------------------|:---------------|:------------|:------------|:--------------------|:----------------|:-------------|:-------------|:---------------|:--------------|:--------|:-------------|:---------|:----------|:-------|:--------|:------|:----------|:-------------|:--------|:-------------|:------|:-----------------|:-------|:-------------|:----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | X | X | X | X | X | | X | | | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 21 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/mache_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:07:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:08:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mache/マーシュ (Pokémon)
===============================
This is the dataset of mache/マーシュ (Pokémon), containing 102 images and their tags.
The core tags of this character are 'long\_hair, black\_hair, hair\_ornament, purple\_eyes, breasts, bangs, very\_long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ffe6e7ee3544108bcc654aa84ff6d265b6ec52af
|
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_7b_400bt_preview](https://huggingface.co/openlm-research/open_llama_7b_400bt_preview)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
use_fast=False,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=512,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
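For reference, the token embedding and LM head are each 32000 × 4096 ≈ 131M parameters, and each of the 32 decoder layers contributes about 202M (4 × 4096² attention projections plus 3 × 4096 × 11008 MLP projections), for a total of roughly 6.7B parameters.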
## Model Configuration
This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
sdi21doro/test
|
[
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"region:us"
] |
2023-08-17T06:13:36+00:00
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico", "datasets": ["OpenAssistant/oasst1"]}
|
2023-08-17T06:39:55+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-apache-2.0 #gpt #llm #large language model #h2o-llmstudio #region-us
|
# Model Card
## Summary
This model was trained using H2O LLM Studio.
- Base model: openlm-research/open_llama_7b_400bt_preview
- Dataset preparation: OpenAssistant/oasst1
## Usage
To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers', 'accelerate' and 'torch' libraries installed.
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
Alternatively, if you prefer not to use 'trust_remote_code=True', you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
## Model Architecture
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.
## Model Validation
Model validation results using EleutherAI lm-evaluation-harness.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
[
"# Model Card",
"## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: openlm-research/open_llama_7b_400bt_preview\n- Dataset preparation: OpenAssistant/oasst1",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers', 'accelerate' and 'torch' libraries installed.\n\n\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, if you prefer to not use 'trust_remote_code=True' you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:\n\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Model Validation\n\nModel validation results using EleutherAI lm-evaluation-harness.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] |
[
"TAGS\n#language-English #license-apache-2.0 #gpt #llm #large language model #h2o-llmstudio #region-us \n",
"# Model Card",
"## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: openlm-research/open_llama_7b_400bt_preview\n- Dataset preparation: OpenAssistant/oasst1",
"## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers', 'accelerate' and 'torch' libraries installed.\n\n\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, if you prefer to not use 'trust_remote_code=True' you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:\n\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:",
"## Model Architecture",
"## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.",
"## Model Validation\n\nModel validation results using EleutherAI lm-evaluation-harness.",
"## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it."
] |
[
37,
3,
53,
154,
4,
42,
23,
518
] |
[
"passage: TAGS\n#language-English #license-apache-2.0 #gpt #llm #large language model #h2o-llmstudio #region-us \n# Model Card## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: openlm-research/open_llama_7b_400bt_preview\n- Dataset preparation: OpenAssistant/oasst1## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers', 'accelerate' and 'torch' libraries installed.\n\n\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, if you prefer to not use 'trust_remote_code=True' you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:\n\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:## Model Architecture## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.## Model Validation\n\nModel validation results using EleutherAI lm-evaluation-harness."
] |
3dc0295cc0d5e1c4262d098f77a4a4c353f02e59
|
# Dataset Card for Exhentai API DUMP
### Dataset Summary
A conversion of the [Exhentai API dump](https://sukebei.nyaa.si/view/3914574) to CSV files.
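Since the dump ships as CSV files, loading a shard is straightforward. A minimal sketch, assuming `pandas` and a hypothetical shard name (`galleries.csv` is a placeholder, not the actual file layout):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# download one CSV shard from the dataset repo
csv_path = hf_hub_download(
    repo_id='bogeyturn/exhentai-api-dump',
    repo_type='dataset',
    filename='galleries.csv',  # placeholder name, check the repo's file listing
)
df = pd.read_csv(csv_path)
print(df.head())
```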
|
bogeyturn/exhentai-api-dump
|
[
"size_categories:1M<n<10M",
"language:en",
"not-for-all-audiences",
"art",
"region:us"
] |
2023-08-17T06:15:31+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "tags": ["not-for-all-audiences", "art"]}
|
2023-08-28T13:07:25+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-1M<n<10M #language-English #not-for-all-audiences #art #region-us
|
# Dataset Card for Exhentai API DUMP
### Dataset Summary
A conversion of the Exhentai API dump to CSV files.
|
[
"# Dataset Card for Exhentai API DUMP",
"### Dataset Summary\n\nA conversion of Exhentai API dump to csv files"
] |
[
"TAGS\n#size_categories-1M<n<10M #language-English #not-for-all-audiences #art #region-us \n",
"# Dataset Card for Exhentai API DUMP",
"### Dataset Summary\n\nA conversion of Exhentai API dump to csv files"
] |
[
33,
11,
19
] |
[
"passage: TAGS\n#size_categories-1M<n<10M #language-English #not-for-all-audiences #art #region-us \n# Dataset Card for Exhentai API DUMP### Dataset Summary\n\nA conversion of Exhentai API dump to csv files"
] |
8d80dbc8d636da2575c04f1293a38a6d22df6e8e
|
# Dataset Card for "controlnet_fs_fetch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zobnec/controlnet_fs_fetch
|
[
"region:us"
] |
2023-08-17T06:19:06+00:00
|
{"dataset_info": {"features": [{"name": "conditioning", "dtype": "image"}, {"name": "samples", "dtype": "image"}, {"name": "reconstruction", "dtype": "image"}, {"name": "control", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1061432873.487, "num_examples": 1317}], "download_size": 1058860880, "dataset_size": 1061432873.487}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T06:30:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "controlnet_fs_fetch"
More Information needed
|
[
"# Dataset Card for \"controlnet_fs_fetch\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"controlnet_fs_fetch\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"controlnet_fs_fetch\"\n\nMore Information needed"
] |
5625e1dc0bf23bf3fa7c528062f29c01d4a59c6a
|
# Dataset Card for "find_word_baseline_10000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_baseline_10000
|
[
"region:us"
] |
2023-08-17T06:20:03+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 822035, "num_examples": 10000}, {"name": "eval_find_word", "num_bytes": 82196, "num_examples": 1000}], "download_size": 442380, "dataset_size": 904231}}
|
2023-08-17T06:20:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_baseline_10000"
More Information needed
|
[
"# Dataset Card for \"find_word_baseline_10000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_baseline_10000\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_baseline_10000\"\n\nMore Information needed"
] |
e7a1439634e084d5678ef8a93575dd39e9c52b10
|
# Dataset Card for "c_voice_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aviroes/c_voice_5000
|
[
"region:us"
] |
2023-08-17T06:26:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206612303.51124817, "num_examples": 5000}, {"name": "test", "num_bytes": 4267200.430121169, "num_examples": 100}, {"name": "validation", "num_bytes": 4222317.977288587, "num_examples": 100}], "download_size": 215608646, "dataset_size": 215101821.91865793}}
|
2023-08-17T06:26:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c_voice_5000"
More Information needed
|
[
"# Dataset Card for \"c_voice_5000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c_voice_5000\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c_voice_5000\"\n\nMore Information needed"
] |
a15362e2070caaeb36a7898df1f26056126a15ad
|
# Dataset of makomo (Pokémon)
This is the dataset of makomo (Pokémon), containing 109 images and their tags.
The core tags of this character are `glasses, hair_ornament, long_hair, hairclip, breasts, blue_eyes, black_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 109 | 57.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makomo_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 109 | 43.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makomo_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 189 | 75.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makomo_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 109 | 54.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makomo_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 189 | 89.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makomo_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/makomo_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, labcoat, hair_flower, smile, open_mouth, pokemon_(creature), purple_hair, blush, solo |
| 1 | 7 |  |  |  |  |  | 1boy, 1girl, hetero, blush, labcoat, open_mouth, penis, purple_eyes, solo_focus, heart, nipples, purple_hair, saliva, sex, censored, pussy, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | labcoat | hair_flower | smile | open_mouth | pokemon_(creature) | purple_hair | blush | solo | 1boy | hetero | penis | purple_eyes | solo_focus | heart | nipples | saliva | sex | censored | pussy | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:--------------|:--------|:-------------|:---------------------|:--------------|:--------|:-------|:-------|:---------|:--------|:--------------|:-------------|:--------|:----------|:---------|:------|:-----------|:--------|:----------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/makomo_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:27:53+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:25:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of makomo (Pokémon)
===========================
This is the dataset of makomo (Pokémon), containing 109 images and their tags.
The core tags of this character are 'glasses, hair\_ornament, long\_hair, hairclip, breasts, blue\_eyes, black\_hair, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6df116c067475caa4c85dea866f0c8deb06de275
|
# Dataset Card for "t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
LahiruLowe/t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2
|
[
"region:us"
] |
2023-08-17T06:34:08+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}, {"name": "explained_targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9821, "num_examples": 5}], "download_size": 26143, "dataset_size": 9821}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-18T05:07:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2"
More Information needed
|
[
"# Dataset Card for \"t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
[
6,
44
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"t0_explanation_targets_h2ogpt-gm-oasst1-en-2048-falcon-40b-v2\"\n\nMore Information needed"
] |
830e6160d676f4a1f426647eb58dceb0d838575b
|
# Dataset of rumia/ルーミア/루미아 (Touhou)
This is the dataset of rumia/ルーミア/루미아 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, ribbon, short_hair, hair_ribbon, red_eyes, red_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 595.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rumia_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 370.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rumia_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1281 | 802.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rumia_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 548.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rumia_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1281 | 1.03 GiB | [Download](https://huggingface.co/datasets/CyberHarem/rumia_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/rumia_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, ascot, looking_at_viewer, shirt, solo, vest, blush, open_mouth, :d, long_sleeves, simple_background, skirt_set, white_background, fang |
| 1 | 5 |  |  |  |  |  | 1girl, darkness, open_mouth, shirt, solo, vest, ascot, smile, spread_arms, fang, long_sleeves, skirt_set |
| 2 | 8 |  |  |  |  |  | 1girl, black_skirt, black_vest, full_body, long_sleeves, solo, white_shirt, red_footwear, spread_arms, white_socks, darkness, looking_at_viewer, open_mouth, mary_janes, skirt_set, :d, frilled_skirt, red_ascot |
| 3 | 12 |  |  |  |  |  | 1girl, black_skirt, long_sleeves, looking_at_viewer, open_mouth, red_ascot, solo, white_shirt, black_vest, :d, bangs, collared_shirt, spread_arms, hair_between_eyes, simple_background, blush, white_background |
| 4 | 6 |  |  |  |  |  | 1girl, black_skirt, black_vest, long_sleeves, open_mouth, red_ascot, solo, white_shirt, :d, darkness, looking_at_viewer, blush, fang, outstretched_arms, bangs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | ascot | looking_at_viewer | shirt | solo | vest | blush | open_mouth | :d | long_sleeves | simple_background | skirt_set | white_background | fang | darkness | smile | spread_arms | black_skirt | black_vest | full_body | white_shirt | red_footwear | white_socks | mary_janes | frilled_skirt | red_ascot | bangs | collared_shirt | hair_between_eyes | outstretched_arms |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:--------|:-------|:-------|:--------|:-------------|:-----|:---------------|:--------------------|:------------|:-------------------|:-------|:-----------|:--------|:--------------|:--------------|:-------------|:------------|:--------------|:---------------|:--------------|:-------------|:----------------|:------------|:--------|:-----------------|:--------------------|:--------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | X | X | X | | X | | X | | X | | X | X | X | X | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | X | | | X | X | X | | X | | | X | | X | X | X | X | X | X | X | X | X | X | | | | |
| 3 | 12 |  |  |  |  |  | X | | X | | X | | X | X | X | X | X | | X | | | | X | X | X | | X | | | | | X | X | X | X | |
| 4 | 6 |  |  |  |  |  | X | | X | | X | | X | X | X | X | | | | X | X | | | X | X | | X | | | | | X | X | | | X |
|
CyberHarem/rumia_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:34:50+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:11:57+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of rumia/ルーミア/루미아 (Touhou)
==================================
This is the dataset of rumia/ルーミア/루미아 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, ribbon, short\_hair, hair\_ribbon, red\_eyes, red\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0238c52f1b5c543dad31f986073bdf5177350e6a
|
# Oscar 2023_01 DE Deduplicated
This is a filtered and deduplicated version of the German subset of the [23.01 OSCAR Corpus](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301), a large, crawled, and processed text dataset
curated by the OSCAR project (Open Super-large Crawled Aggregated coRpus).
OSCAR 23.01 is the January 2023 version of the OSCAR Corpus based on the November/December 2022 dump of Common Crawl.
While being quite similar to OSCAR 22.01, it contains several new features, including KenLM-based adult content detection, [...].
It was deduplicated using a MinHash implementation from the `text-dedup` library by `ChenghaoMou`, available on [GitHub](https://github.com/ChenghaoMou/text-dedup), with the following command:
```bash
python -m text_dedup.minhash --path oscar-corpus/OSCAR-2301 --name "de" --cache_dir "../cache" --split "train" --column "text" --batch_size 10000 --output output/minhash_oscar_de_dedup
```
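The deduplicated output can then be reloaded from the output directory; a short sketch, assuming `text-dedup` writes a standard `datasets` `save_to_disk` directory:
```python
from datasets import load_from_disk

# reload the MinHash-deduplicated split produced by the command above
ds = load_from_disk("output/minhash_oscar_de_dedup")
print(len(ds))
```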
## Deduplication statistics
| Step | Runtime |
|---|---|
| Loading | 10.64s |
| MinHashing | 10574.02s |
| Clustering | 12187.65s |
| Filtering | 4198.70s |
| Saving | 3560.06s |
| Total | 30531.07s |
| Dataset | Number of documents |
|---|---|
| Before | 103299215 |
| After | 53172498 |
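In other words, MinHash deduplication retained roughly 51.5% of the documents, flagging about 48.5% as near-duplicates.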
## Dataset scheme:
```json
{
"text":"English sentence\nphrase en français\n????????????", // (1)
"meta":{
"warc_headers":{ // (2)
"warc-identified-content-language":"fra,eng",
"warc-target-uri":"https://fr.wikipedia.org/wiki/...",
"warc-record-id":"<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
"warc-type":"conversion",
"content-length":"35298", // (3)
"warc-refers-to":"<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
"warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
"warc-date":"2022-11-26T09:45:47Z",
"content-type":"text/plain"
},
"identification":{ // (4)
"label":"fr",
"prob":0.8938327
},
"harmful_pp":4063.1814, // (5)
"tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
"quality_warnings":[ // (7)
"short_sentences",
"header",
"footer"
],
"categories":[ // (8)
"examen_pix",
"liste_bu"
],
"sentence_identifications":[ // (9)
{
"label":"fr",
"prob":0.99837273
},
{
"label":"en",
"prob":0.9992377
},
null
]
}
}
```
## Filtering
Filtered with the following code (hyperparameters might vary slightly):
```python
from datasets import load_dataset, load_from_disk
import time
# Categories from https://dsi.ut-capitole.fr/blacklists/index_en.php
blocked_categories = set([
"adult", # Some adult site from erotic to hard pornography
"aggressif", # Sites that are aggressive or violent
"malware", # Any website which delivers malware
"phishing", # Same as above
"cryptojacking", # Mining site by hijacking
"dangerous_material", # Sites which describe how to make bomb and some dangerous material
])
# Blocked quality filters
blocked_quality_warnings = set([
"tiny", # The document has a low (≤ 5) number of lines
"short sentences", # The document has a high number (≥ 50%) of short lines
# "header", # Indicates that low-quality content could be present at the start of the document
# "footer", # Indicates that low-quality content could be present at the tail of the document
"noisy", # Indicates that the document is noisy
])
harmful_ppl_threshold = 500 # Determines the threshold for harmful ppl (lower is more harmful) TODO
language_prob_threshold = 0.9 # Determines the threshold for language identification (higher is more likely) TODO
blocked_urls = set([
"de.wikipedia.org", # Wikipedia (because we already have it)
"tagesschau.de", # Tagesschau (because we already have it)
])
def filter_content(example):
has_blocked_category = False
if "categories" in example["meta"] and example["meta"]["categories"] is not None:
has_blocked_category = len(set(example["meta"]["categories"]).intersection(blocked_categories)) > 0
has_blocked_quality_warnings = False
if "quality_warnings" in example["meta"] and example["meta"]["quality_warnings"] is not None:
has_blocked_quality_warnings = len(set(example["meta"]["quality_warnings"]).intersection(blocked_quality_warnings)) > 0
has_blocked_url = False
if "warc_headers" in example["meta"] and "warc-target-uri" in example["meta"]["warc_headers"] and example["meta"]["warc_headers"]["warc-target-uri"] is not None:
has_blocked_url = any([url in example["meta"]["warc_headers"]["warc-target-uri"] for url in blocked_urls])
has_harmful_ppl = example["meta"]["harmful_pp"] < harmful_ppl_threshold if "harmful_pp" in example["meta"] else False
has_bad_german_identification = example["meta"]["identification"]["prob"] < language_prob_threshold if "identification" in example["meta"] else True
return not (has_blocked_category or has_blocked_quality_warnings or has_blocked_url or has_harmful_ppl or has_bad_german_identification)
t_start = time.time()
ds = load_dataset("bjoernp/oscar2023_de_deduped", split="train", num_proc=128)
print(f"Loading took {time.time() - t_start}s")
print(f"Dataset size before filtering: {len(ds)}")
t_start = time.time()
ds = ds.filter(filter_content, num_proc=128)
print(f"Filtering took {time.time() - t_start}s")
print(f"Dataset size after filtering: {len(ds)}")
```
## Licensing
We follow the original licensing scheme of the OSCAR Corpus (reproduced below from the [OSCAR Corpus](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)). Note that we cannot reasonably comply with takedown requests:
```
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, the OSCAR project, Inria, the University of Mannheim and DFKI GmbH have waived all copyright and related or neighboring rights to OSCAR
This work is published from: France and Germany.
[[[
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
]]]
```
## Citation
```
@ARTICLE{2022arXiv221210440J,
author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = dec,
eid = {arXiv:2212.10440},
pages = {arXiv:2212.10440},
doi = {10.48550/arXiv.2212.10440},
archivePrefix = {arXiv},
eprint = {2212.10440},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{abadji-etal-2022-towards,
title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus",
author = "Abadji, Julien and
Ortiz Suarez, Pedro and
Romary, Laurent and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.463",
pages = "4344--4355",
abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.",
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{kreutzer-etal-2022-quality,
title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets",
author = {Kreutzer, Julia and
Caswell, Isaac and
Wang, Lisa and
Wahab, Ahsan and
van Esch, Daan and
Ulzii-Orshikh, Nasanbayar and
Tapo, Allahsera and
Subramani, Nishant and
Sokolov, Artem and
Sikasote, Claytone and
Setyawan, Monang and
Sarin, Supheakmungkol and
Samb, Sokhar and
Sagot, Beno{\^\i}t and
Rivera, Clara and
Rios, Annette and
Papadimitriou, Isabel and
Osei, Salomey and
Suarez, Pedro Ortiz and
Orife, Iroro and
Ogueji, Kelechi and
Rubungo, Andre Niyongabo and
Nguyen, Toan Q. and
M{\"u}ller, Mathias and
M{\"u}ller, Andr{\'e} and
Muhammad, Shamsuddeen Hassan and
Muhammad, Nanda and
Mnyakeni, Ayanda and
Mirzakhalov, Jamshidbek and
Matangira, Tapiwanashe and
Leong, Colin and
Lawson, Nze and
Kudugunta, Sneha and
Jernite, Yacine and
Jenny, Mathias and
Firat, Orhan and
Dossou, Bonaventure F. P. and
Dlamini, Sakhile and
de Silva, Nisansa and
{\c{C}}abuk Ball{\i}, Sakine and
Biderman, Stella and
Battisti, Alessia and
Baruwa, Ahmed and
Bapna, Ankur and
Baljekar, Pallavi and
Azime, Israel Abebe and
Awokoya, Ayodele and
Ataman, Duygu and
Ahia, Orevaoghene and
Ahia, Oghenefego and
Agrawal, Sweta and
Adeyemi, Mofetoluwa},
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.4",
doi = "10.1162/tacl_a_00447",
pages = "50--72",
abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.",
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^\i}t Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
|
bjoernp/oscar2023_deduped_filtered_1.1
|
[
"size_categories:10M<n<100M",
"language:de",
"arxiv:2212.10440",
"region:us"
] |
2023-08-17T06:35:53+00:00
|
{"language": ["de"], "size_categories": ["10M<n<100M"]}
|
2023-11-13T09:18:16+00:00
|
[
"2212.10440"
] |
[
"de"
] |
TAGS
#size_categories-10M<n<100M #language-German #arxiv-2212.10440 #region-us
|
Oscar 2023\_01 DE Deduplicated
==============================
This is a filtered and deduplicated version of the German subset of the 23.01 OSCAR Corpus, a large, crawled, and processed text dataset
curated by the OSCAR project (Open Super-large Crawled Aggregated coRpus).
OSCAR 23.01 is the January 2023 version of the OSCAR Corpus based on the November/December 2022 dump of Common Crawl.
While being quite similar to OSCAR 22.01, it contains several new features, including KenLM-based adult content detection, [...].
It was deduplicated using a MinHash implementation from the 'text-dedup' library by 'ChenghaoMou', available on GitHub, with the following command:
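(The exact invocation was not preserved in this card; the sketch below follows the 'text_dedup.minhash' usage from the text-dedup README, with the dataset name, output path, and batch size as placeholders rather than the values actually used.)
```bash
# placeholder invocation of text-dedup's MinHash deduplication;
# the real dataset name, paths, and hyperparameters may have differed
python -m text_dedup.minhash \
  --path "oscar-corpus/OSCAR-2301" \
  --name "de" \
  --split "train" \
  --cache_dir "./cache" \
  --output "./oscar_de_minhash_dedup" \
  --column "text" \
  --batch_size 10000
```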
Deduplication statistics
------------------------
Dataset scheme:
---------------
Filtering
---------
Filtered with the following code (hyperparameters might vary slightly):
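(The original snippet was likewise not preserved; below is a minimal sketch of such a filter. The field names 'metadata'/'meta', 'quality_warnings', and 'harmful_pp' follow the OSCAR 23.01 documentation and are assumptions about the loader's schema, and the perplexity threshold is a placeholder, not the hyperparameter actually used.)
```python
from datasets import load_dataset

# OSCAR 23.01 is gated on the Hub; streaming avoids a full download
ds = load_dataset("oscar-corpus/OSCAR-2301", "de", split="train", streaming=True)

HARMFUL_PP_THRESHOLD = 25_000  # placeholder hyperparameter


def keep(doc):
    # field names follow the OSCAR 23.01 document scheme; the Hub loader
    # may expose them as "metadata" or "meta"
    meta = doc.get("metadata") or doc.get("meta") or {}
    # drop documents carrying any of OSCAR's quality annotations
    # (tiny, short_sentences, header, footer, noisy, adult, ...)
    if meta.get("quality_warnings"):
        return False
    # harmful_pp is the KenLM perplexity against an adult-content LM;
    # LOW values look like adult content, so keep only high-scoring documents
    harmful_pp = meta.get("harmful_pp")
    if harmful_pp is not None and harmful_pp < HARMFUL_PP_THRESHOLD:
        return False
    return True


filtered = ds.filter(keep)
```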
Licensing
---------
We follow the original licensing scheme of the Oscar Corpus.
(Quoted from the OSCAR Corpus; note that we cannot reasonably comply with takedown requests):
|
[] |
[
"TAGS\n#size_categories-10M<n<100M #language-German #arxiv-2212.10440 #region-us \n"
] |
[
31
] |
[
"passage: TAGS\n#size_categories-10M<n<100M #language-German #arxiv-2212.10440 #region-us \n"
] |
bf47380185e8896faf152849319efd5be49d83ac
|
# Dataset of lesoir/ルスワール (Pokémon)
This is the dataset of lesoir/ルスワール (Pokémon), containing 45 images and their tags.
The core tags of this character are `blue_eyes, breasts, blue_hair, hat, short_hair, top_hat, bangs, blue_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 45 | 41.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lesoir_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 45 | 25.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lesoir_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 104 | 51.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lesoir_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 45 | 36.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lesoir_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 104 | 68.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lesoir_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/lesoir_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
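The IMG+TXT packages need no special loader: they are plain archives in which, conventionally for such packs, each image sits next to a same-named `.txt` file holding its tags. A minimal sketch under that assumption (the 800px variant is used here for illustration):
```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download and extract one of the IMG+TXT packages
zip_file = hf_hub_download(
    repo_id='CyberHarem/lesoir_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
pairs_dir = 'pairs_dir'
os.makedirs(pairs_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(pairs_dir)

# pair each image with its sibling .txt tag file (assumed naming scheme)
for root, _, files in os.walk(pairs_dir):
    for name in files:
        stem, ext = os.path.splitext(name)
        if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
            tag_file = os.path.join(root, stem + '.txt')
            if os.path.exists(tag_file):
                with open(tag_file, encoding='utf-8') as f:
                    print(os.path.join(root, name), '->', f.read().strip())
```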
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bare_shoulders, open_mouth, pantyhose, elbow_gloves, solo, black_gloves, blue_dress, looking_at_viewer |
| 1 | 9 |  |  |  |  |  | 1girl, hetero, penis, solo_focus, blush, nipples, 1boy, medium_breasts, fellatio, bar_censor, cum_on_breasts, facial, gloves, pussy, sex, testicles |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | open_mouth | pantyhose | elbow_gloves | solo | black_gloves | blue_dress | looking_at_viewer | hetero | penis | solo_focus | blush | nipples | 1boy | medium_breasts | fellatio | bar_censor | cum_on_breasts | facial | gloves | pussy | sex | testicles |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------------|:------------|:---------------|:-------|:---------------|:-------------|:--------------------|:---------|:--------|:-------------|:--------|:----------|:-------|:-----------------|:-----------|:-------------|:-----------------|:---------|:---------|:--------|:------|:------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/lesoir_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:36:34+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:18:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of lesoir/ルスワール (Pokémon)
=================================
This is the dataset of lesoir/ルスワール (Pokémon), containing 45 images and their tags.
The core tags of this character are 'blue\_eyes, breasts, blue\_hair, hat, short\_hair, top\_hat, bangs, blue\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
149a4ba8c1bba2bf942a6bdfc9da058d26e66e15
|
# Dataset of matiere/マチエール (Pokémon)
This is the dataset of matiere/マチエール (Pokémon), containing 33 images and their tags.
The core tags of this character are `long_hair, black_hair, dark_skin, dark-skinned_female, bangs, blue_eyes, twintails, eyelashes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 33 | 21.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matiere_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 33 | 15.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matiere_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 70 | 28.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matiere_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 33 | 20.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matiere_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 70 | 36.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matiere_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matiere_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------|
| 0 | 33 |  |  |  |  |  | 1girl, smile, open_mouth, blush, skirt, pantyhose, looking_at_viewer, shirt, solo, simple_background, pokemon_(creature), sweater |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | open_mouth | blush | skirt | pantyhose | looking_at_viewer | shirt | solo | simple_background | pokemon_(creature) | sweater |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:--------|:--------|:------------|:--------------------|:--------|:-------|:--------------------|:---------------------|:----------|
| 0 | 33 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/matiere_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:43:29+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:36:57+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of matiere/マチエール (Pokémon)
==================================
This is the dataset of matiere/マチエール (Pokémon), containing 33 images and their tags.
The core tags of this character are 'long\_hair, black\_hair, dark\_skin, dark-skinned\_female, bangs, blue\_eyes, twintails, eyelashes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a93caa52154a964b262607bfd2f512f2640502c7
|
# Dataset Card for Meteocat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
This is a synthetic dataset in which each example contains the following fields:
- Instructions like "El dissabte a la nit, quin temps farà a Mont-real?"
- Context like "Day: dissabte | Location: Mont-real | mati: el cel estarà molt ennuvolat | tarda: plourà escadusserament | nit: el cel tendirà a estar cobert de núvols | temp: Lleugera pujada de les temperatures"
- Response like "A la nit el cel estarà ennuvolat"
Instructions for answering "yes" or "no" questions have also been added.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for text-generation and named-entity-recognition.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
The dataset consists of examples in a jsonl format with 3 fields each: instruction, context and response.
### Data Instances
Changed the original context for a more linguistically natural one: "tarda del divendres a Montesquiu al mati s'esperen més nuvolades, a la tarda guspirejarà amb insistència, a la nit podria guspirejar, i Temperatures sense canvis"
{
  "instruction": "Quin temps farà a la nit a Camarasa dijous?",
  "context": "Day: dijous | Location: Camarasa | mati: el cel anirà encapotant-se cada cop més | tarda: el sol anirà guanyant terreny als núvols | nit: cel clar | temp: Temperatures sense canvis",
  "response": "A la nit, cel ben clar"
}
### Data Fields
- instruction: Weather-related question.
- context: Information in the format "Day: [DAY] | Location: [LOCATION] | mati: [WEATHER FORECAST] | tarda: [WEATHER FORECAST] | nit: [WEATHER FORECAST] | temp: [TEMPERATURE TREND]".
- response: Weather forecast answering the question.
### Data Splits
* dev.json: 6873 examples
* test.json: 1279 examples
* train.json: 61776 examples
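The splits can be loaded directly with `datasets`; a minimal sketch (assuming the split files keep the names listed above), including how the pipe-delimited context can be parsed into a dict:
```python
from datasets import load_dataset

# JSONL files are resolved relative to the Hub repository
ds = load_dataset(
    "crodri/meteocat",
    data_files={"train": "train.json", "dev": "dev.json", "test": "test.json"},
)

example = ds["train"][0]
print(example["instruction"])

# the context field is pipe-delimited; split it into key/value pairs
context = dict(part.split(": ", 1) for part in example["context"].split(" | "))
print(context["Day"], context["Location"], context["nit"])
```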
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons Attribution Non-commercial No-Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
|
crodri/meteocat
|
[
"task_categories:text-generation",
"task_categories:token-classification",
"task_categories:question-answering",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ca",
"region:us"
] |
2023-08-17T06:51:04+00:00
|
{"language": ["ca"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "token-classification", "question-answering"], "task_ids": ["named-entity-recognition"], "pretty_name": "synthetic_meteocat"}
|
2023-11-30T06:57:40+00:00
|
[] |
[
"ca"
] |
TAGS
#task_categories-text-generation #task_categories-token-classification #task_categories-question-answering #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-Catalan #region-us
|
# Dataset Card for Meteocat
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Point of Contact: langtech@URL
### Dataset Summary
This is a synthetic dataset in which each example contains the following fields:
- Instructions like "El dissabte a la nit, quin temps farà a Mont-real?"
- Context like "Day: dissabte | Location: Mont-real | mati: el cel estarà molt ennuvolat | tarda: plourà escadusserament | nit: el cel tendirà a estar cobert de núvols | temp: Lleugera pujada de les temperatures"
- Response like "A la nit el cel estarà ennuvolat"
Instructions for answering "yes" or "no" questions have also been added.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for text-generation and named-entity-recognition.
### Languages
The dataset is in Catalan ('ca-CA').
## Dataset Structure
The dataset consists of examples in a jsonl format with 3 fields each: instruction, context and response.
### Data Instances
Changed the original context for a more linguistically natural one: "tarda del divendres a Montesquiu al mati s'esperen més nuvolades, a la tarda guspirejarà amb insistència, a la nit podria guspirejar, i Temperatures sense canvis"
{
  "instruction": "Quin temps farà a la nit a Camarasa dijous?",
  "context": "Day: dijous | Location: Camarasa | mati: el cel anirà encapotant-se cada cop més | tarda: el sol anirà guanyant terreny als núvols | nit: cel clar | temp: Temperatures sense canvis",
  "response": "A la nit, cel ben clar"
}
### Data Fields
- instruction: Weather-related question.
- context: Information in the format "Day: [DAY] | Location: [LOCATION] | mati: [WEATHER FORECAST] | tarda: [WEATHER FORECAST] | nit: [WEATHER FORECAST] | temp: [TEMPERATURE TREND]".
- response: Weather forecast answering the question.
### Data Splits
* dev.json: 6873 examples
* test.json: 1279 examples
* train.json: 61776 examples
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
Creative Commons Attribution Non-commercial No-Derivatives 4.0 International.
### Contributions
[N/A]
|
[
"# Dataset Card for Meteocat",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: langtech@URL",
"### Dataset Summary\n\nThis is a synthetic dataset that contains examples, each of them, with the following fields:\n- Instructions like \"El dissabte a la nit, quin temps farà a Mont-real?\"\n- Context like \"Day: dissabte | Location: Mont-real | mati: el cel estarà molt ennuvolat | tarda: plourà escadusserament | nit: el cel tendirà a estar cobert de núvols | temp: Lleugera pujada de les temperatures\"\n- Response like \"A la nit el cel estarà ennuvolat\"\n\nAdded instructions for answering \"yes\" or \"no\" questions.",
"### Supported Tasks and Leaderboards\n\nThis dataset is mainly intended to train models for text-generation and named-entity-recognition.",
"### Languages\n\nThe dataset is in Catalan ('ca-CA').",
"## Dataset Structure\n\nThe dataset consists of examples in a jsonl format with 3 fields each: instruction, context and response.",
"### Data Instances\nChanged origina context for a more linguistically natural one: \"tarda del divendres a Montesquiu al mati s'esperen més nuvolades, a la tarda guspirejarà amb insistència, a la nit podria guspirejar, i Temperatures sense canvis\"\n{\n \"instruction\": \"Quin temps farà a la nit a Camarasa dijous?\", \n xxx \"context\": \"Day: dijous | Location: Camarasa | mati: el cel anirà encapotant-se cada cop més | tarda: el sol anirà guanyant terreny als núvols | nit: cel clar | temp: Temperatures sense canvis\", \n \"response\": \"A la nit, cel ben clar\"\n}",
"### Data Fields\n- instruction: Weather-related question.\nxxx - context: Information in the format \"Day: [DAY] | Location: [LOCATION] | mati: [WEATHER FORECAST] | tarda: [WEATHER FORECAST] | nit: [WEATHER FORECAST]\".\n \n- response: Whether forecast answering the question.",
"### Data Splits\n\n* URL: 6873 examples\n* URL: 1279 examples\n* URL: 61776 examples",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n???? Creative Commons Attribution Non-commercial No-Derivatives 4.0 International.",
"### Contributions\n\n[N/A]"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-token-classification #task_categories-question-answering #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-Catalan #region-us \n",
"# Dataset Card for Meteocat",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: langtech@URL",
"### Dataset Summary\n\nThis is a synthetic dataset that contains examples, each of them, with the following fields:\n- Instructions like \"El dissabte a la nit, quin temps farà a Mont-real?\"\n- Context like \"Day: dissabte | Location: Mont-real | mati: el cel estarà molt ennuvolat | tarda: plourà escadusserament | nit: el cel tendirà a estar cobert de núvols | temp: Lleugera pujada de les temperatures\"\n- Response like \"A la nit el cel estarà ennuvolat\"\n\nAdded instructions for answering \"yes\" or \"no\" questions.",
"### Supported Tasks and Leaderboards\n\nThis dataset is mainly intended to train models for text-generation and named-entity-recognition.",
"### Languages\n\nThe dataset is in Catalan ('ca-CA').",
"## Dataset Structure\n\nThe dataset consists of examples in a jsonl format with 3 fields each: instruction, context and response.",
"### Data Instances\nChanged origina context for a more linguistically natural one: \"tarda del divendres a Montesquiu al mati s'esperen més nuvolades, a la tarda guspirejarà amb insistència, a la nit podria guspirejar, i Temperatures sense canvis\"\n{\n \"instruction\": \"Quin temps farà a la nit a Camarasa dijous?\", \n xxx \"context\": \"Day: dijous | Location: Camarasa | mati: el cel anirà encapotant-se cada cop més | tarda: el sol anirà guanyant terreny als núvols | nit: cel clar | temp: Temperatures sense canvis\", \n \"response\": \"A la nit, cel ben clar\"\n}",
"### Data Fields\n- instruction: Weather-related question.\nxxx - context: Information in the format \"Day: [DAY] | Location: [LOCATION] | mati: [WEATHER FORECAST] | tarda: [WEATHER FORECAST] | nit: [WEATHER FORECAST]\".\n \n- response: Whether forecast answering the question.",
"### Data Splits\n\n* URL: 6873 examples\n* URL: 1279 examples\n* URL: 61776 examples",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n???? Creative Commons Attribution Non-commercial No-Derivatives 4.0 International.",
"### Contributions\n\n[N/A]"
] |
[
81,
8,
73,
13,
151,
36,
17,
33,
166,
83,
26,
5,
68,
24,
10
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-token-classification #task_categories-question-answering #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-Catalan #region-us \n# Dataset Card for Meteocat## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Point of Contact: langtech@URL### Dataset Summary\n\nThis is a synthetic dataset that contains examples, each of them, with the following fields:\n- Instructions like \"El dissabte a la nit, quin temps farà a Mont-real?\"\n- Context like \"Day: dissabte | Location: Mont-real | mati: el cel estarà molt ennuvolat | tarda: plourà escadusserament | nit: el cel tendirà a estar cobert de núvols | temp: Lleugera pujada de les temperatures\"\n- Response like \"A la nit el cel estarà ennuvolat\"\n\nAdded instructions for answering \"yes\" or \"no\" questions.### Supported Tasks and Leaderboards\n\nThis dataset is mainly intended to train models for text-generation and named-entity-recognition.### Languages\n\nThe dataset is in Catalan ('ca-CA').## Dataset Structure\n\nThe dataset consists of examples in a jsonl format with 3 fields each: instruction, context and response."
] |
5fd9e035b3a2e26321dce240a495c4e886cb3bb2
|
# Dataset of yamato (Pokémon)
This is the dataset of yamato (Pokémon), containing 58 images and their tags.
The core tags of this character are `long_hair, breasts, blonde_hair, earrings, purple_eyes, large_breasts, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 58 | 45.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 58 | 27.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 103 | 49.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 58 | 41.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 103 | 69.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yamato_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, solo, medium_breasts, nipples, nude, jewelry, female_pubic_hair, lipstick, navel, pussy, orange_hair |
| 1 | 8 |  |  |  |  |  | 1girl, hetero, jewelry, penis, pussy, 1boy, solo_focus, uncensored, nipples, sex, blush, anal, cum, completely_nude, elbow_gloves, navel |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | medium_breasts | nipples | nude | jewelry | female_pubic_hair | lipstick | navel | pussy | orange_hair | hetero | penis | 1boy | solo_focus | uncensored | sex | blush | anal | cum | completely_nude | elbow_gloves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:----------|:-------|:----------|:--------------------|:-----------|:--------|:--------|:--------------|:---------|:--------|:-------|:-------------|:-------------|:------|:--------|:-------|:------|:------------------|:---------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | | X | | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/yamato_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T06:53:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:47:13+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yamato (Pokémon)
===========================
This is the dataset of yamato (Pokémon), containing 58 images and their tags.
The core tags of this character are 'long\_hair, breasts, blonde\_hair, earrings, purple\_eyes, large\_breasts, twintails', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
fff66c1a67ed876e47a967d29a766e01ad7067e5
|
# Dataset of beauty (Pokémon)
This is the dataset of beauty (Pokémon), containing 13 images and their tags.
The core tags of this character are `brown_hair, hat, long_hair, sun_hat, green_eyes, breasts, earrings, large_breasts, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 13 | 11.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/beauty_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 13 | 6.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/beauty_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 27 | 12.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/beauty_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 13 | 10.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/beauty_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 27 | 16.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/beauty_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/beauty_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, solo, black_choker, collarbone, jewelry, looking_at_viewer, long_sleeves, bare_shoulders, smile, blush, cleavage, open_mouth, white_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_choker | collarbone | jewelry | looking_at_viewer | long_sleeves | bare_shoulders | smile | blush | cleavage | open_mouth | white_dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:-------------|:----------|:--------------------|:---------------|:-----------------|:--------|:--------|:-----------|:-------------|:--------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/beauty_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T07:07:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:53:54+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of beauty (Pokémon)
===========================
This is the dataset of beauty (Pokémon), containing 13 images and their tags.
The core tags of this character are 'brown\_hair, hat, long\_hair, sun\_hat, green\_eyes, breasts, earrings, large\_breasts, white\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
aafc955be54848a1b5e7f7acc33a9181aff0d502
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Ebo88/llamae
|
[
"region:us"
] |
2023-08-17T07:11:53+00:00
|
{}
|
2023-08-17T07:15:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
93bd86912f4fe0e3a7e81a6fc78a29fb0eaf2eda
|
# Dataset Card for "guanaco-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/guanaco-m
|
[
"region:us"
] |
2023-08-17T07:18:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15877537, "num_examples": 9846}], "download_size": 9237302, "dataset_size": 15877537}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T14:36:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-m"
More Information needed
|
[
"# Dataset Card for \"guanaco-m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-m\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-m\"\n\nMore Information needed"
] |
ae877c7f2ec35267e3851900a1d3c8224b692aa4
|
# Dataset Card for "duped-num-frequencies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/duped-num-frequencies
|
[
"region:us"
] |
2023-08-17T07:20:29+00:00
|
{"dataset_info": {"features": [{"name": "TokenID", "dtype": "int64"}, {"name": "Frequency", "dtype": "int64"}], "splits": [{"name": "memorized", "num_bytes": 960000, "num_examples": 60000}, {"name": "non_memorized", "num_bytes": 960000, "num_examples": 60000}, {"name": "total", "num_bytes": 960000, "num_examples": 60000}], "download_size": 1965812, "dataset_size": 2880000}}
|
2023-08-17T07:20:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "duped-num-frequencies"
More Information needed
|
[
"# Dataset Card for \"duped-num-frequencies\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"duped-num-frequencies\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"duped-num-frequencies\"\n\nMore Information needed"
] |
943e610c375781b14897d768a4cbb9d518fb8aa1
|
# Dataset Card for "deduped-num-frequencies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/deduped-num-frequencies
|
[
"region:us"
] |
2023-08-17T07:20:58+00:00
|
{"dataset_info": {"features": [{"name": "TokenID", "dtype": "int64"}, {"name": "Frequency", "dtype": "int64"}], "splits": [{"name": "memorized", "num_bytes": 960000, "num_examples": 60000}, {"name": "non_memorized", "num_bytes": 960000, "num_examples": 60000}, {"name": "total", "num_bytes": 960000, "num_examples": 60000}], "download_size": 1974196, "dataset_size": 2880000}}
|
2023-08-17T07:21:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "deduped-num-frequencies"
More Information needed
|
[
"# Dataset Card for \"deduped-num-frequencies\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"deduped-num-frequencies\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"deduped-num-frequencies\"\n\nMore Information needed"
] |
6784f317462e030d382e553b7b7f90353228c9a3
|
# Dataset of kazami_yuuka/風見幽香/카자미유카 (Touhou)
This is the dataset of kazami_yuuka/風見幽香/카자미유카 (Touhou), containing 500 images and their tags.
The core tags of this character are `green_hair, short_hair, red_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 739.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazami_yuuka_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 460.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazami_yuuka_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1183 | 893.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazami_yuuka_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 676.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazami_yuuka_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1183 | 1.17 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kazami_yuuka_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kazami_yuuka_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, plaid_skirt, plaid_vest, skirt_set, solo, ascot, sunflower, parasol, smile |
| 1 | 5 |  |  |  |  |  | 1girl, ascot, plaid_skirt, plaid_vest, shirt, simple_background, skirt_set, solo, white_background, smile, medium_breasts, parasol |
| 2 | 7 |  |  |  |  |  | 1girl, ascot, plaid_vest, shirt, solo, upper_body, looking_at_viewer, smile |
| 3 | 5 |  |  |  |  |  | 1girl, bangs, collared_shirt, long_sleeves, looking_at_viewer, plaid_skirt, red_skirt, simple_background, white_background, white_shirt, closed_mouth, hair_between_eyes, plaid_vest, red_vest, solo, frills, standing, yellow_ascot, black_pantyhose, blush, cowboy_shot, long_skirt, skirt_set, wavy_hair |
| 4 | 6 |  |  |  |  |  | 1girl, cleavage, open_shirt, solo, black_panties, large_breasts, black_bra, navel, black_thighhighs, blush, lingerie, looking_at_viewer, plaid_vest, sitting |
| 5 | 5 |  |  |  |  |  | 1girl, beach, looking_at_viewer, solo, cleavage, large_breasts, red_bikini, smile, day, plaid_bikini, side-tie_bikini_bottom, water, blush, flower, medium_breasts, navel, sky |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | plaid_skirt | plaid_vest | skirt_set | solo | ascot | sunflower | parasol | smile | shirt | simple_background | white_background | medium_breasts | upper_body | looking_at_viewer | bangs | collared_shirt | long_sleeves | red_skirt | white_shirt | closed_mouth | hair_between_eyes | red_vest | frills | standing | yellow_ascot | black_pantyhose | blush | cowboy_shot | long_skirt | wavy_hair | cleavage | open_shirt | black_panties | large_breasts | black_bra | navel | black_thighhighs | lingerie | sitting | beach | red_bikini | day | plaid_bikini | side-tie_bikini_bottom | water | flower | sky |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:------------|:-------|:--------|:------------|:----------|:--------|:--------|:--------------------|:-------------------|:-----------------|:-------------|:--------------------|:--------|:-----------------|:---------------|:------------|:--------------|:---------------|:--------------------|:-----------|:---------|:-----------|:---------------|:------------------|:--------|:--------------|:-------------|:------------|:-----------|:-------------|:----------------|:----------------|:------------|:--------|:-------------------|:-----------|:----------|:--------|:-------------|:------|:---------------|:-------------------------|:--------|:---------|:------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | | X | X | | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | | X | | | | | | | | | | X | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | X | | | | X | | | | X | | X | | | | | | | | | | | | | X | | | | X | | | X | | X | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/kazami_yuuka_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T07:26:02+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:28:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kazami\_yuuka/風見幽香/카자미유카 (Touhou)
============================================
This is the dataset of kazami\_yuuka/風見幽香/카자미유카 (Touhou), containing 500 images and their tags.
The core tags of this character are 'green\_hair, short\_hair, red\_eyes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
565e871e3b97e013b8353ab742593043f6cbdb20
|
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AndyLiu0104/Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817
|
[
"region:us"
] |
2023-08-17T07:27:08+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18877490.625, "num_examples": 11075}], "download_size": 11614140, "dataset_size": 18877490.625}}
|
2023-08-17T07:27:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817"
More Information needed
|
[
"# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0817\"\n\nMore Information needed"
] |
b03fa0a7d338f210f7bae9ff9fdbe0f36d62a8b4
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
SunidhiSriram/twcs
|
[
"region:us"
] |
2023-08-17T07:30:58+00:00
|
{}
|
2023-08-17T07:44:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a995453f947a31b227ce0fa64301c40fdecff1e6
|
# Dataset Card for "ner_jobkeyword_dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yenstdi/ner_jobkeyword_dev
|
[
"region:us"
] |
2023-08-17T07:35:13+00:00
|
{"dataset_info": {"features": [{"name": "words", "sequence": "string"}, {"name": "ner", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 79694, "num_examples": 845}, {"name": "val", "num_bytes": 8450, "num_examples": 91}, {"name": "test", "num_bytes": 7967, "num_examples": 91}], "download_size": 23195, "dataset_size": 96111, "viewer": true}}
|
2023-08-21T04:41:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ner_jobkeyword_dev"
More Information needed
|
[
"# Dataset Card for \"ner_jobkeyword_dev\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ner_jobkeyword_dev\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ner_jobkeyword_dev\"\n\nMore Information needed"
] |
7bb0a925ece961c7f60114ff16e5532e9f5890b5
|
# OpenPlatypus
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80% (a sketch of this filtering step follows the table below):
| Dataset Name | License Type |
|--------------------------------------------------------------|--------------|
| [PRM800K](https://github.com/openai/prm800k) | MIT |
| [ScienceQA](https://github.com/lupantech/ScienceQA) | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| [SciBench](https://github.com/mandyyyyii/scibench) | MIT |
| [ReClor](https://whyu.me/reclor/) | Non-commercial |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | MIT |
| [`nuprl/leetcode-solutions-python-testgen-gpt4`](https://huggingface.co/datasets/nuprl/leetcode-solutions-python-testgen-gpt4/viewer/nuprl--leetcode-solutions-python-testgen-gpt4/train?p=1) | None listed |
| [`jondurbin/airoboros-gpt4-1.4.1`](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) | other |
| [`TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k`](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k/viewer/TigerResearch--tigerbot-kaggle-leetcodesolutions-en-2k/train?p=2) | apache-2.0 |
| [openbookQA](https://huggingface.co/datasets/openbookqa/viewer/additional/train?row=35) | apache-2.0 |
| [ARB](https://arb.duckai.org) | MIT |
| [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | apache-2.0 |
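As a concrete illustration of the filtering step above, the snippet below is a minimal sketch assuming the `sentence-transformers` library: the 80% threshold is taken from this card, while the embedding model and the sample questions are illustrative assumptions, not the exact Platypus pipeline.
```python
# Minimal sketch of similarity-based question filtering (not the exact
# Platypus pipeline). Assumes `sentence-transformers` is installed; the
# model name and sample questions are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

questions = [
    "What is the derivative of x^2?",
    "Differentiate x squared.",    # near-duplicate of the first question
    "Name the capital of France.",
]
embeddings = model.encode(questions, convert_to_tensor=True)

kept = []  # indices of questions retained so far
for i in range(len(questions)):
    # keep a question only if it is at most 80% similar to every kept question
    if all(util.cos_sim(embeddings[i], embeddings[j]).item() <= 0.80 for j in kept):
        kept.append(i)

filtered = [questions[i] for i in kept]
print(filtered)  # expect the paraphrased duplicate to be dropped
```
This greedy pass keeps the first occurrence of each near-duplicate cluster; per the description above, the actual pipeline also applied a keyword search before the embedding step.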
## Data Contamination Check
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
## Model Info
Please see models at [`garage-bAInd`](https://huggingface.co/garage-bAInd).
## Training and filtering code
Please see the [Platypus GitHub repo](https://github.com/arielnlee/Platypus).
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
year={2023}
}
```
```bibtex
@article{lightman2023lets,
title={Let's Verify Step by Step},
author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
  journal={arXiv preprint arXiv:2305.20050},
year={2023}
}
```
```bibtex
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
  author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{wang2023scibench,
title={SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
year={2023},
  eprint={2307.10635},
  archivePrefix={arXiv}
}
```
```bibtex
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
```
```bibtex
@article{chen2023theoremqa,
title={TheoremQA: A Theorem-driven Question Answering dataset},
  author={Chen, Wenhu and Yin, Ming and Ku, Max and Wan, Elaine and Ma, Xueguang and Xu, Jianyu and Xia, Tony and Wang, Xinyi and Lu, Pan},
  journal={arXiv preprint arXiv:2305.12524},
year={2023}
}
```
```bibtex
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
```bibtex
@misc{sawada2023arb,
title={ARB: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John J. Nay and Kshitij Gupta and Aran Komatsuzaki},
  eprint={2307.13692},
  archivePrefix={arXiv},
year={2023}
}
```
|
botp/Open-Platypus
|
[
"size_categories:10K<n<100K",
"language:en",
"arxiv:2308.07317",
"region:us"
] |
2023-08-17T07:56:35+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30418784, "num_examples": 24926}], "download_size": 15545530, "dataset_size": 30418784}, "duplicated_from": "garage-bAInd/Open-Platypus"}
|
2023-08-17T07:56:35+00:00
|
[
"2308.07317"
] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #arxiv-2308.07317 #region-us
|
OpenPlatypus
============
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
Data Contamination Check
------------------------
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our paper and project webpage for additional information.
Model Info
----------
Please see models at 'garage-bAInd'.
Training and filtering code
---------------------------
Please see the Platypus GitHub repo.
Citations
---------
|
[] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #arxiv-2308.07317 #region-us \n"
] |
[
31
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #arxiv-2308.07317 #region-us \n"
] |
e311997eea993361c1228054a7999d262918b092
|
# Dataset Card for "preprocessed_data_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aviroes/preprocessed_data_5000
|
[
"region:us"
] |
2023-08-17T08:00:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 33938240, "num_examples": 100}, {"name": "train", "num_bytes": 1669453080, "num_examples": 5000}, {"name": "validation", "num_bytes": 33638336, "num_examples": 100}], "download_size": 1726620668, "dataset_size": 1737029656}}
|
2023-08-17T08:01:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "preprocessed_data_5000"
More Information needed
|
[
"# Dataset Card for \"preprocessed_data_5000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"preprocessed_data_5000\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"preprocessed_data_5000\"\n\nMore Information needed"
] |
ecb500aea7ca475f074d91a61060e5c27173109d
|
# Dataset Card for Bengali ASR Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
|
parambharat/bengali_asr_corpus
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|openslr",
"language:bn",
"license:cc-by-4.0",
"region:us"
] |
2023-08-17T08:03:50+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["bn"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|openslr"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Bengali ASR Corpus", "tags": []}
|
2023-08-29T04:12:50+00:00
|
[] |
[
"bn"
] |
TAGS
#task_categories-automatic-speech-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|openslr #language-Bengali #license-cc-by-4.0 #region-us
|
# Dataset Card for Bengali ASR Corpus
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @parambharat for adding this dataset.
|
[
"# Dataset Card for [Bengali Asr Corpus]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @parambharat for adding this dataset."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|openslr #language-Bengali #license-cc-by-4.0 #region-us \n",
"# Dataset Card for [Bengali Asr Corpus]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @parambharat for adding this dataset."
] |
[
88,
12,
125,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
18
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|openslr #language-Bengali #license-cc-by-4.0 #region-us \n# Dataset Card for [Bengali Asr Corpus]## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @parambharat for adding this dataset."
] |
a1724d0abf62fdaa5c12a0f60118392dc543365f
|
# Dataset of aloe/アロエ (Pokémon)
This is the dataset of aloe/アロエ (Pokémon), containing 73 images and their tags.
The core tags of this character are `breasts, dark_skin, dark-skinned_female, green_hair, very_dark_skin, afro, big_hair, large_breasts, hairband, green_eyes, long_hair, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 73 | 55.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloe_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 73 | 38.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloe_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 125 | 60.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloe_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 73 | 52.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloe_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 125 | 77.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aloe_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aloe_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, smile, lipstick, solo, pants, shirt, short_sleeves |
| 1 | 5 |  |  |  |  |  | 1girl, huge_breasts, smile, solo, dark_nipples, lipstick, navel, female_pubic_hair, looking_at_viewer, nude, pussy, spread_legs, ;), armpits, censored, light_areolae, one_eye_closed, sweat, swimsuit |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | lipstick | solo | pants | shirt | short_sleeves | huge_breasts | dark_nipples | navel | female_pubic_hair | looking_at_viewer | nude | pussy | spread_legs | ;) | armpits | censored | light_areolae | one_eye_closed | sweat | swimsuit |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------|:-------|:--------|:--------|:----------------|:---------------|:---------------|:--------|:--------------------|:--------------------|:-------|:--------|:--------------|:-----|:----------|:-----------|:----------------|:-----------------|:--------|:-----------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/aloe_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T08:04:03+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:02:39+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of aloe/アロエ (Pokémon)
=============================
This is the dataset of aloe/アロエ (Pokémon), containing 73 images and their tags.
The core tags of this character are 'breasts, dark\_skin, dark-skinned\_female, green\_hair, very\_dark\_skin, afro, big\_hair, large\_breasts, hairband, green\_eyes, long\_hair, blue\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
8ff4d15330cf29ac721c5dcc722c72c5eba49024
|
# Dataset of reiuji_utsuho/霊烏路空/레이우지우츠호 (Touhou)
This is the dataset of reiuji_utsuho/霊烏路空/레이우지우츠호 (Touhou), containing 500 images and their tags.
The core tags of this character are `long_hair, bow, hair_bow, green_bow, wings, red_eyes, third_eye, black_hair, black_wings, brown_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 680.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reiuji_utsuho_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 425.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reiuji_utsuho_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1119 | 784.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reiuji_utsuho_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 621.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reiuji_utsuho_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1119 | 1.01 GiB | [Download](https://huggingface.co/datasets/CyberHarem/reiuji_utsuho_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/reiuji_utsuho_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, arm_cannon, cape, green_skirt, open_mouth, short_sleeves, solo, shirt, smile, blush |
| 1 | 6 |  |  |  |  |  | 1girl, arm_cannon, cape, green_skirt, solo, grin |
| 2 | 16 |  |  |  |  |  | 1girl, arm_cannon, black_thighhighs, cape, green_skirt, solo, smile, zettai_ryouiki |
| 3 | 6 |  |  |  |  |  | 1girl, arm_cannon, cape, green_skirt, mismatched_footwear, smile, solo, sun |
| 4 | 5 |  |  |  |  |  | 1girl, arm_cannon, black_thighhighs, cape, green_skirt, looking_at_viewer, shirt, solo, zettai_ryouiki, puffy_short_sleeves, bird_wings, open_mouth, white_background |
| 5 | 8 |  |  |  |  |  | 1girl, arm_cannon, bangs, bird_wings, closed_mouth, collared_shirt, green_skirt, looking_at_viewer, solo, starry_sky_print, white_cape, white_shirt, black_socks, frilled_skirt, full_body, kneehighs, mismatched_footwear, puffy_short_sleeves, frilled_shirt_collar, smile, feathered_wings, simple_background, single_shoe, white_background, buttons, hair_between_eyes, black_footwear, brown_footwear, very_long_hair |
| 6 | 5 |  |  |  |  |  | 1girl, arm_cannon, bird_wings, black_socks, collared_shirt, feathered_wings, frilled_shirt_collar, frilled_skirt, green_skirt, looking_at_viewer, puffy_short_sleeves, solo, starry_sky_print, white_cape, white_shirt, bangs, kneehighs, blouse, open_mouth, shoes, feet_out_of_frame, foot_out_of_frame, medium_breasts, very_long_hair |
| 7 | 6 |  |  |  |  |  | 1girl, arm_cannon, bangs, bird_wings, collared_shirt, green_skirt, puffy_short_sleeves, solo, white_cape, white_shirt, closed_mouth, feathered_wings, hair_between_eyes, looking_at_viewer, smile, center_frills, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | arm_cannon | cape | green_skirt | open_mouth | short_sleeves | solo | shirt | smile | blush | grin | black_thighhighs | zettai_ryouiki | mismatched_footwear | sun | looking_at_viewer | puffy_short_sleeves | bird_wings | white_background | bangs | closed_mouth | collared_shirt | starry_sky_print | white_cape | white_shirt | black_socks | frilled_skirt | full_body | kneehighs | frilled_shirt_collar | feathered_wings | simple_background | single_shoe | buttons | hair_between_eyes | black_footwear | brown_footwear | very_long_hair | blouse | shoes | feet_out_of_frame | foot_out_of_frame | medium_breasts | center_frills | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:--------------|:-------------|:----------------|:-------|:--------|:--------|:--------|:-------|:-------------------|:-----------------|:----------------------|:------|:--------------------|:----------------------|:-------------|:-------------------|:--------|:---------------|:-----------------|:-------------------|:-------------|:--------------|:--------------|:----------------|:------------|:------------|:-----------------------|:------------------|:--------------------|:--------------|:----------|:--------------------|:-----------------|:-----------------|:-----------------|:---------|:--------|:--------------------|:--------------------|:-----------------|:----------------|:-------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | X | X | | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | | | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | | | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | | X | | | X | | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | X | X | | X | | | | | | | | | X | X | X | | X | | X | X | X | X | X | X | | X | X | X | | | | | | | X | X | X | X | X | X | | |
| 7 | 6 |  |  |  |  |  | X | X | | X | | | X | | X | | | | | | | X | X | X | | X | X | X | | X | X | | | | | | X | | | | X | | | | | | | | | X | X |
|
CyberHarem/reiuji_utsuho_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T08:12:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T12:35:55+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of reiuji\_utsuho/霊烏路空/레이우지우츠호 (Touhou)
===============================================
This is the dataset of reiuji\_utsuho/霊烏路空/레이우지우츠호 (Touhou), containing 500 images and their tags.
The core tags of this character are 'long\_hair, bow, hair\_bow, green\_bow, wings, red\_eyes, third\_eye, black\_hair, black\_wings, brown\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
336d65ee7aef8b317bf586049c5cea1f6257f5c5
|
# Dataset Card for Visual Attributes in the Wild (VAW)
## Dataset Description
**Homepage:** http://vawdataset.com/
**Repository:** https://github.com/adobe-research/vaw_dataset;
- The raw dataset files will be downloaded from: https://github.com/adobe-research/vaw_dataset/tree/main/data, where one can also find additional metadata files such as attribute types.
- The train split loaded from this HF dataset is a concatenation of train_part1.json and train_part2.json (see the loading sketch below).
- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.
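For orientation, the notes above translate into a short loading snippet. This is a minimal sketch under the assumption that the repo hosting this card loads with the standard `datasets` API; the split and field names follow the bullet points above.
```python
# Minimal sketch of loading VAW from the Hugging Face Hub, assuming the
# standard `datasets` API; split and field names follow this card's notes.
from datasets import load_dataset

vaw = load_dataset("mikewang/vaw")

train = vaw["train"]        # concatenation of train_part1.json and train_part2.json
example = train[0]
print(example["image_id"])  # maps to an image id in Visual Genome v1.4
```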
**LICENSE:** https://github.com/adobe-research/vaw_dataset/blob/main/LICENSE.md
**Paper Citation:**
```
@InProceedings{Pham_2021_CVPR,
author = {Pham, Khoi and Kafle, Kushal and Lin, Zhe and Ding, Zhihong and Cohen, Scott and Tran, Quan and Shrivastava, Abhinav},
title = {Learning To Predict Visual Attributes in the Wild},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {13018-13028}
}
```
## Dataset Summary
A large-scale visual attributes dataset with explicitly labelled positive and negative attributes.
- 620 Unique Attributes including color, shape, texture, posture and many others
- 260,895 Instances of different objects
- 2260 Unique Objects observed in the wild
- 72,274 Images from the Visual Genome Dataset
- 4 evaluation metrics for measuring multi-faceted performance
|
mikewang/vaw
|
[
"language:en",
"region:us"
] |
2023-08-17T08:19:28+00:00
|
{"language": ["en"], "pretty_name": "Visual Attributes in the Wild (VAW)"}
|
2023-08-18T02:10:46+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #region-us
|
# Dataset Card for Visual Attributes in the Wild (VAW)
## Dataset Description
Homepage: URL
Repository: URL
- The raw dataset files will be downloaded from: URL, where one can also find additional metadata files such as attribute types.
- The train split loaded from this HF dataset is a concatenation of train_part1.json and train_part2.json.
- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.
LICENSE: URL
Paper Citation:
## Dataset Summary
A large-scale visual attributes dataset with explicitly labelled positive and negative attributes.
- 620 Unique Attributes including color, shape, texture, posture and many others
- 260,895 Instances of different objects
- 2260 Unique Objects observed in the wild
- 72,274 Images from the Visual Genome Dataset
- 4 evaluation metrics for measuring multi-faceted performance
|
[
"# Dataset Card for Visual Attributes in the Wild (VAW)",
"## Dataset Description\n\nHomepage: URL\n\nRepository: URL\n- The raw dataset files will be downloaded from: URL where one can also find additional metadata files such as attribute types. \n- The train split loaded from this hf dataset is a concatenation of the train_part1.json and train_part2.json. \n- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.\n\nLICENSE: URL\n\nPaper Citation:",
"## Dataset Summary\nA large scale visual attributes dataset with explicitly labelled positive and negative attributes.\n\n- 620 Unique Attributes including color, shape, texture, posture and many others\n- 260,895 Instances of different objects\n- 2260 Unique Objects observed in the wild\n- 72,274 Images from the Visual Genome Dataset\n- 4 different evaluation metrics for measuring multi-faceted performance metrics"
] |
[
"TAGS\n#language-English #region-us \n",
"# Dataset Card for Visual Attributes in the Wild (VAW)",
"## Dataset Description\n\nHomepage: URL\n\nRepository: URL\n- The raw dataset files will be downloaded from: URL where one can also find additional metadata files such as attribute types. \n- The train split loaded from this hf dataset is a concatenation of the train_part1.json and train_part2.json. \n- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.\n\nLICENSE: URL\n\nPaper Citation:",
"## Dataset Summary\nA large scale visual attributes dataset with explicitly labelled positive and negative attributes.\n\n- 620 Unique Attributes including color, shape, texture, posture and many others\n- 260,895 Instances of different objects\n- 2260 Unique Objects observed in the wild\n- 72,274 Images from the Visual Genome Dataset\n- 4 different evaluation metrics for measuring multi-faceted performance metrics"
] |
[
10,
16,
107,
97
] |
[
"passage: TAGS\n#language-English #region-us \n# Dataset Card for Visual Attributes in the Wild (VAW)## Dataset Description\n\nHomepage: URL\n\nRepository: URL\n- The raw dataset files will be downloaded from: URL where one can also find additional metadata files such as attribute types. \n- The train split loaded from this hf dataset is a concatenation of the train_part1.json and train_part2.json. \n- The image_id field corresponds to respective image ids in the v1.4 Visual Genome dataset.\n\nLICENSE: URL\n\nPaper Citation:## Dataset Summary\nA large scale visual attributes dataset with explicitly labelled positive and negative attributes.\n\n- 620 Unique Attributes including color, shape, texture, posture and many others\n- 260,895 Instances of different objects\n- 2260 Unique Objects observed in the wild\n- 72,274 Images from the Visual Genome Dataset\n- 4 different evaluation metrics for measuring multi-faceted performance metrics"
] |
455df478db2fcf025eefccb221961b8dd4cb9bbf
|
# Dataset Card for "three"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
xxxlllfff/three
|
[
"region:us"
] |
2023-08-17T08:20:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "du", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 232, "num_examples": 4}], "download_size": 1930, "dataset_size": 232}}
|
2023-08-17T08:20:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three"
More Information needed
|
[
"# Dataset Card for \"three\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three\"\n\nMore Information needed"
] |
ab905ce98d52513840a9ba54cf55f403352e7310
|
# Dataset of hinata (Pokémon)
This is the dataset of hinata (Pokémon), containing 36 images and their tags.
The core tags of this character are `blue_hair, breasts, headband, red_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 36 | 19.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinata_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 36 | 14.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinata_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 51 | 22.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinata_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 36 | 18.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinata_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 51 | 27.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinata_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hinata_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
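As a follow-up, the same source can be used to survey tag frequencies across the extracted images. A small sketch reusing `dataset_dir` from the snippet above, assuming `item.meta['tags']` iterates over tag names (a list, or a dict's keys):
```python
from collections import Counter

from waifuc.source import LocalSource

# Tally how often each tag occurs across the dataset.
tag_counts = Counter()
for item in LocalSource(dataset_dir):
    for tag in item.meta['tags']:  # iteration covers both a list and a dict's keys
        tag_counts[tag] += 1

print(tag_counts.most_common(10))
```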
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, cameltoe, fingerless_gloves, pokemon_(creature), spandex, thighhighs, ass, covered_nipples, torn_clothes, jacket, medium_breasts, open_shirt, tight, bike_shorts, blush, cleavage, shoes, sweat |
| 1 | 7 |  |  |  |  |  | 1girl, solo, thighhighs, fingerless_gloves, leotard, ass, blush, jacket, sweat, short_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cameltoe | fingerless_gloves | pokemon_(creature) | spandex | thighhighs | ass | covered_nipples | torn_clothes | jacket | medium_breasts | open_shirt | tight | bike_shorts | blush | cleavage | shoes | sweat | solo | leotard | short_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:---------------------|:----------|:-------------|:------|:------------------|:---------------|:---------|:-----------------|:-------------|:--------|:--------------|:--------|:-----------|:--------|:--------|:-------|:----------|:-------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 1 | 7 |  |  |  |  |  | X | | X | | | X | X | | | X | | | | | X | | | X | X | X | X |
|
CyberHarem/hinata_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T08:20:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:34:04+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hinata (Pokémon)
===========================
This is the dataset of hinata (Pokémon), containing 36 images and their tags.
The core tags of this character are 'blue\_hair, breasts, headband, red\_eyes, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7e2b68ea96123bc66d08076d3bd950bed6f3ba9c
|
# Dataset Card for "research_paper_multi_label_data_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dhiruHF/research_paper_multi_label_data_balanced
|
[
"region:us"
] |
2023-08-17T08:20:59+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2637884, "num_examples": 1985}], "download_size": 1359885, "dataset_size": 2637884}}
|
2023-08-17T08:21:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "research_paper_multi_label_data_balanced"
More Information needed
|
[
"# Dataset Card for \"research_paper_multi_label_data_balanced\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"research_paper_multi_label_data_balanced\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"research_paper_multi_label_data_balanced\"\n\nMore Information needed"
] |
fd49ae0afbf2854f172f96ad4975cb462f84bb64
|
# Dataset Card for CEIL
## Dataset Description
- **Website:** https://aina.bsc.es
- **Point of Contact:** [Carlos Rodríguez-Penagos]([email protected])
### Dataset Summary
NERC for understanding meteorological queries for an AI assistant
This dataset was developed by [BSC LangTech Unit](https://langtech.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Named Entity Recognition, Language Model
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Com O
serà O
a O
l O
mati interval
el O
temps O
a O
O location
Grove location
el O
dijous day
? O
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
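A minimal reader sketch for these files, using the tab separation shown in the example above and assuming blank lines mark query boundaries (the boundary convention and the file name in the usage comment are assumptions):
```python
def read_two_column(path):
    """Parse a two-column token/IOB file into a list of (form, tag) sequences."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():  # assumed query boundary
                if current:
                    sentences.append(current)
                    current = []
                continue
            form, tag = line.split("\t")  # word form, IOB tag
            current.append((form, tag))
    if current:
        sentences.append(current)
    return sentences

# e.g. read_two_column("train.conll")[0] -> [('Com', 'O'), ('serà', 'O'), ...]
```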
### Data Splits
An 85/15 split into train and development sets, balanced for all NERC tags.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan.
### Source Data
Synthetic data
#### Initial Data Collection and Normalization
The word tokenization used to convert offset annotations into CONLL files was done using spaCy.
#### Who are the source language producers?
### Annotations
#### Annotation process
We adapted the NER labels from to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
### Citation Information
```
```
### Contributions
[N/A]
|
crodri/ccma_meteo_instruct
|
[
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:mit",
"region:us"
] |
2023-08-17T08:22:33+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ca"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "ccma_meteo_instruct"}
|
2023-11-30T08:46:37+00:00
|
[] |
[
"ca"
] |
TAGS
#annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #language-Catalan #license-mit #region-us
|
# Dataset Card for CEIL
## Dataset Description
- Website: URL
- Point of Contact: Carlos Rodríguez-Penagos
### Dataset Summary
NERC for understanding meteorological queries for an AI assistant
This dataset was developed by BSC LangTech Unit as part of the Projecte AINA, to enrich the Catalan Language Understanding Benchmark (CLUB).
### Supported Tasks and Leaderboards
Named Entity Recognition, Language Model
### Languages
The dataset is in Catalan ('ca-CA').
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Com O
serà O
a O
l O
mati interval
el O
temps O
a O
O location
Grove location
el O
dijous day
? O
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
### Data Splits
An 85/15 split into train and development sets, balanced for all NERC tags.
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan.
### Source Data
Synthetic data
#### Initial Data Collection and Normalization
The word tokenization used to convert offset annotations into CONLL files was done using spaCy.
#### Who are the source language producers?
### Annotations
#### Annotation process
We adapted the NER labels from to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing information
This work is licensed under a <a rel="license" href="URL">Attribution 4.0 International License</a>.
### Contributions
[N/A]
|
[
"# Dataset Card for CEIL",
"## Dataset Description\n\n- Website: URL\n- Point of Contact: Carlos Rodríguez-Penagos",
"### Dataset Summary\n\nNERC for understanding meteorological queries for an AI assistant \n\nThis dataset was developed by BSC LangTech Unit as part of the Projecte AINA, to enrich the Catalan Language Understanding Benchmark (CLUB).",
"### Supported Tasks and Leaderboards\n\nNamed Entities Recognition, Language Model",
"### Languages\n\nThe dataset is in Catalan ('ca-CA').",
"## Dataset Structure",
"### Data Instances\n\nThree two-column files, one for each split. \n\n<pre>\nCom\tO\nserà\tO\na\tO\nl\tO\nmati\tinterval\nel\tO\ntemps\tO\na\tO\nO\tlocation\nGrove\tlocation\nel\tO\ndijous\tday\n?\tO\n</pre>",
"### Data Fields\n\nEvery file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.",
"### Data Splits\n\n85/15 Train and development sets, balanced for all NERC tags.",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan.",
"### Source Data\n\nSynthetic data",
"#### Initial Data Collection and Normalization\n\nThe word tokenization used to convert offset annotations into CONLL files was done using spacy",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nWe adapted the NER labels from to a token-per-line, multi-column format.",
"#### Who are the annotators?\n\nOriginal annotators from",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing information\n\nThis work is licensed under a <a rel=\"license\" href=\"URL 4.0 International License</a>.",
"### Contributions\n\n[N/A]"
] |
[
"TAGS\n#annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #language-Catalan #license-mit #region-us \n",
"# Dataset Card for CEIL",
"## Dataset Description\n\n- Website: URL\n- Point of Contact: Carlos Rodríguez-Penagos",
"### Dataset Summary\n\nNERC for understanding meteorological queries for an AI assistant \n\nThis dataset was developed by BSC LangTech Unit as part of the Projecte AINA, to enrich the Catalan Language Understanding Benchmark (CLUB).",
"### Supported Tasks and Leaderboards\n\nNamed Entities Recognition, Language Model",
"### Languages\n\nThe dataset is in Catalan ('ca-CA').",
"## Dataset Structure",
"### Data Instances\n\nThree two-column files, one for each split. \n\n<pre>\nCom\tO\nserà\tO\na\tO\nl\tO\nmati\tinterval\nel\tO\ntemps\tO\na\tO\nO\tlocation\nGrove\tlocation\nel\tO\ndijous\tday\n?\tO\n</pre>",
"### Data Fields\n\nEvery file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.",
"### Data Splits\n\n85/15 Train and development sets, balanced for all NERC tags.",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan.",
"### Source Data\n\nSynthetic data",
"#### Initial Data Collection and Normalization\n\nThe word tokenization used to convert offset annotations into CONLL files was done using spacy",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nWe adapted the NER labels from to a token-per-line, multi-column format.",
"#### Who are the annotators?\n\nOriginal annotators from",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing information\n\nThis work is licensed under a <a rel=\"license\" href=\"URL 4.0 International License</a>.",
"### Contributions\n\n[N/A]"
] |
[
54,
7,
19,
53,
20,
17,
6,
52,
38,
21,
5,
22,
8,
32,
10,
5,
30,
14,
15,
8,
29,
13,
12,
5,
43,
32,
10
] |
[
"passage: TAGS\n#annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #language-Catalan #license-mit #region-us \n# Dataset Card for CEIL## Dataset Description\n\n- Website: URL\n- Point of Contact: Carlos Rodríguez-Penagos### Dataset Summary\n\nNERC for understanding meteorological queries for an AI assistant \n\nThis dataset was developed by BSC LangTech Unit as part of the Projecte AINA, to enrich the Catalan Language Understanding Benchmark (CLUB).### Supported Tasks and Leaderboards\n\nNamed Entities Recognition, Language Model### Languages\n\nThe dataset is in Catalan ('ca-CA').## Dataset Structure### Data Instances\n\nThree two-column files, one for each split. \n\n<pre>\nCom\tO\nserà\tO\na\tO\nl\tO\nmati\tinterval\nel\tO\ntemps\tO\na\tO\nO\tlocation\nGrove\tlocation\nel\tO\ndijous\tday\n?\tO\n</pre>### Data Fields\n\nEvery file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.### Data Splits\n\n85/15 Train and development sets, balanced for all NERC tags.## Dataset Creation### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan.### Source Data\n\nSynthetic data#### Initial Data Collection and Normalization\n\nThe word tokenization used to convert offset annotations into CONLL files was done using spacy#### Who are the source language producers?### Annotations#### Annotation process\n\nWe adapted the NER labels from to a token-per-line, multi-column format.#### Who are the annotators?\n\nOriginal annotators from### Personal and Sensitive Information\n\nNo personal or sensitive information included.## Considerations for Using the Data### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.### Discussion of Biases\n\n[N/A]### Other Known Limitations\n\n[N/A]## Additional Information"
] |
56389fd723671e9136a1c0592fe626a65629e3f1
|
# Dataset of azusa/アズサ (Pokémon)
This is the dataset of azusa/アズサ (Pokémon), containing 11 images and their tags.
The core tags of this character are `breasts, short_hair, medium_breasts, orange_hair, brown_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:---------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 4.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/azusa_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 11 | 3.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/azusa_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 12 | 5.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/azusa_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 11 | 4.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/azusa_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 12 | 6.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/azusa_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/azusa_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
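The other packages in the table above download the same way; only the filename changes. For example, the 800px bundle:
```python
from huggingface_hub import hf_hub_download

zip_800 = hf_hub_download(
    repo_id='CyberHarem/azusa_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',  # any filename from the package table works
)
```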
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, nipples, solo, blush, navel, smile, jewelry, pussy, shirt_lift |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | nipples | solo | blush | navel | smile | jewelry | pussy | shirt_lift |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:-------|:--------|:--------|:--------|:----------|:--------|:-------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
CyberHarem/azusa_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T08:30:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:50:43+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of azusa/アズサ (Pokémon)
==============================
This is the dataset of azusa/アズサ (Pokémon), containing 11 images and their tags.
The core tags of this character are 'breasts, short\_hair, medium\_breasts, orange\_hair, brown\_eyes, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
1e7704a795562088d9f381ce24c21215b41c51ce
|
# Dataset of pansy (Pokémon)
This is the dataset of pansy (Pokémon), containing 25 images and their tags.
The core tags of this character are `breasts, brown_hair, green_eyes, earrings, large_breasts, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 25 | 13.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pansy_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 25 | 10.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pansy_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 52 | 18.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pansy_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 25 | 12.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pansy_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 52 | 22.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/pansy_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/pansy_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
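For the IMG+TXT packages, each image is expected to sit next to a caption file. A sketch pairing them after extracting such a zip into `dataset_dir`, assuming tags live in a same-stem `.txt` file (the naming scheme is an assumption based on the IMG+TXT type):
```python
import os

for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() == '.txt':
        continue  # skip the tag files themselves
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if os.path.isfile(txt_path):  # assumed: one plain-text tag file per image
        with open(txt_path, encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```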
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, nipples, jewelry, open_mouth, pussy, smile, navel, blush, eyelashes, solo, uncensored, completely_nude, day, long_hair, looking_at_viewer, lying, outdoors, shiny_skin, spread_legs, tongue |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | nipples | jewelry | open_mouth | pussy | smile | navel | blush | eyelashes | solo | uncensored | completely_nude | day | long_hair | looking_at_viewer | lying | outdoors | shiny_skin | spread_legs | tongue |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:----------|:-------------|:--------|:--------|:--------|:--------|:------------|:-------|:-------------|:------------------|:------|:------------|:--------------------|:--------|:-----------|:-------------|:--------------|:---------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/pansy_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T08:40:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:09:29+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of pansy (Pokémon)
==========================
This is the dataset of pansy (Pokémon), containing 25 images and their tags.
The core tags of this character are 'breasts, brown\_hair, green\_eyes, earrings, large\_breasts, medium\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |