# Dataset of mima/魅魔 (Touhou)

This is the dataset of mima/魅魔 (Touhou), containing 500 images and their tags.

The core tags of this character are `green_hair, long_hair, hat, green_eyes, wizard_hat, bow, ribbon, breasts, ghost_tail`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:------------|
| raw | 500 | 414.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mima_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 300.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mima_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 892 | 525.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mima_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 389.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mima_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 892 | 646.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mima_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mima_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_capelet, blue_headwear, blue_skirt, holding_staff, long_sleeves, looking_at_viewer, solo, frills, smile, white_shirt, yellow_bowtie, blue_vest, crescent, closed_mouth, parted_bangs, very_long_hair, white_ribbon |
| 1 | 19 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, staff, capelet, crescent, dress, smile, star_(symbol) |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_capelet, smile, solo, staff |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blue_capelet, holding_staff, long_sleeves, looking_at_viewer, solo, blue_dress, bowtie, parted_bangs, smile, star_(symbol), blue_headwear, crescent_print, closed_mouth, demon_wings, frilled_dress, purple_cape, simple_background, very_long_hair, white_ribbon |
| 4 | 10 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blue_capelet, large_breasts, hair_intakes, hair_ribbon, solo, underboob, no_headwear, white_ribbon, blue_skirt, looking_at_viewer, closed_mouth, v-shaped_eyebrows, smile, upper_body |
| 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blue_sailor_collar, blue_skirt, solo, white_shirt, red_neckerchief, capelet, holding_knife, looking_at_viewer, blood_on_knife, short_sleeves, white_headwear |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | large_breasts, nipples, nude, 1girl, censored, convenient_censoring, solo, ass, breast_hold, multiple_girls, open_mouth |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_capelet | blue_headwear | blue_skirt | holding_staff | long_sleeves | looking_at_viewer | solo | frills | smile | white_shirt | yellow_bowtie | blue_vest | crescent | closed_mouth | parted_bangs | very_long_hair | white_ribbon | staff | capelet | dress | star_(symbol) | blue_dress | bowtie | crescent_print | demon_wings | frilled_dress | purple_cape | simple_background | large_breasts | hair_intakes | hair_ribbon | underboob | no_headwear | v-shaped_eyebrows | upper_body | blue_sailor_collar | red_neckerchief | holding_knife | blood_on_knife | short_sleeves | white_headwear | nipples | nude | censored | convenient_censoring | ass | breast_hold | multiple_girls | open_mouth |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | | | | X | | X | | | | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | X | X | X | X | | X | | | | | X | X | X | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | | | X | X | | X | | | | | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | | | X | X | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
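The IMG+TXT packages above pair each image with a plain-text tag file rather than waifuc metadata. Below is a minimal sketch for browsing one of them; the `dataset-800.zip` filename comes from the package table, while the assumption that every image shares its stem with a `.txt` tag file (and uses the `.png` extension) is our reading of the IMG+TXT layout, not something the card spells out:

```python
import os
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

# download one of the IMG+TXT packages listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/mima_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract it into its own directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its same-named .txt tag file (assumed layout;
# adjust the glob pattern if the archive uses .jpg or .webp images)
for image_path in sorted(Path(dataset_dir).rglob('*.png')):
    tag_path = image_path.with_suffix('.txt')
    if tag_path.exists():
        print(image_path.name, '->', tag_path.read_text(encoding='utf-8').strip())
```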
Repository: CyberHarem/mima_touhou
Hub tags: task_categories:text-to-image, size_categories:n<1K, license:mit, art, not-for-all-audiences, region:us
Created at: 2023-08-18T22:29:08+00:00
Card metadata: {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
Last modified: 2024-01-14T23:02:54+00:00
# Dataset of mugetsu/夢月 (Touhou)

This is the dataset of mugetsu/夢月 (Touhou), containing 165 images and their tags.

The core tags of this character are `blonde_hair, short_hair, yellow_eyes, maid_headdress, bow, red_bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:------------|
| raw | 165 | 154.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mugetsu_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 165 | 111.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mugetsu_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 308 | 205.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mugetsu_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 165 | 144.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mugetsu_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 308 | 256.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mugetsu_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mugetsu_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, apron, blue_dress, maid, puffy_short_sleeves, solo, blue_footwear, full_body, looking_at_viewer, frills, simple_background, white_background, white_socks, mary_janes, red_bowtie, closed_mouth, smile, bobby_socks |
| 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_dress, maid, puffy_short_sleeves, solo, red_bowtie, frilled_dress, waist_apron, smile, white_apron, blush, open_mouth, frilled_apron, looking_at_viewer, white_background, medium_breasts, back_bow, closed_mouth, cowboy_shot |
| 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, dress, maid, solo, apron, smile |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | apron | blue_dress | maid | puffy_short_sleeves | solo | blue_footwear | full_body | looking_at_viewer | frills | simple_background | white_background | white_socks | mary_janes | red_bowtie | closed_mouth | smile | bobby_socks | frilled_dress | waist_apron | white_apron | blush | open_mouth | frilled_apron | medium_breasts | back_bow | cowboy_shot | dress |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | X | X | | | X | | | X | | | X | X | X | | X | X | X | X | X | X | X | X | X | |
| 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | | X | | | | | | | | | | | X | | | | | | | | | | | X |
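Because `LocalSource` exposes each item's tags through `item.meta['tags']`, the clusters above double as simple outfit filters. A hedged sketch, assuming the raw archive has already been extracted to `dataset_dir` as in the loading code above, and that `meta['tags']` iterates over tag names (this works the same whether tags are stored as a list or as a tag-to-score mapping):

```python
from waifuc.source import LocalSource

# a few tags taken from cluster 1 in the table above
wanted = {'maid', 'blue_dress', 'waist_apron'}

source = LocalSource('dataset_dir')
for item in source:
    tags = set(item.meta['tags'])  # iterating a dict yields its keys
    if wanted <= tags:
        print(item.meta['filename'])
```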
Repository: CyberHarem/mugetsu_touhou
Hub tags: task_categories:text-to-image, size_categories:n<1K, license:mit, art, not-for-all-audiences, region:us
Created at: 2023-08-18T22:32:06+00:00
Card metadata: {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
Last modified: 2024-01-15T03:08:13+00:00
# Dataset Card for "github-issues"

annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Hugging Face Github Issues
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
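The metadata below records a single `default` config with a 2,000-example `train` split, so the dataset should load directly with 🤗 Datasets. A minimal sketch; the field names (`title`, `comments`, `is_pull_request`) are taken from the features schema in the metadata:

```python
from datasets import load_dataset

# one default config, train split only (per the metadata below)
issues = load_dataset('ptah23/github-issues', split='train')
print(issues)

# separate genuine issues from pull requests via the is_pull_request flag
real_issues = issues.filter(lambda row: not row['is_pull_request'])
print(f'{len(real_issues)} issues out of {len(issues)} rows')

example = real_issues[0]
print(example['title'])
print(len(example['comments']), 'comments')  # comments is a sequence of strings
```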
Repository: ptah23/github-issues
Hub tags: region:us
Created at: 2023-08-18T22:42:45+00:00
Card metadata: {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "null"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "null"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "updated_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "closed_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "float64"}, {"name": "draft", "dtype": "float64"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "float64"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 15538609, "num_examples": 2000}], "download_size": 4270838, "dataset_size": 15538609}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
Last modified: 2023-08-18T23:12:50+00:00
# Dataset of tamatsukuri_misumaru (Touhou)

This is the dataset of tamatsukuri_misumaru (Touhou), containing 125 images and their tags.

The core tags of this character are `blonde_hair, short_hair, brown_hair, brown_eyes, hair_ornament, earrings, multicolored_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:------------|
| raw | 125 | 106.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamatsukuri_misumaru_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 125 | 75.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamatsukuri_misumaru_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 261 | 145.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamatsukuri_misumaru_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 125 | 100.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamatsukuri_misumaru_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 261 | 180.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamatsukuri_misumaru_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/tamatsukuri_misumaru_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, magatama, puffy_short_sleeves, red_vest, solo, tiara, white_shirt, yellow_skirt, yin_yang_orb, blouse, red_shirt, jewelry, open_mouth, smile, two-tone_hair, barefoot, light_brown_hair, patterned_clothing, red_eyes |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | magatama | puffy_short_sleeves | red_vest | solo | tiara | white_shirt | yellow_skirt | yin_yang_orb | blouse | red_shirt | jewelry | open_mouth | smile | two-tone_hair | barefoot | light_brown_hair | patterned_clothing | red_eyes |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
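With only a single cluster listed, a quick sanity check is to count tag frequencies over the raw dataset and compare them against the cluster's tag list. A rough sketch, again assuming the raw archive was extracted to `dataset_dir` as in the loading code above:

```python
from collections import Counter

from waifuc.source import LocalSource

counter = Counter()
for item in LocalSource('dataset_dir'):
    # count each tag name once per image (meta['tags'] is assumed to
    # iterate over tag names, whether it is a list or a mapping)
    for tag in item.meta['tags']:
        counter[tag] += 1

# the most frequent tags should resemble the cluster tags in the table above
for tag, count in counter.most_common(15):
    print(f'{tag}: {count}')
```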
Repository: CyberHarem/tamatsukuri_misumaru_touhou
Hub tags: task_categories:text-to-image, size_categories:n<1K, license:mit, art, not-for-all-audiences, region:us
Created at: 2023-08-18T22:53:38+00:00
Card metadata: {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
Last modified: 2024-01-15T06:46:04+00:00
# Dataset of sashiromiya_sasha/左城宮則紗 (Touhou)

This is the dataset of sashiromiya_sasha/左城宮則紗 (Touhou), containing 21 images and their tags.

The core tags of this character are `long_hair, blue_hair, green_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 21 | 12.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sashiromiya_sasha_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 21 | 12.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sashiromiya_sasha_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 29 | 16.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sashiromiya_sasha_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 21 | 12.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sashiromiya_sasha_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 29 | 17.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sashiromiya_sasha_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/sashiromiya_sasha_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, blue_kimono, open_mouth |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | blue_kimono | open_mouth |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X |
Repository: CyberHarem/sashiromiya_sasha_touhou
Hub tags: task_categories:text-to-image, size_categories:n<1K, license:mit, art, not-for-all-audiences, region:us
Created at: 2023-08-18T22:59:15+00:00
Card metadata: {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
Last modified: 2024-01-15T04:25:42+00:00
# Dataset of rika (Touhou)

This is the dataset of rika (Touhou), containing 44 images and their tags.

The core tags of this character are `braid, brown_hair, long_hair, brown_eyes, twin_braids, bow, hair_bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 44 | 29.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rika_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 44 | 20.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rika_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 72 | 36.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rika_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 44 | 26.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rika_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 72 | 45.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rika_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/rika_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bangs, collared_shirt, solo, white_shirt, :d, belt, brown_footwear, brown_shorts, full_body, looking_at_viewer, open_mouth, simple_background, white_background, white_socks, blush, holding, wrench, breasts, kneehighs, loafers, long_sleeves, purple_bow, red_bowtie, short_sleeves, sleeves_rolled_up |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, smile, wrench, blush, sitting |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bangs | collared_shirt | solo | white_shirt | :d | belt | brown_footwear | brown_shorts | full_body | looking_at_viewer | open_mouth | simple_background | white_background | white_socks | blush | holding | wrench | breasts | kneehighs | loafers | long_sleeves | purple_bow | red_bowtie | short_sleeves | sleeves_rolled_up | smile | sitting |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | | | | | | | | | | | | X | | X | | | | | | | | | X | X |
Repository: CyberHarem/rika_touhou
Hub tags: task_categories:text-to-image, size_categories:n<1K, license:mit, art, not-for-all-audiences, region:us
Created at: 2023-08-18T23:08:02+00:00
Card metadata: {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
Last modified: 2024-01-15T05:39:54+00:00
# Dataset of asakura_rikako/朝倉理香子 (Touhou)

This is the dataset of asakura_rikako/朝倉理香子 (Touhou), containing 67 images and their tags.

The core tags of this character are `long_hair, purple_hair, glasses, purple_eyes, ribbon, hair_ribbon, bow, hairband, yellow_bow, white_ribbon, breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 67 | 46.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asakura_rikako_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 67 | 32.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asakura_rikako_touhou/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 119 | 59.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asakura_rikako_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 67 | 43.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asakura_rikako_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 119 | 73.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asakura_rikako_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/asakura_rikako_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_pants, buttons, collared_shirt, labcoat, long_sleeves, open_mouth, solo, standing, yellow_bowtie, full_body, shoes, simple_background, white_background, yellow_ribbon, holding, teeth, test_tube, very_long_hair, white_coat, white_hairband, white_shirt, :d, adjusting_eyewear, black_footwear, breast_pocket, chibi, hand_up, long_coat, looking_at_viewer, medium_breasts, neck_ribbon, parted_bangs, pink_eyes, rimless_eyewear, white_bow |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, collared_shirt, labcoat, long_sleeves, looking_at_viewer, solo, white_shirt, black_pants, closed_mouth, smile, very_long_hair, white_coat, white_hairband, yellow_bowtie, simple_background, adjusting_eyewear, buttons, cowboy_shot, holding, medium_breasts, parted_bangs, standing, white_background |
| 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, labcoat, bowtie, long_sleeves, looking_at_viewer, shirt, upper_body |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_shorts, looking_at_viewer, simple_background, solo, striped_thighhighs, very_long_hair, yellow_bowtie, collared_shirt, full_body, hair_bow, long_sleeves, purple_footwear, semi-rimless_eyewear, short_shorts, white_background, white_coat, yellow_ribbon, bangs, labcoat, purple_belt, purple_thighhighs, standing, white_bow, black-framed_eyewear, buckle, closed_mouth, grin, hand_on_hip, lace-up_boots, medium_breasts, open_coat, teeth, white_hairband, yellow_belt |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_pants | buttons | collared_shirt | labcoat | long_sleeves | open_mouth | solo | standing | yellow_bowtie | full_body | shoes | simple_background | white_background | yellow_ribbon | holding | teeth | test_tube | very_long_hair | white_coat | white_hairband | white_shirt | :d | adjusting_eyewear | black_footwear | breast_pocket | chibi | hand_up | long_coat | looking_at_viewer | medium_breasts | neck_ribbon | parted_bangs | pink_eyes | rimless_eyewear | white_bow | closed_mouth | smile | cowboy_shot | bowtie | shirt | upper_body | black_shorts | striped_thighhighs | hair_bow | purple_footwear | semi-rimless_eyewear | short_shorts | bangs | purple_belt | purple_thighhighs | black-framed_eyewear | buckle | grin | hand_on_hip | lace-up_boots | open_coat | yellow_belt |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | X | X | | | X | X | | X | | | X | X | X | X | | X | | | | | | X | X | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | X | | X | X | X | X | | X | X | X | | X | | X | X | X | | | | | | | | | X | X | | | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/asakura_rikako_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-18T23:15:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T04:47:43+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of asakura\_rikako/朝倉理香子 (Touhou) ========================================= This is the dataset of asakura\_rikako/朝倉理香子 (Touhou), containing 67 images and their tags. The core tags of this character are 'long\_hair, purple\_hair, glasses, purple\_eyes, ribbon, hair\_ribbon, bow, hairband, yellow\_bow, white\_ribbon, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
676afd6dd6c194609e6ddbb057ffda88a6c18bf4
# Dataset of yuki/ユキ (Touhou) This is the dataset of yuki/ユキ (Touhou), containing 112 images and their tags. The core tags of this character are `blonde_hair, hat, yellow_eyes, short_hair, bow, ribbon, hat_bow, black_headwear`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 112 | 75.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuki_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 112 | 55.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuki_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 178 | 97.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuki_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 112 | 70.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuki_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 178 | 120.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuki_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/yuki_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | black_skirt, black_vest, puffy_short_sleeves, white_bow, white_shirt, smile, fedora, white_ribbon, 1girl, bangs, solo, collared_shirt, open_mouth, simple_background, shoes, socks, back_bow, black_footwear, breasts, closed_mouth, collared_vest, frills, full_body, hand_on_hip, white_background | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, grin | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | black_skirt | black_vest | puffy_short_sleeves | white_bow | white_shirt | smile | fedora | white_ribbon | 1girl | bangs | solo | collared_shirt | open_mouth | simple_background | shoes | socks | back_bow | black_footwear | breasts | closed_mouth | collared_vest | frills | full_body | hand_on_hip | white_background | grin | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:-------------|:----------------------|:------------|:--------------|:--------|:---------|:---------------|:--------|:--------|:-------|:-----------------|:-------------|:--------------------|:--------|:--------|:-----------|:-----------------|:----------|:---------------|:----------------|:---------|:------------|:--------------|:-------------------|:-------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | | | | | | | | X | | X | | | | | | | | | | | | | | | X |
CyberHarem/yuki_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-18T23:28:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T05:01:22+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of yuki/ユキ (Touhou) =========================== This is the dataset of yuki/ユキ (Touhou), containing 112 images and their tags. The core tags of this character are 'blonde\_hair, hat, yellow\_eyes, short\_hair, bow, ribbon, hat\_bow, black\_headwear', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
183add3f2cfc4b49f83d39d09ae03445340f4d10
# Dataset of torisumi_horou/鳥澄珠烏 (Touhou) This is the dataset of torisumi_horou/鳥澄珠烏 (Touhou), containing 23 images and their tags. The core tags of this character are `multicolored_hair, white_hair, bow, short_hair, hat, red_bow, white_headwear, wings, yellow_eyes, black_hair, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 23 | 32.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torisumi_horou_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 23 | 19.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torisumi_horou_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 58 | 40.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torisumi_horou_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 23 | 29.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torisumi_horou_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 58 | 54.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/torisumi_horou_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/torisumi_horou_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, puffy_short_sleeves, solo, white_shirt, pink_vest, smile, closed_mouth, collared_shirt, looking_at_viewer, red_bowtie, red_socks, pink_shorts, book, frills, multicolored_wings, pink_skirt, shoes, white_background, white_footwear, belt, blush, full_body, rainbow_gradient, simple_background, test_tube | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | puffy_short_sleeves | solo | white_shirt | pink_vest | smile | closed_mouth | collared_shirt | looking_at_viewer | red_bowtie | red_socks | pink_shorts | book | frills | multicolored_wings | pink_skirt | shoes | white_background | white_footwear | belt | blush | full_body | rainbow_gradient | simple_background | test_tube | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:-------|:--------------|:------------|:--------|:---------------|:-----------------|:--------------------|:-------------|:------------|:--------------|:-------|:---------|:---------------------|:-------------|:--------|:-------------------|:-----------------|:-------|:--------|:------------|:-------------------|:--------------------|:------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/torisumi_horou_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-18T23:33:53+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T05:24:32+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of torisumi\_horou/鳥澄珠烏 (Touhou) ======================================== This is the dataset of torisumi\_horou/鳥澄珠烏 (Touhou), containing 23 images and their tags. The core tags of this character are 'multicolored\_hair, white\_hair, bow, short\_hair, hat, red\_bow, white\_headwear, wings, yellow\_eyes, black\_hair, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
fe915e6c91bdae01741c8ba0831278bf927d271b
# Dataset Card for "val_oneanswer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jing24/val_oneanswer
[ "region:us" ]
2023-08-18T23:35:44+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int32"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 9832949, "num_examples": 10570}], "download_size": 1675804, "dataset_size": 9832949}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-18T23:35:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "val_oneanswer" More Information needed
[ "# Dataset Card for \"val_oneanswer\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"val_oneanswer\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"val_oneanswer\"\n\nMore Information needed" ]
b54b3df2bdcce14d8c820708ca11a096915647bf
# Dataset of sunny_milk/サニーミルク/서니밀크 (Touhou) This is the dataset of sunny_milk/サニーミルク/서니밀크 (Touhou), containing 500 images and their tags. The core tags of this character are `short_hair, wings, blue_eyes, bow, twintails, blonde_hair, fairy_wings, fang, hair_bow, orange_hair, headdress, two_side_up`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 417.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sunny_milk_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 305.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sunny_milk_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 984 | 575.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sunny_milk_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 390.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sunny_milk_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 984 | 704.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sunny_milk_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/sunny_milk_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, open_mouth, solo, dress, full_body, simple_background, white_background, looking_at_viewer, sash, fairy, puffy_sleeves, mary_janes, :d, long_sleeves, red_skirt, white_socks, wide_sleeves, yellow_ascot, black_footwear, short_sleeves, blush, frills | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, ascot, dress, long_sleeves, looking_at_viewer, smile, solo, open_mouth, sash, wide_sleeves, puffy_sleeves, ribbon, blush | | 2 | 31 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, open_mouth, ascot, smile, blush, ribbon | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, ascot, blue_sky, cloud, open_mouth, sash, solo, blush, day, long_sleeves, smile, wide_sleeves, dress, ribbon | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | ascot, open_mouth, smile, 3girls, drill_hair, purple_eyes | | 5 | 18 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | loli, nipples, 1girl, flat_chest, nude, blush, pussy, solo, navel, open_mouth, smile | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, hetero, solo_focus, 1boy, blush, penis, cum, loli, censored, sex, pov, vaginal, :>=, fellatio | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | dress | full_body | simple_background | white_background | looking_at_viewer | sash | fairy | puffy_sleeves | mary_janes | :d | long_sleeves | red_skirt | white_socks | wide_sleeves | yellow_ascot | black_footwear | short_sleeves | blush | frills | ascot | smile | ribbon | blue_sky | cloud | day | 3girls | drill_hair | purple_eyes | loli | nipples | flat_chest | nude | pussy | navel | hetero | solo_focus | 1boy | penis | cum | censored | sex | pov | vaginal | :>= | fellatio | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:--------|:------------|:--------------------|:-------------------|:--------------------|:-------|:--------|:----------------|:-------------|:-----|:---------------|:------------|:--------------|:---------------|:---------------|:-----------------|:----------------|:--------|:---------|:--------|:--------|:---------|:-----------|:--------|:------|:---------|:-------------|:--------------|:-------|:----------|:-------------|:-------|:--------|:--------|:---------|:-------------|:-------|:--------|:------|:-----------|:------|:------|:----------|:------|:-----------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | X | X | | X | | | X | | | X | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 31 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | | | | | | | | | | | | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | | | | | X | | | | | X | | | X | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | X | | | | | | | | | | | | | | | | | | | | | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | 5 | 18 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/sunny_milk_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-18T23:35:48+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-14T22:57:39+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sunny\_milk/サニーミルク/서니밀크 (Touhou) =========================================== This is the dataset of sunny\_milk/サニーミルク/서니밀크 (Touhou), containing 500 images and their tags. The core tags of this character are 'short\_hair, wings, blue\_eyes, bow, twintails, blonde\_hair, fairy\_wings, fang, hair\_bow, orange\_hair, headdress, two\_side\_up', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
30537d261504b0e19bbeffa5261c322b6a709671
# Dataset of ruukoto (Touhou) This is the dataset of ruukoto (Touhou), containing 40 images and their tags. The core tags of this character are `green_hair, maid_headdress, short_hair, bow, blue_eyes, red_bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 40 | 27.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruukoto_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 40 | 19.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruukoto_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 64 | 32.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruukoto_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 40 | 25.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruukoto_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 64 | 43.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ruukoto_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ruukoto_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, maid, solo, apron, smile, blush, dress, open_mouth | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | blue_dress, 1girl, puffy_short_sleeves, solo, frills, maid_apron, looking_at_viewer, smile, holding, white_apron, bangs, open_mouth, simple_background, full_body, red_bowtie, waist_apron, broom, mary_janes, mop, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | maid | solo | apron | smile | blush | dress | open_mouth | blue_dress | puffy_short_sleeves | frills | maid_apron | looking_at_viewer | holding | white_apron | bangs | simple_background | full_body | red_bowtie | waist_apron | broom | mary_janes | mop | white_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------|:--------|:--------|:--------|:--------|:-------------|:-------------|:----------------------|:---------|:-------------|:--------------------|:----------|:--------------|:--------|:--------------------|:------------|:-------------|:--------------|:--------|:-------------|:------|:-------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/ruukoto_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-18T23:38:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T06:35:56+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ruukoto (Touhou) =========================== This is the dataset of ruukoto (Touhou), containing 40 images and their tags. The core tags of this character are 'green\_hair, maid\_headdress, short\_hair, bow, blue\_eyes, red\_bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
580f460c4167daeae337e4319afe934ea2cd38b6
# Dataset Card for "complexity_ranked" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cassanof/complexity_ranked
[ "region:us" ]
2023-08-18T23:50:28+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "complexity", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "complexity_ranked", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1781623, "num_examples": 932}], "download_size": 0, "dataset_size": 1781623}}
2023-08-18T23:51:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "complexity_ranked" More Information needed
[ "# Dataset Card for \"complexity_ranked\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"complexity_ranked\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"complexity_ranked\"\n\nMore Information needed" ]
fdcda873c9494c99fa190ef2d90593635f722943
# Dataset of miramikaru_riran (Touhou) This is the dataset of miramikaru_riran (Touhou), containing 205 images and their tags. The core tags of this character are `blonde_hair, animal_ears, fox_ears, bangs, fox_girl, animal_ear_fluff, medium_hair, asymmetrical_hair, sidelocks, fox_tail, tail, red_eyes, hair_between_eyes, fang, ahoge, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 205 | 166.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miramikaru_riran_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 205 | 118.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miramikaru_riran_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 487 | 246.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miramikaru_riran_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 205 | 157.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miramikaru_riran_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 487 | 305.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miramikaru_riran_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/miramikaru_riran_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, brown_vest, collared_shirt, white_shirt, short_sleeves, black_necktie, solo, open_mouth, smile, looking_at_viewer, simple_background, upper_body, white_background, blush, brown_skirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_vest | collared_shirt | white_shirt | short_sleeves | black_necktie | solo | open_mouth | smile | looking_at_viewer | simple_background | upper_body | white_background | blush | brown_skirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-----------------|:--------------|:----------------|:----------------|:-------|:-------------|:--------|:--------------------|:--------------------|:-------------|:-------------------|:--------|:--------------| | 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/miramikaru_riran_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T00:08:14+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T07:51:41+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of miramikaru\_riran (Touhou) ===================================== This is the dataset of miramikaru\_riran (Touhou), containing 205 images and their tags. The core tags of this character are 'blonde\_hair, animal\_ears, fox\_ears, bangs, fox\_girl, animal\_ear\_fluff, medium\_hair, asymmetrical\_hair, sidelocks, fox\_tail, tail, red\_eyes, hair\_between\_eyes, fang, ahoge, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
65492ec9596dabd419f8fa0248c953934e4e442c
## Dataset Description TODO ### Dataset Summary TODO ## Dataset Creation TODO
ClaudioCU/Perritos-y-no-Perritos
[ "task_categories:image-classification", "annotations_creators:found", "size_categories:n<1K", "source_datasets:original", "license:apache-2.0", "animals", "dogs", "creature-dataset", "region:us" ]
2023-08-19T00:20:31+00:00
{"annotations_creators": ["found"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["binary-class-image-classification"], "pretty_name": "Perritos-y-no-Perritos", "tags": ["animals", "dogs", "creature-dataset"]}
2023-08-19T01:53:38+00:00
[]
[]
TAGS #task_categories-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #animals #dogs #creature-dataset #region-us
## Dataset Description TODO ### Dataset Summary TODO ## Dataset Creation TODO
[ "## Dataset Description\n\nTODO", "### Dataset Summary\n\nTODO", "## Dataset Creatioon\n\nTODO" ]
[ "TAGS\n#task_categories-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #animals #dogs #creature-dataset #region-us \n", "## Dataset Description\n\nTODO", "### Dataset Summary\n\nTODO", "## Dataset Creatioon\n\nTODO" ]
[ 66, 6, 8, 8 ]
[ "passage: TAGS\n#task_categories-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #animals #dogs #creature-dataset #region-us \n## Dataset Description\n\nTODO### Dataset Summary\n\nTODO## Dataset Creatioon\n\nTODO" ]
02fe0b1ec0e81579da35635a3e6a4697b95c23bb
# Dataset of komano_aunn/高麗野あうん/코마노아운 (Touhou) This is the dataset of komano_aunn/高麗野あうん/코마노아운 (Touhou), containing 500 images and their tags. The core tags of this character are `horns, single_horn, long_hair, green_hair, curly_hair, green_eyes, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 461.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komano_aunn_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 306.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komano_aunn_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1027 | 596.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komano_aunn_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 423.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komano_aunn_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1027 | 776.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komano_aunn_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/komano_aunn_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, collared_shirt, looking_at_viewer, open_mouth, paw_pose, short_sleeves, solo, white_shorts, :d, buttons, cloud_print, fang, heart, very_long_hair | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, cloud_print, collared_shirt, short_sleeves, simple_background, solo, white_background, white_shorts, looking_at_viewer, smile, paw_pose, open_mouth, blush, buttons | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, cloud_print, collared_shirt, open_mouth, short_sleeves, signature, smile, solo, white_background, white_shorts, full_body, simple_background, barefoot, fang | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | collared_shirt | looking_at_viewer | open_mouth | paw_pose | short_sleeves | solo | white_shorts | :d | buttons | cloud_print | fang | heart | very_long_hair | simple_background | white_background | smile | signature | full_body | barefoot | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------------|:--------------------|:-------------|:-----------|:----------------|:-------|:---------------|:-----|:----------|:--------------|:-------|:--------|:-----------------|:--------------------|:-------------------|:--------|:------------|:------------|:-----------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | | X | X | | | | X | X | X | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | X | X | X | | | X | X | | | X | X | X | X | X | X |
CyberHarem/komano_aunn_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T00:31:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T00:54:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of komano\_aunn/高麗野あうん/코마노아운 (Touhou) ============================================= This is the dataset of komano\_aunn/高麗野あうん/코마노아운 (Touhou), containing 500 images and their tags. The core tags of this character are 'horns, single\_horn, long\_hair, green\_hair, curly\_hair, green\_eyes, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
6cb8352d3a3554f4d0c25130ad9f46a5c29ec71f
# Dataset Card for "fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-19T00:43:49+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2513915, "num_examples": 4800}], "download_size": 236939, "dataset_size": 2513915}}
2023-08-19T00:43:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 38 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_chef_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
e9832c03908e059ac98d6c67f804b44c10dfa8df
# Dataset Card for "fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-19T00:46:14+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2448421, "num_examples": 4800}], "download_size": 181885, "dataset_size": 2448421}}
2023-08-19T00:46:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 40 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_mechanic_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
9226df59c9f2fc1d794f7aef4886ba41fd5c8c04
# Dataset Card for "fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-19T00:48:33+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2154888, "num_examples": 4800}], "download_size": 244714, "dataset_size": 2154888}}
2023-08-19T00:48:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 39 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_doctor_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
e3c191b1805ec798e59f7d02090ad51403dabebb
# Dataset Card for "fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-19T00:52:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2480232, "num_examples": 4800}], "download_size": 179504, "dataset_size": 2480232}}
2023-08-19T00:52:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 39 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_firefighter_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
46c56fd8466ca27a683f05b0bd00aed2376e4e06
# Dataset Card for "fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-19T00:57:35+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2102429, "num_examples": 4800}], "download_size": 304923, "dataset_size": 2102429}}
2023-08-19T00:57:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 38 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_pilot_google_flan_t5_xl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
b439794bcfa6b0508d5250987abb6b87b53324b9
# Dataset Card for "generate_sub_4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jing24/generate_sub_4
[ "region:us" ]
2023-08-19T01:10:47+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 42500621, "num_examples": 46640}], "download_size": 0, "dataset_size": 42500621}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-19T12:34:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "generate_sub_4" More Information needed
[ "# Dataset Card for \"generate_sub_4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"generate_sub_4\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"generate_sub_4\"\n\nMore Information needed" ]
99b7daf5203eee7cc3d3c0e62ac1ac5cd359cd7c
# Dataset Card for "Wikipedia-Corpora-Report" This dataset is used as a metadata database for the online [WIKIPEDIA CORPORA META REPORT](https://wikipedia-corpora-report.streamlit.app/) dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from [Wikimedia Statistics](https://stats.wikimedia.org/#/all-projects), then processed and uploaded to the Hugging Face Hub as a dataset. For more details about the dataset, please **read** and **cite** our paper: ```bash @inproceedings{alshahrani-etal-2023-performance, title = "{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}", author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna", booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)", month = December, year = "2023", address = "Singapore (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.arabicnlp-1.19", doi = "10.18653/v1/2023.arabicnlp-1.19", pages = "218--231", abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.", }
SaiedAlshahrani/Wikipedia-Corpora-Report
[ "size_categories:1K<n<10K", "license:mit", "region:us" ]
2023-08-19T01:28:29+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "pretty_name": "Wikipedia-Corpora-Report"}
2024-01-05T15:12:31+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #region-us
# Dataset Card for "Wikipedia-Corpora-Report" This dataset is used as a metadata database for the online WIKIPEDIA CORPORA META REPORT dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from Wikimedia Statistics, then processed and uploaded to the Hugging Face Hub as a dataset. For more details about the dataset, please read and cite our paper: '''bash @inproceedings{alshahrani-etal-2023-performance, title = "{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}", author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna", booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)", month = December, year = "2023", address = "Singapore (Hybrid)", publisher = "Association for Computational Linguistics", url = "URL doi = "10.18653/v1/2023.arabicnlp-1.19", pages = "218--231", abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.", }
[ "# Dataset Card for \"Wikipedia-Corpora-Report\"\n\nThis dataset is used as a metadata database for the online WIKIPEDIA CORPORA META REPORT dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from Wikimedia Statistics, then processed and uploaded to the Hugging Face Hub as a dataset. \n\nFor more details about the dataset, please read and cite our paper:\n\n'''bash\n@inproceedings{alshahrani-etal-2023-performance,\n title = \"{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}\",\n author = \"Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna\",\n booktitle = \"Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)\",\n month = December,\n year = \"2023\",\n address = \"Singapore (Hybrid)\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n doi = \"10.18653/v1/2023.arabicnlp-1.19\",\n pages = \"218--231\",\n abstract = \"Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.\",\n}" ]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #region-us \n", "# Dataset Card for \"Wikipedia-Corpora-Report\"\n\nThis dataset is used as a metadata database for the online WIKIPEDIA CORPORA META REPORT dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from Wikimedia Statistics, then processed and uploaded to the Hugging Face Hub as a dataset. \n\nFor more details about the dataset, please read and cite our paper:\n\n'''bash\n@inproceedings{alshahrani-etal-2023-performance,\n title = \"{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}\",\n author = \"Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna\",\n booktitle = \"Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)\",\n month = December,\n year = \"2023\",\n address = \"Singapore (Hybrid)\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n doi = \"10.18653/v1/2023.arabicnlp-1.19\",\n pages = \"218--231\",\n abstract = \"Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.\",\n}" ]
[ 23, 697 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #region-us \n" ]
59d83b640b2cc84df354f533b07ac0ef3462c682
# Dataset of kicchou_yachie/吉弔八千慧 (Touhou) This is the dataset of kicchou_yachie/吉弔八千慧 (Touhou), containing 500 images and their tags. The core tags of this character are `short_hair, blonde_hair, horns, dragon_horns, red_eyes, dragon_tail, tail, bangs, dragon_girl`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 655.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kicchou_yachie_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 373.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kicchou_yachie_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1213 | 794.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kicchou_yachie_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 582.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kicchou_yachie_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1213 | 1.10 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kicchou_yachie_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kicchou_yachie_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, antlers, black_footwear, blue_shirt, blue_skirt, collarbone, full_body, looking_at_viewer, mary_janes, pleated_skirt, short_sleeves, simple_background, solo, standing, turtle_shell, white_socks, green_skirt, white_background, grin, blue_bow, teeth | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, antlers, black_footwear, blue_shirt, blue_skirt, collarbone, full_body, looking_at_viewer, mary_janes, open_mouth, smile, solo, turtle_shell, white_socks, simple_background, white_background, green_skirt, short_sleeves, standing, teeth | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, antlers, blue_shirt, blue_skirt, solo, turtle_shell, collarbone, looking_at_viewer, open_mouth, simple_background, smile, white_background, green_skirt, long_sleeves, short_sleeves | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, antlers, blue_shirt, looking_at_viewer, short_sleeves, solo, blue_bow, closed_mouth, collarbone, turtle_shell, smile, upper_body, white_background, blush, simple_background, skirt | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, antlers, blue_shirt, closed_mouth, feet_out_of_frame, looking_at_viewer, short_sleeves, smile, solo, turtle_shell, white_socks, blue_skirt, bow, collarbone, standing, holding, pleated_skirt | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, antlers, long_sleeves, looking_at_viewer, smile, solo, blue_kimono, turtle_shell, wide_sleeves, closed_mouth, holding, alternate_costume, collarbone, obi | | 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, antlers, china_dress, looking_at_viewer, solo, alternate_costume, smile, blue_dress, sitting, simple_background, holding_smoking_pipe, turtle_shell, earrings, medium_breasts, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | antlers | black_footwear | blue_shirt | blue_skirt | collarbone | full_body | looking_at_viewer | mary_janes | pleated_skirt | short_sleeves | simple_background | solo | standing | turtle_shell | white_socks | green_skirt | 
white_background | grin | blue_bow | teeth | open_mouth | smile | long_sleeves | closed_mouth | upper_body | blush | skirt | feet_out_of_frame | bow | holding | blue_kimono | wide_sleeves | alternate_costume | obi | china_dress | blue_dress | sitting | holding_smoking_pipe | earrings | medium_breasts | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:-----------------|:-------------|:-------------|:-------------|:------------|:--------------------|:-------------|:----------------|:----------------|:--------------------|:-------|:-----------|:---------------|:--------------|:--------------|:-------------------|:-------|:-----------|:--------|:-------------|:--------|:---------------|:---------------|:-------------|:--------|:--------|:--------------------|:------|:----------|:--------------|:---------------|:--------------------|:------|:--------------|:-------------|:----------|:-----------------------|:-----------|:-----------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | X | X | | X | | | X | X | X | | X | | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | X | | X | | X | | | X | X | X | | X | | | X | | X | | | X | | X | X | X | X | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | X | X | | X | | X | X | | X | X | X | X | | | | | | | X | | X | | | | X | X | X | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | | X | | X | | | | | X | | X | | | | | | | | X | X | X | | | | | | X | X | X | X | X | | | | | | | | 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | | | | X | | | | X | X | | X | | | X | | | | | X | | | | | | | | | | | X | | X | X | X | X | X | X |
CyberHarem/kicchou_yachie_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T01:35:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T01:11:45+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kicchou\_yachie/吉弔八千慧 (Touhou) ========================================= This is the dataset of kicchou\_yachie/吉弔八千慧 (Touhou), containing 500 images and their tags. The core tags of this character are 'short\_hair, blonde\_hair, horns, dragon\_horns, red\_eyes, dragon\_tail, tail, bangs, dragon\_girl', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
be0f553e5ee9be47d0e133714f913dee3450d344
# Dataset Card for "final_train_v2_test_500000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v2_test_500000
[ "region:us" ]
2023-08-19T02:09:12+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 304216.2, "num_examples": 900}, {"name": "test", "num_bytes": 33801.8, "num_examples": 100}], "download_size": 154115, "dataset_size": 338018.0}}
2023-08-19T02:09:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v2_test_500000" More Information needed
[ "# Dataset Card for \"final_train_v2_test_500000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v2_test_500000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v2_test_500000\"\n\nMore Information needed" ]
cdf5bfb7850c98635a640402d0d30bb3054562a7
# Dataset Card for "music-wiki" 📚🎵 Introducing **music-wiki** 📊🎶 Our data collection process unfolds as follows: 1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels. 2) Simultaneously, tapping into the rich MusicBrainz dump, we encounter a staggering 11 million unique music entities spanning 10 distinct categories. These entities serve as the foundation for utilizing the Wikipedia API to meticulously crawl corresponding pages. The culmination of these efforts results in the assembly of data: 167k pages from the first method and an additional 193k pages through the second method. While totaling at 361k pages, this compilation provides a substantial groundwork for establishing a Music-Text-Database. 🎵📚🔍 - **Repository:** [music-wiki](https://github.com/seungheondoh/music-wiki) [![image](https://i.imgur.com/NJrjVyy.png)](https://github.com/seungheondoh/music-wiki) ### splits - wikipedia_music: 167890 - musicbrainz_genre: 1459 - musicbrainz_instrument: 872 - musicbrainz_artist: 7002 - musicbrainz_release: 163068 - musicbrainz_release_group: 15942 - musicbrainz_label: 158 - musicbrainz_work: 4282 - musicbrainz_series: 12 - musicbrainz_place: 49 - musicbrainz_event: 16 - musicbrainz_area: 360
seungheondoh/music-wiki
[ "size_categories:100K<n<1M", "language:en", "license:mit", "music", "wiki", "region:us" ]
2023-08-19T02:20:36+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "tags": ["music", "wiki"]}
2023-08-19T03:16:06+00:00
[]
[ "en" ]
TAGS #size_categories-100K<n<1M #language-English #license-mit #music #wiki #region-us
# Dataset Card for "music-wiki" Introducing music-wiki Our data collection process unfolds as follows: 1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels. 2) Simultaneously, tapping into the rich MusicBrainz dump, we encounter a staggering 11 million unique music entities spanning 10 distinct categories. These entities serve as the foundation for utilizing the Wikipedia API to meticulously crawl corresponding pages. The culmination of these efforts results in the assembly of data: 167k pages from the first method and an additional 193k pages through the second method. While totaling at 361k pages, this compilation provides a substantial groundwork for establishing a Music-Text-Database. - Repository: music-wiki ![image](URL ### splits - wikipedia_music: 167890 - musicbrainz_genre: 1459 - musicbrainz_instrument: 872 - musicbrainz_artist: 7002 - musicbrainz_release: 163068 - musicbrainz_release_group: 15942 - musicbrainz_label: 158 - musicbrainz_work: 4282 - musicbrainz_series: 12 - musicbrainz_place: 49 - musicbrainz_event: 16 - musicbrainz_area: 360
[ "# Dataset Card for \"music-wiki\"\n\n Introducing music-wiki \n\n Our data collection process unfolds as follows: \n1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels.\n2) Simultaneously, tapping into the rich MusicBrainz dump, we encounter a staggering 11 million unique music entities spanning 10 distinct categories. These entities serve as the foundation for utilizing the Wikipedia API to meticulously crawl corresponding pages.\n\nThe culmination of these efforts results in the assembly of data: 167k pages from the first method and an additional 193k pages through the second method. \nWhile totaling at 361k pages, this compilation provides a substantial groundwork for establishing a Music-Text-Database. \n\n\n- Repository: music-wiki\n\n![image](URL", "### splits\n - wikipedia_music: 167890\n - musicbrainz_genre: 1459\n - musicbrainz_instrument: 872\n - musicbrainz_artist: 7002\n - musicbrainz_release: 163068\n - musicbrainz_release_group: 15942\n - musicbrainz_label: 158\n - musicbrainz_work: 4282\n - musicbrainz_series: 12\n - musicbrainz_place: 49\n - musicbrainz_event: 16\n - musicbrainz_area: 360" ]
[ "TAGS\n#size_categories-100K<n<1M #language-English #license-mit #music #wiki #region-us \n", "# Dataset Card for \"music-wiki\"\n\n Introducing music-wiki \n\n Our data collection process unfolds as follows: \n1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels.\n2) Simultaneously, tapping into the rich MusicBrainz dump, we encounter a staggering 11 million unique music entities spanning 10 distinct categories. These entities serve as the foundation for utilizing the Wikipedia API to meticulously crawl corresponding pages.\n\nThe culmination of these efforts results in the assembly of data: 167k pages from the first method and an additional 193k pages through the second method. \nWhile totaling at 361k pages, this compilation provides a substantial groundwork for establishing a Music-Text-Database. \n\n\n- Repository: music-wiki\n\n![image](URL", "### splits\n - wikipedia_music: 167890\n - musicbrainz_genre: 1459\n - musicbrainz_instrument: 872\n - musicbrainz_artist: 7002\n - musicbrainz_release: 163068\n - musicbrainz_release_group: 15942\n - musicbrainz_label: 158\n - musicbrainz_work: 4282\n - musicbrainz_series: 12\n - musicbrainz_place: 49\n - musicbrainz_event: 16\n - musicbrainz_area: 360" ]
[ 31, 200, 123 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-English #license-mit #music #wiki #region-us \n# Dataset Card for \"music-wiki\"\n\n Introducing music-wiki \n\n Our data collection process unfolds as follows: \n1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels.\n2) Simultaneously, tapping into the rich MusicBrainz dump, we encounter a staggering 11 million unique music entities spanning 10 distinct categories. These entities serve as the foundation for utilizing the Wikipedia API to meticulously crawl corresponding pages.\n\nThe culmination of these efforts results in the assembly of data: 167k pages from the first method and an additional 193k pages through the second method. \nWhile totaling at 361k pages, this compilation provides a substantial groundwork for establishing a Music-Text-Database. \n\n\n- Repository: music-wiki\n\n![image](URL### splits\n - wikipedia_music: 167890\n - musicbrainz_genre: 1459\n - musicbrainz_instrument: 872\n - musicbrainz_artist: 7002\n - musicbrainz_release: 163068\n - musicbrainz_release_group: 15942\n - musicbrainz_label: 158\n - musicbrainz_work: 4282\n - musicbrainz_series: 12\n - musicbrainz_place: 49\n - musicbrainz_event: 16\n - musicbrainz_area: 360" ]
0be75d08f4843a97edcc248491d12c8362e1b82b
# Dataset of eternity_larva/エタニティラルバ (Touhou) This is the dataset of eternity_larva/エタニティラルバ (Touhou), containing 500 images and their tags. The core tags of this character are `butterfly_wings, wings, short_hair, leaf_on_head, aqua_hair, hair_between_eyes, orange_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 591.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eternity_larva_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 347.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eternity_larva_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1109 | 717.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eternity_larva_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 525.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eternity_larva_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1109 | 980.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eternity_larva_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/eternity_larva_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, antennae, blush, fairy, green_dress, leaf, multicolored_dress, open_mouth, short_sleeves, smile, solo, upper_body, yellow_eyes | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, antennae, barefoot, fairy, green_dress, leaf, multicolored_dress, short_sleeves, smile, solo, full_body, open_mouth, blush, brown_eyes | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, antennae, blush, closed_mouth, fairy, green_dress, leaf, multicolored_dress, short_sleeves, smile, solo, yellow_eyes, feet_out_of_frame | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, antennae, closed_mouth, fairy, green_dress, leaf, multicolored_dress, short_sleeves, simple_background, solo, upper_body, white_background, smile, blush, looking_at_viewer, yellow_eyes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | antennae | blush | fairy | green_dress | leaf | multicolored_dress | open_mouth | short_sleeves | smile | solo | upper_body | yellow_eyes | barefoot | full_body | brown_eyes | closed_mouth | feet_out_of_frame | simple_background | white_background | looking_at_viewer | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:--------|:--------------|:-------|:---------------------|:-------------|:----------------|:--------|:-------|:-------------|:--------------|:-----------|:------------|:-------------|:---------------|:--------------------|:--------------------|:-------------------|:--------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | X | X | X | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | | X | X | X | | X | | | | X | X | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | 
![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | X | X | | X | X | X | X | X | | | | X | | X | X | X |
CyberHarem/eternity_larva_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T02:24:41+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T03:22:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of eternity\_larva/エタニティラルバ (Touhou) ============================================ This is the dataset of eternity\_larva/エタニティラルバ (Touhou), containing 500 images and their tags. The core tags of this character are 'butterfly\_wings, wings, short\_hair, leaf\_on\_head, aqua\_hair, hair\_between\_eyes, orange\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
52024a59d308fadf6850614d5e285f507468db77
# Dataset Card for "final_train_v4_test_20000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_20000
[ "region:us" ]
2023-08-19T02:26:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5752586.7, "num_examples": 18000}, {"name": "test", "num_bytes": 639176.3, "num_examples": 2000}], "download_size": 2775469, "dataset_size": 6391763.0}}
2023-08-19T02:26:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_20000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_20000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_20000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_20000\"\n\nMore Information needed" ]
89e81569dfaaa5a4f37fefc2708b2d7ce85ccf59
# Dataset Card for "final_train_v4_test_40000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_40000
[ "region:us" ]
2023-08-19T02:26:23+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5756432.4, "num_examples": 18000}, {"name": "test", "num_bytes": 639603.6, "num_examples": 2000}], "download_size": 2778012, "dataset_size": 6396036.0}}
2023-08-19T02:26:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_40000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_40000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_40000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_40000\"\n\nMore Information needed" ]
626945e6c37f0a16212174a88bc5a958d96e5f25
# Dataset Card for "final_train_v4_test_60000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_60000
[ "region:us" ]
2023-08-19T02:26:28+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5773223.7, "num_examples": 18000}, {"name": "test", "num_bytes": 641469.3, "num_examples": 2000}], "download_size": 2789176, "dataset_size": 6414693.0}}
2023-08-19T02:26:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_60000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_60000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_60000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_60000\"\n\nMore Information needed" ]
f29693d605308d8db71315949aeff512dc740092
# Dataset Card for "final_train_v4_test_80000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_80000
[ "region:us" ]
2023-08-19T02:26:32+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5763784.5, "num_examples": 18000}, {"name": "test", "num_bytes": 640420.5, "num_examples": 2000}], "download_size": 2785737, "dataset_size": 6404205.0}}
2023-08-19T02:26:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_80000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_80000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_80000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_80000\"\n\nMore Information needed" ]
d65a438060423ef4e6c58eb79017d3c062de32ef
# Dataset Card for "final_train_v4_test_100000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_100000
[ "region:us" ]
2023-08-19T02:26:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5754300.3, "num_examples": 18000}, {"name": "test", "num_bytes": 639366.7, "num_examples": 2000}], "download_size": 2776539, "dataset_size": 6393667.0}}
2023-08-19T02:26:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_100000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_100000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_100000\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_100000\"\n\nMore Information needed" ]
995975d319ea520f997debd2c88ad74eef4f4053
# Dataset Card for "final_train_v4_test_120000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_120000
[ "region:us" ]
2023-08-19T02:26:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5770639.8, "num_examples": 18000}, {"name": "test", "num_bytes": 641182.2, "num_examples": 2000}], "download_size": 2789087, "dataset_size": 6411822.0}}
2023-08-19T02:26:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_120000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_120000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_120000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_120000\"\n\nMore Information needed" ]
fb912cd7dd78f3e87c3bc8a8acd4a7e16ebf94fa
# Dataset Card for "final_train_v4_test_140000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_140000
[ "region:us" ]
2023-08-19T02:26:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5764600.8, "num_examples": 18000}, {"name": "test", "num_bytes": 640511.2, "num_examples": 2000}], "download_size": 2782749, "dataset_size": 6405112.0}}
2023-08-19T02:26:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_140000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_140000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_140000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_140000\"\n\nMore Information needed" ]
45f3ea6e74fd1a7f16340c4c2c2d2554005ad860
# Dataset Card for "final_train_v4_test_160000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_160000
[ "region:us" ]
2023-08-19T02:26:49+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5763884.4, "num_examples": 18000}, {"name": "test", "num_bytes": 640431.6, "num_examples": 2000}], "download_size": 2783712, "dataset_size": 6404316.0}}
2023-08-19T02:26:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_160000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_160000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_160000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_160000\"\n\nMore Information needed" ]
06e0b14734c4d1ac506a154b4b12ec093b43e073
# Dataset Card for "final_train_v4_test_180000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_180000
[ "region:us" ]
2023-08-19T02:26:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5783295.6, "num_examples": 18000}, {"name": "test", "num_bytes": 642588.4, "num_examples": 2000}], "download_size": 2793014, "dataset_size": 6425884.0}}
2023-08-19T02:26:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_180000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_180000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_180000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_180000\"\n\nMore Information needed" ]
1f3c8850b06371dd7d82d0406eed08b0ee2e9ba1
# Dataset Card for "final_train_v4_test_200000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_200000
[ "region:us" ]
2023-08-19T02:26:57+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5758853.4, "num_examples": 18000}, {"name": "test", "num_bytes": 639872.6, "num_examples": 2000}], "download_size": 2783998, "dataset_size": 6398726.0}}
2023-08-19T02:27:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_200000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_200000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_200000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_200000\"\n\nMore Information needed" ]
c231c5cfcb6838c4234086f62c7dcd800e4ee388
# Dataset Card for "final_train_v4_test_220000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_220000
[ "region:us" ]
2023-08-19T02:27:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5748833.7, "num_examples": 18000}, {"name": "test", "num_bytes": 638759.3, "num_examples": 2000}], "download_size": 2780155, "dataset_size": 6387593.0}}
2023-08-19T02:27:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_220000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_220000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_220000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_220000\"\n\nMore Information needed" ]
dbfd449393b5e90138ad8e4878db78d9dac4f489
# Dataset Card for "final_train_v4_test_240000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_240000
[ "region:us" ]
2023-08-19T02:27:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5783176.8, "num_examples": 18000}, {"name": "test", "num_bytes": 642575.2, "num_examples": 2000}], "download_size": 2790764, "dataset_size": 6425752.0}}
2023-08-19T02:27:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_240000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_240000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_240000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_240000\"\n\nMore Information needed" ]
4a6328040d213faaf3b0d74a6898307e5b56761e
# Dataset Card for "final_train_v4_test_260000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_260000
[ "region:us" ]
2023-08-19T02:27:09+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6664618.8, "num_examples": 18000}, {"name": "test", "num_bytes": 740513.2, "num_examples": 2000}], "download_size": 3195836, "dataset_size": 7405132.0}}
2023-08-19T02:27:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_260000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_260000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_260000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_260000\"\n\nMore Information needed" ]
88dde10c690c0039c7a939a05916ed8e7a2ac1e9
# Dataset Card for "final_train_v4_test_280000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_280000
[ "region:us" ]
2023-08-19T02:27:13+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6704579.7, "num_examples": 18000}, {"name": "test", "num_bytes": 744953.3, "num_examples": 2000}], "download_size": 3210023, "dataset_size": 7449533.0}}
2023-08-19T02:27:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_280000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_280000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_280000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_280000\"\n\nMore Information needed" ]
0bcd134bef9c5b5c4d1e743dd61dbda14ca9a88d
# Dataset Card for "final_train_v4_test_300000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_300000
[ "region:us" ]
2023-08-19T02:27:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6724266.3, "num_examples": 18000}, {"name": "test", "num_bytes": 747140.7, "num_examples": 2000}], "download_size": 3220758, "dataset_size": 7471407.0}}
2023-08-19T02:27:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_300000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_300000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_300000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_300000\"\n\nMore Information needed" ]
fc47ca9d86a3907ba370a100d5caab7ab16843aa
# Dataset Card for "final_train_v4_test_320000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_320000
[ "region:us" ]
2023-08-19T02:27:22+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6731992.8, "num_examples": 18000}, {"name": "test", "num_bytes": 747999.2, "num_examples": 2000}], "download_size": 3222890, "dataset_size": 7479992.0}}
2023-08-19T02:27:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_320000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_320000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_320000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_320000\"\n\nMore Information needed" ]
ade96eb7c9381a3e78290db22584c8988ff6e0a1
# Dataset Card for "final_train_v4_test_340000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_340000
[ "region:us" ]
2023-08-19T02:27:26+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6698695.5, "num_examples": 18000}, {"name": "test", "num_bytes": 744299.5, "num_examples": 2000}], "download_size": 3209889, "dataset_size": 7442995.0}}
2023-08-19T02:27:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_340000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_340000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_340000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_340000\"\n\nMore Information needed" ]
df5cbb3adc80dd6873424643c54f50ce401d720d
# Dataset Card for "final_train_v4_test_360000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_360000
[ "region:us" ]
2023-08-19T02:27:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6741504.9, "num_examples": 18000}, {"name": "test", "num_bytes": 749056.1, "num_examples": 2000}], "download_size": 3235727, "dataset_size": 7490561.0}}
2023-08-19T02:27:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_360000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_360000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_360000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_360000\"\n\nMore Information needed" ]
3ca1d29fbe2a837522789fccd519a513eefa9f26
# Dataset Card for "final_train_v4_test_380000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_380000
[ "region:us" ]
2023-08-19T02:27:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6729122.7, "num_examples": 18000}, {"name": "test", "num_bytes": 747680.3, "num_examples": 2000}], "download_size": 3220998, "dataset_size": 7476803.0}}
2023-08-19T02:27:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_380000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_380000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_380000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_380000\"\n\nMore Information needed" ]
5e4df808f54007e3789151aaf916071f5b6fbfa1
# Dataset Card for "final_train_v4_test_400000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_400000
[ "region:us" ]
2023-08-19T02:27:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6678342.9, "num_examples": 18000}, {"name": "test", "num_bytes": 742038.1, "num_examples": 2000}], "download_size": 3194440, "dataset_size": 7420381.0}}
2023-08-19T02:27:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_400000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_400000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_400000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_400000\"\n\nMore Information needed" ]
cebaef4432bdb554b53b0168a85f9b2caa14f50a
# Dataset Card for "final_train_v4_test_420000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_420000
[ "region:us" ]
2023-08-19T02:27:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6734425.5, "num_examples": 18000}, {"name": "test", "num_bytes": 748269.5, "num_examples": 2000}], "download_size": 3209513, "dataset_size": 7482695.0}}
2023-08-19T02:27:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_420000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_420000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_420000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_420000\"\n\nMore Information needed" ]
3963a1cd88741e8e349e952325beb9a77cc6067d
# Dataset Card for "final_train_v4_test_440000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_440000
[ "region:us" ]
2023-08-19T02:27:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6780762.0, "num_examples": 18000}, {"name": "test", "num_bytes": 753418.0, "num_examples": 2000}], "download_size": 3237871, "dataset_size": 7534180.0}}
2023-08-19T02:27:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_440000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_440000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_440000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_440000\"\n\nMore Information needed" ]
6294064efc33a5478fbfd16e3a0065b10623fbdb
# Dataset Card for "final_train_v4_test_460000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_460000
[ "region:us" ]
2023-08-19T02:27:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6698375.1, "num_examples": 18000}, {"name": "test", "num_bytes": 744263.9, "num_examples": 2000}], "download_size": 3208239, "dataset_size": 7442639.0}}
2023-08-19T02:27:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_460000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_460000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_460000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_460000\"\n\nMore Information needed" ]
063e84f3233ddaf1bae53fc9715f600b19562b5c
# Dataset Card for "final_train_v4_test_480000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_480000
[ "region:us" ]
2023-08-19T02:27:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6682680.0, "num_examples": 18000}, {"name": "test", "num_bytes": 742520.0, "num_examples": 2000}], "download_size": 3203503, "dataset_size": 7425200.0}}
2023-08-19T02:28:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_480000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_480000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_480000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_480000\"\n\nMore Information needed" ]
1a91ebe624ac2821c2dc74cb17a1321545202286
# Dataset Card for "final_train_v4_test_500000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_500000
[ "region:us" ]
2023-08-19T02:28:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6695806.5, "num_examples": 18000}, {"name": "test", "num_bytes": 743978.5, "num_examples": 2000}], "download_size": 3213893, "dataset_size": 7439785.0}}
2023-08-19T02:28:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_500000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_500000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_500000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_500000\"\n\nMore Information needed" ]
cc2d5c3a6bfb041c30ae33192ed12dc6247e3bad
# Dataset Card for "final_train_v4_test_520000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_520000
[ "region:us" ]
2023-08-19T02:28:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6673807.8, "num_examples": 18000}, {"name": "test", "num_bytes": 741534.2, "num_examples": 2000}], "download_size": 3192450, "dataset_size": 7415342.0}}
2023-08-19T02:28:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_520000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_520000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_520000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_520000\"\n\nMore Information needed" ]
801bcc44dc665e361c763ed1b2eb57a22269b542
# Dataset Card for "final_train_v4_test_540000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_540000
[ "region:us" ]
2023-08-19T02:28:11+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6696897.3, "num_examples": 18000}, {"name": "test", "num_bytes": 744099.7, "num_examples": 2000}], "download_size": 3205086, "dataset_size": 7440997.0}}
2023-08-19T02:28:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_540000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_540000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_540000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_540000\"\n\nMore Information needed" ]
3ff2efc9ac807389eeb689211901165f6b715947
# Dataset Card for "final_train_v4_test_560000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_560000
[ "region:us" ]
2023-08-19T02:28:16+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6709050.0, "num_examples": 18000}, {"name": "test", "num_bytes": 745450.0, "num_examples": 2000}], "download_size": 3206263, "dataset_size": 7454500.0}}
2023-08-19T02:28:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_560000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_560000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_560000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_560000\"\n\nMore Information needed" ]
79bbb9891147eafe19bc4d4aa6c6b51e70e4a340
# Dataset Card for "final_train_v4_test_580000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_580000
[ "region:us" ]
2023-08-19T02:28:21+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6699513.6, "num_examples": 18000}, {"name": "test", "num_bytes": 744390.4, "num_examples": 2000}], "download_size": 3205598, "dataset_size": 7443904.0}}
2023-08-19T02:28:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_580000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_580000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_580000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_580000\"\n\nMore Information needed" ]
2a6191412fe6f7252b67e89f7bc032d5c5490803
# Dataset Card for "final_train_v4_test_600000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_600000
[ "region:us" ]
2023-08-19T02:28:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6744185.1, "num_examples": 18000}, {"name": "test", "num_bytes": 749353.9, "num_examples": 2000}], "download_size": 3205027, "dataset_size": 7493539.0}}
2023-08-19T02:28:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_600000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_600000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_600000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_600000\"\n\nMore Information needed" ]
387c65e5ae31d023c1dbb6efbfda24ec6c11e306
# Dataset Card for "final_train_v4_test_620000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_620000
[ "region:us" ]
2023-08-19T02:28:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6651231.3, "num_examples": 18000}, {"name": "test", "num_bytes": 739025.7, "num_examples": 2000}], "download_size": 3194261, "dataset_size": 7390257.0}}
2023-08-19T02:28:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_620000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_620000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_620000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_620000\"\n\nMore Information needed" ]
2424cdab9b0afc7caa882f7cb609254d552498c8
# Dataset Card for "final_train_v4_test_640000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_640000
[ "region:us" ]
2023-08-19T02:28:33+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6648643.8, "num_examples": 18000}, {"name": "test", "num_bytes": 738738.2, "num_examples": 2000}], "download_size": 3190543, "dataset_size": 7387382.0}}
2023-08-19T02:28:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_640000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_640000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_640000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_640000\"\n\nMore Information needed" ]
1b9a3da7baf0c9f1992716e33b9d4009db610151
# Dataset Card for "final_train_v4_test_660000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_660000
[ "region:us" ]
2023-08-19T02:28:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6717820.5, "num_examples": 18000}, {"name": "test", "num_bytes": 746424.5, "num_examples": 2000}], "download_size": 3226337, "dataset_size": 7464245.0}}
2023-08-19T02:28:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_660000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_660000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_660000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_660000\"\n\nMore Information needed" ]
7f6b26cd5c60b82382149a4d2d925d1f551074c9
# Dataset Card for "final_train_v4_test_680000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_680000
[ "region:us" ]
2023-08-19T02:28:42+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6705421.2, "num_examples": 18000}, {"name": "test", "num_bytes": 745046.8, "num_examples": 2000}], "download_size": 3198100, "dataset_size": 7450468.0}}
2023-08-19T02:28:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_680000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_680000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_680000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_680000\"\n\nMore Information needed" ]
10be44947cd7788062a9d22724d0535942660d52
# Dataset Card for "final_train_v4_test_700000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_700000
[ "region:us" ]
2023-08-19T02:28:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6742140.3, "num_examples": 18000}, {"name": "test", "num_bytes": 749126.7, "num_examples": 2000}], "download_size": 3216875, "dataset_size": 7491267.0}}
2023-08-19T02:28:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_700000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_700000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_700000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_700000\"\n\nMore Information needed" ]
57d9abc043a0d87cca2812c56103019a8fa3b5f9
# Dataset Card for "final_train_v4_test_720000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_720000
[ "region:us" ]
2023-08-19T02:28:52+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6734938.5, "num_examples": 18000}, {"name": "test", "num_bytes": 748326.5, "num_examples": 2000}], "download_size": 3226399, "dataset_size": 7483265.0}}
2023-08-19T02:28:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_720000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_720000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_720000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_720000\"\n\nMore Information needed" ]
0864daad09a509fb06c75a7a0053eeaa967bace8
# Dataset Card for "final_train_v4_test_740000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_740000
[ "region:us" ]
2023-08-19T02:28:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6717019.5, "num_examples": 18000}, {"name": "test", "num_bytes": 746335.5, "num_examples": 2000}], "download_size": 3224246, "dataset_size": 7463355.0}}
2023-08-19T02:28:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_740000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_740000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_740000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_740000\"\n\nMore Information needed" ]
20f1748095a3be1766d18e098797b89d9f3155a5
# Dataset Card for "final_train_v4_test_760000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_760000
[ "region:us" ]
2023-08-19T02:29:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6734172.6, "num_examples": 18000}, {"name": "test", "num_bytes": 748241.4, "num_examples": 2000}], "download_size": 3236773, "dataset_size": 7482414.0}}
2023-08-19T02:29:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_760000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_760000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_760000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_760000\"\n\nMore Information needed" ]
89cd07b9d1a10ec5ca094afdfe27b0c181d56946
# Dataset Card for "final_train_v4_test_780000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_780000
[ "region:us" ]
2023-08-19T02:29:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6683220.9, "num_examples": 18000}, {"name": "test", "num_bytes": 742580.1, "num_examples": 2000}], "download_size": 3207945, "dataset_size": 7425801.0}}
2023-08-19T02:29:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_780000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_780000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_780000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_780000\"\n\nMore Information needed" ]
75ecdfbf4c7cf82f7da72632cbc231660ffc55d4
# Dataset Card for "final_train_v4_test_800000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_800000
[ "region:us" ]
2023-08-19T02:29:09+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6685194.6, "num_examples": 18000}, {"name": "test", "num_bytes": 742799.4, "num_examples": 2000}], "download_size": 3208395, "dataset_size": 7427994.0}}
2023-08-19T02:29:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_800000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_800000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_800000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_800000\"\n\nMore Information needed" ]
1dd2d021caa63bb7d2ae1f76af4c218a9ec20ca4
# Dataset Card for "final_train_v4_test_820000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_820000
[ "region:us" ]
2023-08-19T02:29:14+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6750900.9, "num_examples": 18000}, {"name": "test", "num_bytes": 750100.1, "num_examples": 2000}], "download_size": 3232883, "dataset_size": 7501001.0}}
2023-08-19T02:29:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_820000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_820000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_820000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_820000\"\n\nMore Information needed" ]
8b7894acd447a44dc522ca19087b4998b0e42645
# Dataset Card for "final_train_v4_test_840000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_840000
[ "region:us" ]
2023-08-19T02:29:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7182613.8, "num_examples": 18000}, {"name": "test", "num_bytes": 798068.2, "num_examples": 2000}], "download_size": 3446799, "dataset_size": 7980682.0}}
2023-08-19T02:29:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_840000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_840000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_840000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_840000\"\n\nMore Information needed" ]
042cf8d3bc35e4429288b7c64f3d133c3c62ed0a
# Dataset Card for "final_train_v4_test_860000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_860000
[ "region:us" ]
2023-08-19T02:29:23+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7271267.4, "num_examples": 18000}, {"name": "test", "num_bytes": 807918.6, "num_examples": 2000}], "download_size": 3497291, "dataset_size": 8079186.0}}
2023-08-19T02:29:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_860000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_860000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_860000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_860000\"\n\nMore Information needed" ]
e03f3281c24cd8eb1f8d8c3f38a45039a1742450
# Dataset Card for "final_train_v4_test_880000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_880000
[ "region:us" ]
2023-08-19T02:29:28+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7307297.1, "num_examples": 18000}, {"name": "test", "num_bytes": 811921.9, "num_examples": 2000}], "download_size": 3499994, "dataset_size": 8119219.0}}
2023-08-19T02:29:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_880000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_880000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_880000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_880000\"\n\nMore Information needed" ]
35c180b41bb74e068479d2be0840d6e9acda3e58
# Dataset Card for "final_train_v4_test_900000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_900000
[ "region:us" ]
2023-08-19T02:29:32+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7400834.1, "num_examples": 18000}, {"name": "test", "num_bytes": 822314.9, "num_examples": 2000}], "download_size": 3538671, "dataset_size": 8223149.0}}
2023-08-19T02:29:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_900000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_900000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_900000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_900000\"\n\nMore Information needed" ]
ad0873d57d2b5682dfda7731de26512143ab8e17
# Dataset Card for "final_train_v4_test_920000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_920000
[ "region:us" ]
2023-08-19T02:29:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7524029.7, "num_examples": 18000}, {"name": "test", "num_bytes": 836003.3, "num_examples": 2000}], "download_size": 3597294, "dataset_size": 8360033.0}}
2023-08-19T02:29:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_920000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_920000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_920000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_920000\"\n\nMore Information needed" ]
89644332561785cfb7dc0aa04a4901048d7f06fc
# Dataset Card for "final_train_v4_test_940000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_940000
[ "region:us" ]
2023-08-19T02:29:41+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7533398.7, "num_examples": 18000}, {"name": "test", "num_bytes": 837044.3, "num_examples": 2000}], "download_size": 3605948, "dataset_size": 8370443.0}}
2023-08-19T02:29:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_940000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_940000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_940000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_940000\"\n\nMore Information needed" ]
701095291bae71653527d4b16a627be3096b5c20
# Dataset Card for "final_train_v4_test_960000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_960000
[ "region:us" ]
2023-08-19T02:29:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7439547.6, "num_examples": 18000}, {"name": "test", "num_bytes": 826616.4, "num_examples": 2000}], "download_size": 3558375, "dataset_size": 8266164.0}}
2023-08-19T02:29:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_960000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_960000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_960000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_960000\"\n\nMore Information needed" ]
d9f06566d63829497890fc488a867b6aeb661696
# Dataset Card for "final_train_v4_test_980000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_980000
[ "region:us" ]
2023-08-19T02:29:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7397010.9, "num_examples": 18000}, {"name": "test", "num_bytes": 821890.1, "num_examples": 2000}], "download_size": 3537723, "dataset_size": 8218901.0}}
2023-08-19T02:29:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_980000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_980000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_980000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_980000\"\n\nMore Information needed" ]
eb09209f1dd3f95674f06459bf4121b805605c3b
# Dataset Card for "final_train_v4_test_1000000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1000000
[ "region:us" ]
2023-08-19T02:29:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7463866.5, "num_examples": 18000}, {"name": "test", "num_bytes": 829318.5, "num_examples": 2000}], "download_size": 3566518, "dataset_size": 8293185.0}}
2023-08-19T02:29:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1000000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1000000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1000000\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1000000\"\n\nMore Information needed" ]
c9e9121e08ccf0aa096390c2268d9927724c80fa
# Dataset Card for "final_train_v4_test_1020000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1020000
[ "region:us" ]
2023-08-19T02:30:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7446663.0, "num_examples": 18000}, {"name": "test", "num_bytes": 827407.0, "num_examples": 2000}], "download_size": 3554301, "dataset_size": 8274070.0}}
2023-08-19T02:30:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1020000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1020000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1020000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1020000\"\n\nMore Information needed" ]
3f84649180863dcbcfc3711374e9a6be37a85b4f
# Dataset Card for "final_train_v4_test_1040000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1040000
[ "region:us" ]
2023-08-19T02:30:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7345201.5, "num_examples": 18000}, {"name": "test", "num_bytes": 816133.5, "num_examples": 2000}], "download_size": 3516028, "dataset_size": 8161335.0}}
2023-08-19T02:30:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1040000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1040000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1040000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1040000\"\n\nMore Information needed" ]
f89843e5a7d42563eefd2d248a0e242f3d0d0232
# Dataset Card for `Reddit-Movie-raw`

## Dataset Description

- **Homepage:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Repository:** https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys
- **Paper:** To appear
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset provides the raw text from [Reddit](https://reddit.com) related to movie recommendation conversations. The dataset is extracted from the data dump of [pushshift.io](https://arxiv.org/abs/2001.08435) and is intended for research use only.

### Disclaimer

⚠️ **Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.**

### Folder Structure

We explain our data folder as follows:

```bash
reddit_movie_raw
├── IMDB-database
│   ├── clean.py            # script to obtain clean IMDB movie titles, which can be used for movie name matching if needed
│   ├── movie_clean.tsv     # results after movie title cleaning
│   ├── title.basics.tsv    # original movie title information from IMDB
│   └── title.ratings.tsv   # original movie title and rating information from IMDB
├── Reddit-Movie-large
│   ├── sentences.jsonl     # raw sentences from the subreddit/* data, which can be used for further processing
│   └── subreddit           # raw text from different subreddits from Jan. 2012 to Dec. 2022 (large)
│       ├── bestofnetflix.jsonl
│       ├── movies.jsonl
│       ├── moviesuggestions.jsonl
│       ├── netflixbestof.jsonl
│       └── truefilm.jsonl
└── Reddit-Movie-small
    ├── sentences.jsonl     # raw sentences from the subreddit/* data, which can be used for further processing
    └── subreddit           # raw text from different subreddits from Jan. 2022 to Dec. 2022 (small)
        ├── bestofnetflix.jsonl
        ├── movies.jsonl
        ├── moviesuggestions.jsonl
        ├── netflixbestof.jsonl
        └── truefilm.jsonl
```

### Data Processing

We also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1](). Join us if you want to improve the processing quality as well!

### Citation Information

Please cite these two papers if you use this raw data, thanks!

```bib
@inproceedings{baumgartner2020pushshift,
  title={The pushshift reddit dataset},
  author={Baumgartner, Jason and Zannettou, Savvas and Keegan, Brian and Squire, Megan and Blackburn, Jeremy},
  booktitle={Proceedings of the international AAAI conference on web and social media},
  volume={14},
  pages={830--839},
  year={2020}
}
```

```bib
@inproceedings{he23large,
  title = "Large language models as zero-shot conversational recommenders",
  author = "Zhankui He and Zhouhang Xie and Rahul Jha and Harald Steck and Dawen Liang and Yesu Feng and Bodhisattwa Majumder and Nathan Kallus and Julian McAuley",
  year = "2023",
  booktitle = "CIKM"
}
```

Please contact [Zhankui He](https://aaronheee.github.io) if you have any questions or suggestions.
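Each `*.jsonl` file above is newline-delimited JSON, one record per line. A minimal reading sketch (not part of the official release; the in-repo file path is assumed from the folder tree above, and the record fields are unspecified here, so inspect the keys before relying on them):

```python
import json

from huggingface_hub import hf_hub_download

# download one JSONL file from this dataset repo
# (the path 'Reddit-Movie-small/sentences.jsonl' is assumed from the folder tree)
path = hf_hub_download(
    repo_id='ZhankuiHe/reddit_movie_raw',
    repo_type='dataset',
    filename='Reddit-Movie-small/sentences.jsonl',
)

# each line of a .jsonl file is one standalone JSON record
with open(path, 'r', encoding='utf-8') as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(record.keys())  # inspect the schema first
        if i >= 4:
            break
```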
ZhankuiHe/reddit_movie_raw
[ "task_categories:conversational", "language:en", "recommendation", "arxiv:2001.08435", "region:us" ]
2023-08-19T02:30:06+00:00
{"language": ["en"], "task_categories": ["conversational"], "tags": ["recommendation"], "viewer": false}
2023-08-19T02:53:31+00:00
[ "2001.08435" ]
[ "en" ]
TAGS #task_categories-conversational #language-English #recommendation #arxiv-2001.08435 #region-us
# Dataset Card for 'Reddit-Movie-raw' ## Dataset Description - Homepage: URL - Repository: URL - Paper: To appear - Point of Contact: zhh004@URL ### Dataset Summary This dataset provides the raw text from Reddit related to movie recommendation conversations. The dataset is extracted from the data dump of URL and only for research use. ### Disclaimer ️ Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information. ### Folder Structure We explain our data folder as follows: ### Data Processing We also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1](). Join us if you want to improve the processing quality as well! Please cite these two papers if you used this raw data, thanks! Please contact Zhankui He if you have any questions or suggestions.
[ "# Dataset Card for 'Reddit-Movie-raw'", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: To appear\n- Point of Contact: zhh004@URL", "### Dataset Summary\n\nThis dataset provides the raw text from Reddit related to movie recommendation conversations. \nThe dataset is extracted from the data dump of URL and only for research use.", "### Disclaimer\n\n️ Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.", "### Folder Structure\n\nWe explain our data folder as follows:", "### Data Processing\n\nWe also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1]().\nJoin us if you want to improve the processing quality as well!\n\n\n\nPlease cite these two papers if you used this raw data, thanks!\n\n\n\n\n\nPlease contact Zhankui He if you have any questions or suggestions." ]
[ "TAGS\n#task_categories-conversational #language-English #recommendation #arxiv-2001.08435 #region-us \n", "# Dataset Card for 'Reddit-Movie-raw'", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: To appear\n- Point of Contact: zhh004@URL", "### Dataset Summary\n\nThis dataset provides the raw text from Reddit related to movie recommendation conversations. \nThe dataset is extracted from the data dump of URL and only for research use.", "### Disclaimer\n\n️ Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.", "### Folder Structure\n\nWe explain our data folder as follows:", "### Data Processing\n\nWe also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1]().\nJoin us if you want to improve the processing quality as well!\n\n\n\nPlease cite these two papers if you used this raw data, thanks!\n\n\n\n\n\nPlease contact Zhankui He if you have any questions or suggestions." ]
[ 34, 13, 29, 42, 52, 16, 90 ]
[ "passage: TAGS\n#task_categories-conversational #language-English #recommendation #arxiv-2001.08435 #region-us \n# Dataset Card for 'Reddit-Movie-raw'## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: To appear\n- Point of Contact: zhh004@URL### Dataset Summary\n\nThis dataset provides the raw text from Reddit related to movie recommendation conversations. \nThe dataset is extracted from the data dump of URL and only for research use.### Disclaimer\n\n️ Please note that conversations processed from Reddit raw data may include content that is not entirely conducive to a positive experience (e.g., toxic speech). Exercise caution and discretion when utilizing this information.### Folder Structure\n\nWe explain our data folder as follows:### Data Processing\n\nWe also provide first-version processed Reddit-Movie datasets as [Reddit-Movie-small-V1]() and [Reddit-Movie-large-V1]().\nJoin us if you want to improve the processing quality as well!\n\n\n\nPlease cite these two papers if you used this raw data, thanks!\n\n\n\n\n\nPlease contact Zhankui He if you have any questions or suggestions." ]
babd3b29f60dec2130905b4705a037e5376dbdb6
# Dataset Card for "final_train_v4_test_1060000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1060000
[ "region:us" ]
2023-08-19T02:30:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7384662.0, "num_examples": 18000}, {"name": "test", "num_bytes": 820518.0, "num_examples": 2000}], "download_size": 3543931, "dataset_size": 8205180.0}}
2023-08-19T02:30:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1060000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1060000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1060000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1060000\"\n\nMore Information needed" ]
f198c0523e813b44cad67895469f8809c4639ff7
# Dataset Card for "final_train_v4_test_1080000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1080000
[ "region:us" ]
2023-08-19T02:30:15+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7366470.3, "num_examples": 18000}, {"name": "test", "num_bytes": 818496.7, "num_examples": 2000}], "download_size": 3526599, "dataset_size": 8184967.0}}
2023-08-19T02:30:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1080000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1080000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1080000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1080000\"\n\nMore Information needed" ]
ea8c46253a547915c2f3de6acddbed61d1eed710
# Dataset Card for "final_train_v4_test_1100000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1100000
[ "region:us" ]
2023-08-19T02:30:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7290159.3, "num_examples": 18000}, {"name": "test", "num_bytes": 810017.7, "num_examples": 2000}], "download_size": 3489433, "dataset_size": 8100177.0}}
2023-08-19T02:30:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1100000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1100000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1100000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1100000\"\n\nMore Information needed" ]
c9ff65f10a1f8897832fc2ec8728ecdad939354a
# Dataset Card for "final_train_v4_test_1120000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1120000
[ "region:us" ]
2023-08-19T02:30:24+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7308363.6, "num_examples": 18000}, {"name": "test", "num_bytes": 812040.4, "num_examples": 2000}], "download_size": 3492386, "dataset_size": 8120404.0}}
2023-08-19T02:30:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1120000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1120000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1120000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1120000\"\n\nMore Information needed" ]
e5d00d07b74a6103462e71c39fceda60f86789fc
# Dataset Card for "final_train_v4_test_1140000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1140000
[ "region:us" ]
2023-08-19T02:30:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7315927.2, "num_examples": 18000}, {"name": "test", "num_bytes": 812880.8, "num_examples": 2000}], "download_size": 3505075, "dataset_size": 8128808.0}}
2023-08-19T02:30:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1140000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1140000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1140000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1140000\"\n\nMore Information needed" ]
e5520c33a28483a173758598490dd9a230c7d202
# Dataset Card for "final_train_v4_test_1160000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/final_train_v4_test_1160000
[ "region:us" ]
2023-08-19T02:32:53+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "prefix", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7153427.7, "num_examples": 18000}, {"name": "test", "num_bytes": 794825.3, "num_examples": 2000}], "download_size": 3422745, "dataset_size": 7948253.0}}
2023-08-19T02:32:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "final_train_v4_test_1160000" More Information needed
[ "# Dataset Card for \"final_train_v4_test_1160000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"final_train_v4_test_1160000\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"final_train_v4_test_1160000\"\n\nMore Information needed" ]
e29bc1911d10a98ac7a001014987d7803b12c67f
# Open NER (English)

This is the processed version of [Universal-NER/Pile-NER-type](https://huggingface.co/datasets/Universal-NER/Pile-NER-type).
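A minimal loading sketch; the split names and the `text`/`entities` feature layout below are taken from this repo's declared metadata rather than from the card itself:

```python
from datasets import load_dataset

# splits per the repo metadata: 'train' and 'validation'
ds = load_dataset('yongsun-yoon/open-ner-english')

sample = ds['train'][0]
print(sample['text'][:200])

# 'entities' is a list of {'entity_mentions': [...], 'entity_type': ...} records
for entity in sample['entities']:
    print(entity['entity_type'], '->', entity['entity_mentions'])
```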
yongsun-yoon/open-ner-english
[ "region:us" ]
2023-08-19T02:40:23+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entities", "list": [{"name": "entity_mentions", "sequence": "string"}, {"name": "entity_type", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 51881074.954063065, "num_examples": 36711}, {"name": "validation", "num_bytes": 12970622.045936935, "num_examples": 9178}], "download_size": 40944137, "dataset_size": 64851697.0}}
2023-08-19T02:41:49+00:00
[]
[]
TAGS #region-us
# Open NER (English)" This is the processed version of Universal-NER/Pile-NER-type.
[ "# Open NER (English)\"\n\nThis is the processed version of Universal-NER/Pile-NER-type." ]
[ "TAGS\n#region-us \n", "# Open NER (English)\"\n\nThis is the processed version of Universal-NER/Pile-NER-type." ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Open NER (English)\"\n\nThis is the processed version of Universal-NER/Pile-NER-type." ]
1de793f7af0ada51d26a123a89614a5cb45ad336
### Dataset Card

This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main

This subset contains only the highest-rated (quality >= 0.75) English conversation paths of at most 1 turn (rank 1 / top-k = 2), which means the longest exchange a conversation has is Human - Assistant - Human - Assistant, for a total of 4355 samples.

If you want all the turns, you can refer to https://huggingface.co/datasets/timdettmers/openassistant-guanaco

For further information, please see the original dataset.

License: Apache 2.0
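The exact path-extraction logic behind this subset is not published here; as a rough sketch of the message-level part of the filter (assuming the `oasst1` schema, where `labels` holds parallel `name`/`value` lists), one could do something like:

```python
from datasets import load_dataset

# load the full Open Assistant release this subset was derived from
oasst = load_dataset('OpenAssistant/oasst1', split='train')

def quality(message):
    """Return the human-rated 'quality' score of a message, or None.

    Assumes the oasst1 schema, where `labels` is a struct of parallel
    `name`/`value` lists; verify this assumption against the data.
    """
    labels = message.get('labels') or {}
    for name, value in zip(labels.get('name') or [], labels.get('value') or []):
        if name == 'quality':
            return value
    return None

# keep English messages rated quality >= 0.75 (the card's threshold);
# reassembling conversation paths of at most 1 turn would additionally
# require walking message_id/parent_id links, which is omitted here
kept = [m for m in oasst if m['lang'] == 'en' and (quality(m) or 0) >= 0.75]
print(len(kept))
```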
shockroborty/oasst_best_k2_en
[ "region:us" ]
2023-08-19T02:49:23+00:00
{}
2023-08-20T13:09:15+00:00
[]
[]
TAGS #region-us
### Dataset Card This dataset is a subset of the Open Assistant dataset, which you can find here: URL This subset of the data only contains the highest-rated (quality >= 0.75) english conversation paths of maximum 1 turn - rank 1/top-k=2 - which means the maximum loop one conversation has is Human - Assistant - Human - Assistant, with a total of 4355 samples. If you want all the turns, you can refer to URL For further information, please see the original dataset. License: Apache 2.0
[ "### Dataset Card\n\nThis dataset is a subset of the Open Assistant dataset, which you can find here: URL\n\nThis subset of the data only contains the highest-rated (quality >= 0.75) english conversation paths of maximum 1 turn - rank 1/top-k=2 - which means the maximum loop one conversation has is Human - Assistant - Human - Assistant, with a total of 4355 samples.\n\nIf you want all the turns, you can refer to URL\n\nFor further information, please see the original dataset.\n\nLicense: Apache 2.0" ]
[ "TAGS\n#region-us \n", "### Dataset Card\n\nThis dataset is a subset of the Open Assistant dataset, which you can find here: URL\n\nThis subset of the data only contains the highest-rated (quality >= 0.75) english conversation paths of maximum 1 turn - rank 1/top-k=2 - which means the maximum loop one conversation has is Human - Assistant - Human - Assistant, with a total of 4355 samples.\n\nIf you want all the turns, you can refer to URL\n\nFor further information, please see the original dataset.\n\nLicense: Apache 2.0" ]
[ 6, 119 ]
[ "passage: TAGS\n#region-us \n### Dataset Card\n\nThis dataset is a subset of the Open Assistant dataset, which you can find here: URL\n\nThis subset of the data only contains the highest-rated (quality >= 0.75) english conversation paths of maximum 1 turn - rank 1/top-k=2 - which means the maximum loop one conversation has is Human - Assistant - Human - Assistant, with a total of 4355 samples.\n\nIf you want all the turns, you can refer to URL\n\nFor further information, please see the original dataset.\n\nLicense: Apache 2.0" ]
8fda5c6a4f8460da41f2b7af4fe5e859fc27cb5a
# Dataset of ushizaki_urumi/牛崎潤美 (Touhou) This is the dataset of ushizaki_urumi/牛崎潤美 (Touhou), containing 444 images and their tags. The core tags of this character are `horns, multicolored_hair, two-tone_hair, black_hair, cow_horns, animal_ears, split-color_hair, cow_ears, red_horns, red_eyes, breasts, grey_hair, cow_girl, short_hair, tail, cow_tail, large_breasts, red_tail, white_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 444 | 520.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ushizaki_urumi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 444 | 295.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ushizaki_urumi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1068 | 645.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ushizaki_urumi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 444 | 458.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ushizaki_urumi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1068 | 917.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ushizaki_urumi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ushizaki_urumi_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, cleavage, collarbone, cow_print, crop_top, haori, long_sleeves, smile, solo, wide_sleeves, frills, holding, looking_at_viewer, midriff, navel, statue, yellow_shorts, medium_breasts, simple_background, white_background, open_mouth | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, cow_print, crop_top, frilled_shorts, full_body, haori, midriff, navel, sandals, solo, yellow_shorts, yellow_tank_top, holding, long_sleeves, looking_at_viewer, red_footwear, smile, collarbone, simple_background, statue, white_background, cleavage, medium_breasts, open_mouth, standing | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, collarbone, cow_print, haori, solo, upper_body, cleavage, looking_at_viewer, simple_background, bare_shoulders, smile, crop_top, white_background, closed_mouth, sports_bra | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, bikini, huge_breasts, solo, cleavage, simple_background, navel, white_background, blush, cow_print, looking_at_viewer, open_mouth, smile, yellow_tank_top | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1boy, 1girl, blush, hetero, solo_focus, cum_on_breasts, looking_at_viewer, nipples, paizuri, gigantic_breasts, medium_hair, open_mouth, penis, pov, simple_background, smile, sweat, upper_body, :q, bangs, censored, completely_nude, ejaculation, facial, haori, heart, huge_breasts, lactation, symbol-shaped_pupils, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | collarbone | cow_print | crop_top | haori | long_sleeves | smile | solo | wide_sleeves | frills | holding | looking_at_viewer | midriff | navel | statue | yellow_shorts | medium_breasts | simple_background | white_background | open_mouth | frilled_shorts | full_body | sandals | yellow_tank_top | red_footwear | standing | upper_body | bare_shoulders | closed_mouth | sports_bra | bikini | huge_breasts | blush | 1boy | hetero | solo_focus | cum_on_breasts | nipples | paizuri | gigantic_breasts | medium_hair | penis | pov | sweat | :q | bangs | censored | completely_nude | ejaculation | facial | heart | lactation | symbol-shaped_pupils | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-------------|:------------|:-----------|:--------|:---------------|:--------|:-------|:---------------|:---------|:----------|:--------------------|:----------|:--------|:---------|:----------------|:-----------------|:--------------------|:-------------------|:-------------|:-----------------|:------------|:----------|:------------------|:---------------|:-----------|:-------------|:-----------------|:---------------|:-------------|:---------|:---------------|:--------|:-------|:---------|:-------------|:-----------------|:----------|:----------|:-------------------|:--------------|:--------|:------|:--------|:-----|:--------|:-----------|:------------------|:--------------|:---------|:--------|:------------|:-----------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | | X | X | | | | X | | | | | | X | X | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | X | | | | X | X | | | | X | | X | | | | X | X | X | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | | | X | | X | | | | | X | | | | | | X | X | X | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/ushizaki_urumi_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T03:21:17+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T03:52:20+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ushizaki\_urumi/牛崎潤美 (Touhou) ======================================== This is the dataset of ushizaki\_urumi/牛崎潤美 (Touhou), containing 444 images and their tags. The core tags of this character are 'horns, multicolored\_hair, two-tone\_hair, black\_hair, cow\_horns, animal\_ears, split-color\_hair, cow\_ears, red\_horns, red\_eyes, breasts, grey\_hair, cow\_girl, short\_hair, tail, cow\_tail, large\_breasts, red\_tail, white\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
123695c5194573094d4d27b0fb80a225cf7acb98
# Dataset Card for "Third" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JorangHorse/Third
[ "region:us" ]
2023-08-19T03:23:47+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1213654.0, "num_examples": 2}], "download_size": 623252, "dataset_size": 1213654.0}}
2023-08-19T03:35:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Third" More Information needed
[ "# Dataset Card for \"Third\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Third\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Third\"\n\nMore Information needed" ]
eeed99549874dcb87aa5808872049b48f45223f3
# Dataset Card for "generate_sub_0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jing24/generate_sub_0
[ "region:us" ]
2023-08-19T04:03:50+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 71556466, "num_examples": 78391}], "download_size": 12827716, "dataset_size": 71556466}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-19T04:03:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "generate_sub_0" More Information needed
[ "# Dataset Card for \"generate_sub_0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"generate_sub_0\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"generate_sub_0\"\n\nMore Information needed" ]
326d250362a9abda4c4413f62da7925fbab15f86
# Dataset of joutougu_mayumi (Touhou) This is the dataset of joutougu_mayumi (Touhou), containing 500 images and their tags. The core tags of this character are `blonde_hair, ribbon, hair_bun, double_bun, hair_ribbon, bangs, short_hair, white_ribbon, yellow_eyes, blunt_bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 516.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joutougu_mayumi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 329.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joutougu_mayumi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1077 | 676.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joutougu_mayumi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 471.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joutougu_mayumi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1077 | 921.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/joutougu_mayumi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/joutougu_mayumi_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, armored_dress, closed_mouth, haniwa_(statue), puffy_short_sleeves, solo, upper_body, vambraces, white_shirt, yellow_dress, looking_at_viewer, holding, red_ribbon | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, armored_dress, haniwa_(statue), open_mouth, puffy_short_sleeves, solo, upper_body, vambraces, white_shirt, yellow_dress, holding, looking_at_viewer, simple_background, smile, hands_up, white_background | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, armored_dress, full_body, haniwa_(statue), looking_at_viewer, puffy_short_sleeves, solo, vambraces, yellow_dress, bloomers, boots, white_shirt, holding, simple_background, standing, white_background, black_footwear, blue_footwear, closed_mouth, red_ribbon | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, armored_dress, haniwa_(statue), puffy_short_sleeves, solo, vambraces, yellow_dress, open_mouth, bloomers, one-hour_drawing_challenge | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, armored_dress, holding_sword, open_mouth, puffy_short_sleeves, solo, vambraces, haniwa_(statue), simple_background, white_shirt, yellow_dress, looking_to_the_side, cowboy_shot, sheath, white_background | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, dress_bow, haniwa_(statue), puffy_short_sleeves, red_bow, solo, yellow_dress, red_ribbon, simple_background, white_shirt, looking_at_viewer, standing, arm_ribbon, breasts, open_mouth, purple_belt, white_background, closed_mouth, hands_up, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | armored_dress | closed_mouth | haniwa_(statue) | puffy_short_sleeves | solo | upper_body | vambraces | white_shirt | yellow_dress | looking_at_viewer | holding | red_ribbon | open_mouth | simple_background | smile | hands_up | white_background | full_body | bloomers | boots | standing | black_footwear | blue_footwear | one-hour_drawing_challenge | holding_sword | looking_to_the_side | cowboy_shot | sheath | blush | dress_bow | red_bow | arm_ribbon | breasts | purple_belt | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:---------------|:------------------|:----------------------|:-------|:-------------|:------------|:--------------|:---------------|:--------------------|:----------|:-------------|:-------------|:--------------------|:--------|:-----------|:-------------------|:------------|:-----------|:--------|:-----------|:-----------------|:----------------|:-----------------------------|:----------------|:----------------------|:--------------|:---------|:--------|:------------|:----------|:-------------|:----------|:--------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | | X | X | X | X | X | X | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | X | X | X | | X | | X | | | | X | | | | | | X | | | | | X | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | X | X | | X | X | X | | | | X | X | | | X | | | | | | | | X | X | X | X | | | | | | | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | X | X | X | | X | X | X | | X | X | X | | X | X | | | | X | | | | | | | | X | X | X | X | X | X |
CyberHarem/joutougu_mayumi_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T04:15:59+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T08:07:17+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of joutougu\_mayumi (Touhou) ==================================== This is the dataset of joutougu\_mayumi (Touhou), containing 500 images and their tags. The core tags of this character are 'blonde\_hair, ribbon, hair\_bun, double\_bun, hair\_ribbon, bangs, short\_hair, white\_ribbon, yellow\_eyes, blunt\_bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
0cc6a130d055e3de1820cf2bcb8cd8353b16c71c
# Dataset of merlin_prismriver/メルラン・プリズムリバー/메를랑프리즘리버 (Touhou) This is the dataset of merlin_prismriver/メルラン・プリズムリバー/메를랑프리즘리버 (Touhou), containing 37 images and their tags. The core tags of this character are `hat, short_hair, blue_eyes, blue_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 37 | 35.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/merlin_prismriver_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 37 | 26.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/merlin_prismriver_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 81 | 50.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/merlin_prismriver_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 37 | 34.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/merlin_prismriver_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 81 | 62.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/merlin_prismriver_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/merlin_prismriver_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, open_mouth, trumpet, blush, skirt | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, pink_headwear, pink_shirt, smile, solo, pink_skirt, trumpet, bangs, blush, looking_at_viewer, open_mouth, frills, juliet_sleeves, shoes, socks, eighth_note, full_body, hair_between_eyes, one_eye_closed, pink_vest, simple_background, standing, sun_symbol, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | open_mouth | trumpet | blush | skirt | pink_headwear | pink_shirt | pink_skirt | bangs | looking_at_viewer | frills | juliet_sleeves | shoes | socks | eighth_note | full_body | hair_between_eyes | one_eye_closed | pink_vest | simple_background | standing | sun_symbol | white_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-------------|:----------|:--------|:--------|:----------------|:-------------|:-------------|:--------|:--------------------|:---------|:-----------------|:--------|:--------|:--------------|:------------|:--------------------|:-----------------|:------------|:--------------------|:-----------|:-------------|:-------------------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/merlin_prismriver_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T04:27:26+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T01:24:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of merlin\_prismriver/メルラン・プリズムリバー/메를랑프리즘리버 (Touhou) ============================================================ This is the dataset of merlin\_prismriver/メルラン・プリズムリバー/메를랑프리즘리버 (Touhou), containing 37 images and their tags. The core tags of this character are 'hat, short\_hair, blue\_eyes, blue\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
b9c4c462167bd17dbabf75e9e355fd3e7c1aa5cb
# Dataset of yumeko/夢子 (Touhou) This is the dataset of yumeko/夢子 (Touhou), containing 184 images and their tags. The core tags of this character are `blonde_hair, maid_headdress, yellow_eyes, long_hair, ribbon, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 184 | 195.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumeko_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 184 | 120.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumeko_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 341 | 219.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumeko_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 184 | 173.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumeko_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 341 | 291.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yumeko_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/yumeko_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 2girls, maid_apron, puffy_short_sleeves, red_dress, blush, white_apron, black_ribbon, frilled_apron, ribbon_trim | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bangs, black_ribbon, closed_mouth, maid_apron, puffy_short_sleeves, smile, solo, white_apron, back_bow, frilled_apron, looking_at_viewer, waist_apron, blush, collared_dress, medium_breasts, neck_ribbon | | 2 | 20 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, red_dress, solo, holding_sword, maid_apron, white_apron, puffy_short_sleeves, frilled_apron, bangs, closed_mouth, frilled_dress, medium_breasts, black_ribbon, neck_ribbon | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, apron, maid, solo, dress, sword | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 2girls | maid_apron | puffy_short_sleeves | red_dress | blush | white_apron | black_ribbon | frilled_apron | ribbon_trim | 1girl | bangs | closed_mouth | smile | solo | back_bow | looking_at_viewer | waist_apron | collared_dress | medium_breasts | neck_ribbon | holding_sword | frilled_dress | apron | maid | dress | sword | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------|:-------------|:----------------------|:------------|:--------|:--------------|:---------------|:----------------|:--------------|:--------|:--------|:---------------|:--------|:-------|:-----------|:--------------------|:--------------|:-----------------|:-----------------|:--------------|:----------------|:----------------|:--------|:-------|:--------|:--------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | X | | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 2 | 20 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | X | X | | X | X | X | | X | X | X | | X | | X | | | X | X | X | X | | | | | | 3 | 7 | 
![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | | | | | | | | | X | | | | X | | | | | | | | | X | X | X | X |
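Beyond the raw package, the pre-processed IMG+TXT packages listed above can be consumed without waifuc. A minimal sketch, assuming each image in `dataset-800.zip` ships with a same-stem `.txt` file of comma-separated tags (the archive layout is not documented here, so adjust the pairing if it differs):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the pre-processed 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/yumeko_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its same-stem .txt tag file (assumed layout)
image_exts = {'.png', '.jpg', '.jpeg', '.webp'}
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in image_exts:
        continue
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if not os.path.exists(txt_path):
        continue
    with open(txt_path, 'r', encoding='utf-8') as f:
        tags = [t.strip() for t in f.read().split(',')]
    print(name, tags[:5])
```

The same pattern applies to the 1200px and stage3 packages; only the filename changes.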
CyberHarem/yumeko_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T04:55:05+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T04:33:52+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of yumeko/夢子 (Touhou) ============================= This is the dataset of yumeko/夢子 (Touhou), containing 184 images and their tags. The core tags of this character are 'blonde\_hair, maid\_headdress, yellow\_eyes, long\_hair, ribbon, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
cfe18cf3c52edeb7c39b5c8cc23e2bdc17a2c2c9
mc4, but in an HPC-friendly parquet format (32 GiB shards). Attribution, license, and copyright info: [Google](https://www.tensorflow.org/datasets/catalog/c4) and [AI^2](https://huggingface.co/datasets/allenai/c4) for producing and uploading the original data.
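Given the shard size, streaming is usually preferable to materializing the full split locally. A minimal sketch with the `datasets` library, assuming the shards resolve through the `ko` config and column names advertised in the repo metadata:

```python
from datasets import load_dataset

# stream the Korean split shard-by-shard instead of downloading
# the full ~151 GB of decompressed data up front
ds = load_dataset('duckaiml/mc4_310', 'ko', split='train', streaming=True)

# peek at the first few rows; 'id' and 'text' come from the repo schema
for i, row in enumerate(ds):
    print(row['id'], row['text'][:80].replace('\n', ' '))
    if i >= 2:
        break
```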
duckaiml/mc4_310
[ "license:other", "region:us" ]
2023-08-19T05:09:57+00:00
{"license": "other", "dataset_info": {"config_name": "ko", "features": [{"name": "source", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "added", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[s]"}, {"name": "metadata", "struct": [{"name": "url", "dtype": "string"}]}, {"name": "lang", "struct": [{"name": "ko.tfrecord", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 151177516676, "num_examples": 24035493}], "download_size": 16185376673, "dataset_size": 151177516676}, "configs": [{"config_name": "ko", "data_files": [{"split": "train", "path": "ko/train-*"}]}]}
2023-08-19T21:29:41+00:00
[]
[]
TAGS #license-other #region-us
mc4, but in an HPC-friendly parquet format (32 GiB shards). Attribution, license, and copyright info: Google and AI^2 for producing and uploading the original data.
[]
[ "TAGS\n#license-other #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-other #region-us \n" ]
98269a4865042e0e74756c066694cd8c0fb77535
# Dataset of kurumi/くるみ (Touhou) This is the dataset of kurumi/くるみ (Touhou), containing 133 images and their tags. The core tags of this character are `blonde_hair, long_hair, wings, bow, bat_wings, yellow_eyes, ribbon, purple_wings, hair_ribbon, bangs, very_long_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 133 | 117.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurumi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 133 | 78.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurumi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 239 | 144.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurumi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 133 | 108.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurumi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 239 | 187.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurumi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kurumi_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 29 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | suspender_skirt, 1girl, long_sleeves, white_shirt, solo, red_bowtie, black_skirt, center_frills, smile, looking_at_viewer, white_ribbon, blush, frilled_skirt, demon_wings, hair_bow, white_bow, closed_mouth, open_mouth, shoes | | 1 | 18 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, skirt, solo, suspenders, smile, open_mouth | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | suspender_skirt | 1girl | long_sleeves | white_shirt | solo | red_bowtie | black_skirt | center_frills | smile | looking_at_viewer | white_ribbon | blush | frilled_skirt | demon_wings | hair_bow | white_bow | closed_mouth | open_mouth | shoes | skirt | suspenders | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------|:--------|:---------------|:--------------|:-------|:-------------|:--------------|:----------------|:--------|:--------------------|:---------------|:--------|:----------------|:--------------|:-----------|:------------|:---------------|:-------------|:--------|:--------|:-------------| | 0 | 29 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | 1 | 18 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | | | X | | | | X | | | | | | | | | X | | X | X |
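The cluster tables above hint at tag combinations worth filtering on. A small sketch building on the loader from the card, assuming `item.meta['tags']` exposes tag names either as dict keys or as a plain list (the exact shape depends on the waifuc version):

```python
from waifuc.source import LocalSource

# filter the extracted raw dataset down to one outfit;
# 'dataset_dir' is the directory extracted in the card's snippet
source = LocalSource('dataset_dir')

wanted = {'suspender_skirt', 'red_bowtie'}  # cluster 0 above
for item in source:
    tags = item.meta.get('tags') or {}
    # tags may be a dict (tag -> score) or a plain list of names
    names = set(tags.keys() if isinstance(tags, dict) else tags)
    if wanted <= names:
        print(item.meta['filename'])
```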
CyberHarem/kurumi_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T05:15:40+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T03:20:42+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kurumi/くるみ (Touhou) ============================== This is the dataset of kurumi/くるみ (Touhou), containing 133 images and their tags. The core tags of this character are 'blonde\_hair, long\_hair, wings, bow, bat\_wings, yellow\_eyes, ribbon, purple\_wings, hair\_ribbon, bangs, very\_long\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
df29de695081fddcec15106a151fc2272ed12682
# Dataset Card for "generate_sub_1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jing24/generate_sub_1
[ "region:us" ]
2023-08-19T05:39:48+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 63954468, "num_examples": 70370}], "download_size": 11445492, "dataset_size": 63954468}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-19T05:39:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "generate_sub_1" More Information needed
[ "# Dataset Card for \"generate_sub_1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"generate_sub_1\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"generate_sub_1\"\n\nMore Information needed" ]
2e7a026cd19c2b3255be628696280ac8a3a44b97
# Dataset Card for "semantic-try2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Michael823/semantic-try2
[ "region:us" ]
2023-08-19T05:42:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3347017.0, "num_examples": 10}, {"name": "validation", "num_bytes": 834103.0, "num_examples": 3}], "download_size": 4200704, "dataset_size": 4181120.0}}
2023-08-19T05:45:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "semantic-try2" More Information needed
[ "# Dataset Card for \"semantic-try2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"semantic-try2\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"semantic-try2\"\n\nMore Information needed" ]
2ad1406cbd17b61b942bc08bb3580290005abcc0
# Dataset Card for "audio-emotions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
antonjaragon/audio-emotions
[ "region:us" ]
2023-08-19T05:55:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "paths", "dtype": "string"}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 624270.4512534819, "num_examples": 9764}, {"name": "test", "num_bytes": 156131.5487465181, "num_examples": 2442}], "download_size": 167160, "dataset_size": 780402.0}}
2023-08-19T16:19:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "audio-emotions" More Information needed
[ "# Dataset Card for \"audio-emotions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"audio-emotions\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"audio-emotions\"\n\nMore Information needed" ]
03691112dc1ffe5a588e38d20f4b79ac0f2e0b3e
# Dataset of ringo/鈴瑚/링고 (Touhou) This is the dataset of ringo/鈴瑚/링고 (Touhou), containing 500 images and their tags. The core tags of this character are `animal_ears, rabbit_ears, blonde_hair, short_hair, hat, floppy_ears, red_eyes, flat_cap, brown_headwear, breasts, cabbie_hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 437.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ringo_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 296.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ringo_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1096 | 601.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ringo_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 406.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ringo_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1096 | 782.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ringo_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ringo_touhou', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, dango, orange_shirt, solo, midriff, short_sleeves, shorts, skewer, eating, looking_at_viewer, navel, barefoot, smile | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, dango, holding_food, orange_shirt, short_sleeves, solo, striped_shorts, yellow_shorts, closed_mouth, midriff, navel, simple_background, eating, white_background, bangs, vertical_stripes, :t, barefoot, blush_stickers, frills, full_body, medium_breasts, crop_top, one-hour_drawing_challenge, yellow_shirt | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, orange_shirt, solo, upper_body, open_mouth, short_sleeves, looking_at_viewer, simple_background, bangs, smile, collarbone | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dango | orange_shirt | solo | midriff | short_sleeves | shorts | skewer | eating | looking_at_viewer | navel | barefoot | smile | holding_food | striped_shorts | yellow_shorts | closed_mouth | simple_background | white_background | bangs | vertical_stripes | :t | blush_stickers | frills | full_body | medium_breasts | crop_top | one-hour_drawing_challenge | yellow_shirt | upper_body | open_mouth | collarbone | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------------|:-------|:----------|:----------------|:---------|:---------|:---------|:--------------------|:--------|:-----------|:--------|:---------------|:-----------------|:----------------|:---------------|:--------------------|:-------------------|:--------|:-------------------|:-----|:-----------------|:---------|:------------|:-----------------|:-----------|:-----------------------------|:---------------|:-------------|:-------------|:-------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | 
![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | | X | | | | X | | | X | | | | | X | | X | | | | | | | | | | X | X | X |
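One way to do the outfit mining suggested above is a simple tag co-occurrence count around a seed tag. A sketch reusing the waifuc loader, with the same caveat that `meta['tags']` may be a dict or a list depending on version:

```python
from collections import Counter

from waifuc.source import LocalSource

# crude outfit mining: count which tags co-occur with a seed tag;
# 'dataset_dir' is the directory extracted in the card's snippet
source = LocalSource('dataset_dir')
seed = 'orange_shirt'  # taken from cluster 0 above
co_counts = Counter()

for item in source:
    tags = item.meta.get('tags') or {}
    names = set(tags.keys() if isinstance(tags, dict) else tags)
    if seed in names:
        co_counts.update(names - {seed})

for tag, n in co_counts.most_common(10):
    print(f'{tag}: {n}')
```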
CyberHarem/ringo_touhou
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-19T06:16:31+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T01:09:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ringo/鈴瑚/링고 (Touhou) =============================== This is the dataset of ringo/鈴瑚/링고 (Touhou), containing 500 images and their tags. The core tags of this character are 'animal\_ears, rabbit\_ears, blonde\_hair, short\_hair, hat, floppy\_ears, red\_eyes, flat\_cap, brown\_headwear, breasts, cabbie\_hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]