| Column | Type | Min length | Max length |
|:----------------|:-------|-----------:|-----------:|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
9e6ecb9efed66d925a4c1e93e6d63df05a88e21d
9e6ecb9efed66d925a4c1e93e6d63df05a88e21d

# Dataset of fuyutsuki/冬月/후유츠키 (Kantai Collection)

This is the dataset of fuyutsuki/冬月/후유츠키 (Kantai Collection), containing 315 images and their tags. The core tags of this character are `long_hair, one_side_up, grey_eyes, headband, white_headband, white_hair, breasts, hair_between_eyes, grey_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 315 | 393.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyutsuki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 315 | 243.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyutsuki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 751 | 508.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyutsuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 315 | 355.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyutsuki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 751 | 690.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuyutsuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/fuyutsuki_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
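For the `IMG+TXT` packages listed above, each image ships with a same-named sidecar `.txt` file holding its comma-separated tags. A minimal sketch of pairing images with their tag lists after extraction — the exact directory layout, image extensions, and tag delimiter are assumptions, not guaranteed by this card:

```python
import os

# Extensions treated as images; an assumption for illustration.
IMAGE_EXTS = {'.png', '.jpg', '.jpeg', '.webp'}

def load_img_txt_pairs(dataset_dir):
    """Pair each image with the tag list from its sidecar .txt file."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if not os.path.isfile(txt_path):
            continue  # skip images without a tag file
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

Each returned pair is `(image_path, [tag, ...])`, ready to feed into a captioned-image training pipeline.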
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 45 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_gloves, clothes_writing, grey_neckerchief, serafuku, solo, white_sailor_collar, hachimaki, black_skirt, pleated_skirt, shawl, short_sleeves, microskirt, grey_thighhighs, half_gloves, cowboy_shot, white_background, simple_background, smile, closed_mouth | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_skirt, cowboy_shot, grey_neckerchief, hachimaki, microskirt, pleated_skirt, serafuku, simple_background, solo, white_background, white_sailor_collar, black_gloves, clothes_writing, shawl, twitter_username, one-hour_drawing_challenge, looking_at_viewer, machinery | | 2 | 26 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, clothes_writing, hachimaki, serafuku, solo, white_sailor_collar, grey_neckerchief, upper_body, black_gloves, closed_mouth, smile, short_sleeves, simple_background, white_background, half_gloves, looking_at_viewer | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | 
![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | clothes_writing, hachimaki, serafuku, white_bodysuit, white_neckerchief, black_sailor_collar, pleated_skirt, white_skirt, 2girls, blue_eyes, black_gloves, blush, miniskirt, smile, cowboy_shot, grey_jacket, jacket_on_shoulders, short_sleeves, solo_focus, white_necktie | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, hachimaki, hetero, large_breasts, nipples, 1boy, penis, solo_focus, blush, open_mouth, bar_censor, navel, sex, thighhighs, vaginal, black_gloves, cowgirl_position, girl_on_top, half_gloves, paizuri | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, cleavage, detached_collar, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, simple_background, solo, strapless_leotard, white_background, wrist_cuffs, alternate_costume, blush, large_breasts, black_bowtie, covered_navel, cowboy_shot, rabbit_tail | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, competition_swimsuit, cowboy_shot, looking_at_viewer, solo, dated, hachimaki, highleg_swimsuit, one-hour_drawing_challenge, simple_background, smile, twitter_username, white_background, black_one-piece_swimsuit, blue_one-piece_swimsuit, clothes_writing, large_breasts | | 7 | 11 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | underwear_only, 1girl, solo, blush, cleavage, cowboy_shot, closed_mouth, simple_background, very_long_hair, large_breasts, looking_at_viewer, medium_breasts, navel, white_background, 
white_bra, white_panties, collarbone, grey_panties, smile | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 2girls, cleavage, solo_focus, black_bikini, large_breasts, blue_sky, cloud, day, outdoors, see-through, smile, white_bikini, bikini_skirt, dated, hair_flower, medium_breasts, navel, sarong, very_long_hair, white_hairband | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | clothes_writing | grey_neckerchief | serafuku | solo | white_sailor_collar | hachimaki | black_skirt | pleated_skirt | shawl | short_sleeves | microskirt | grey_thighhighs | half_gloves | cowboy_shot | white_background | simple_background | smile | closed_mouth | twitter_username | one-hour_drawing_challenge | looking_at_viewer | machinery | upper_body | white_bodysuit | white_neckerchief | black_sailor_collar | white_skirt | 2girls | blue_eyes | blush | miniskirt | grey_jacket | jacket_on_shoulders | solo_focus | white_necktie | hetero | large_breasts | nipples | 1boy | penis | open_mouth | bar_censor | navel | sex | thighhighs | vaginal | cowgirl_position | girl_on_top | paizuri | cleavage | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | alternate_costume | black_bowtie | covered_navel | rabbit_tail | competition_swimsuit | dated | highleg_swimsuit | black_one-piece_swimsuit | blue_one-piece_swimsuit | underwear_only | very_long_hair | medium_breasts | white_bra | white_panties | collarbone | grey_panties | black_bikini | blue_sky | cloud | day | outdoors | see-through | white_bikini | bikini_skirt | hair_flower | sarong | white_hairband | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------------|:-------------------|:-----------|:-------|:----------------------|:------------|:--------------|:----------------|:--------|:----------------|:-------------|:------------------|:--------------|:--------------|:-------------------|:--------------------|:--------|:---------------|:-------------------|:-----------------------------|:--------------------|:------------|:-------------|:-----------------|:--------------------|:----------------------|:--------------|:---------|:------------|:--------|:------------|:--------------|:----------------------|:-------------|:----------------|:---------|:----------------|:----------|:-------|:--------|:-------------|:-------------|:--------|:------|:-------------|:----------|:-------------------|:--------------|:----------|:-----------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:--------------------|:---------------|:----------------|:--------------|:-----------------------|:--------|:-------------------|:---------------------------|:--------------------------|:-----------------|:-----------------|:-----------------|:------------|:----------------|:-------------|:---------------|:---------------|:-----------|:--------|:------|:-----------|:--------------|:---------------|:---------------|:--------------|:---------|:-----------------| | 0 | 45 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | 
![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | X | | | X | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 26 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | | | | X | | | X | | X | X | X | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | X | | X | | | X | | X | | X | | | | X | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | | X | | | | | | | | | | X | X | X | | | | | X | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | 
![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | | X | | X | | | | | | | | X | X | X | X | | X | X | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 7 | 11 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | | X | | | | | | | | | | X | X | X | X | X | | | X | | | | | | | | | X | | | | | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | X | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | X | | | | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/fuyutsuki_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T16:25:29+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T23:43:15+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of fuyutsuki/冬月/후유츠키 (Kantai Collection) ================================================ This is the dataset of fuyutsuki/冬月/후유츠키 (Kantai Collection), containing 315 images and their tags. The core tags of this character are 'long\_hair, one\_side\_up, grey\_eyes, headband, white\_headband, white\_hair, breasts, hair\_between\_eyes, grey\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code List of Clusters ---------------- List of tag clustering results; maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
ec7a54fad47a64133d3861ec9027e6a2dfc6d82e
# Hello!
metaltiger775/test
[ "task_categories:text-to-image", "size_categories:n<1K", "language:en", "license:mit", "region:us" ]
2023-08-22T16:34:27+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "test"}
2023-08-25T21:50:20+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-image #size_categories-n<1K #language-English #license-mit #region-us
# Hello!
[ "# Hello!" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #language-English #license-mit #region-us \n", "# Hello!" ]
[ 37, 3 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #language-English #license-mit #region-us \n# Hello!" ]
46a228e2de3a9ea3f08914f068224557c33c6a3f
# Dataset Card for "stratio-doc-q-response"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yvillamil/stratio-doc-q-response
[ "region:us" ]
2023-08-22T16:46:58+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43028, "num_examples": 3}], "download_size": 20560, "dataset_size": 43028}}
2023-08-22T16:47:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "stratio-doc-q-response" More Information needed
[ "# Dataset Card for \"stratio-doc-q-response\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"stratio-doc-q-response\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"stratio-doc-q-response\"\n\nMore Information needed" ]
376c8115e376876c2bd4f0b243fc81ed729b39b1
# Dataset Card for CUB200FD

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
JiabaoWangTS/CUB200FD
[ "task_categories:image-to-text", "size_categories:n<1K", "language:en", "license:openrail", "region:us" ]
2023-08-22T17:04:19+00:00
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-to-text"], "pretty_name": "tiny_demo"}
2023-08-24T07:01:30+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #size_categories-n<1K #language-English #license-openrail #region-us
# Dataset Card for CUB200FD ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for CUB200FD", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-image-to-text #size_categories-n<1K #language-English #license-openrail #region-us \n", "# Dataset Card for CUB200FD", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 38, 9, 24, 32, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-image-to-text #size_categories-n<1K #language-English #license-openrail #region-us \n# Dataset Card for CUB200FD## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
194a4064d17dad41adc14da76a5e135106ee54ec
# PICKLE

PICKLE is the dataset associated with the manuscript *In a PICKLE: A gold standard entity and relation corpus for the molecular plant sciences*. The code repository associated with this dataset can be found [here](https://github.com/serenalotreck/pickle-corpus-code).

## Format specification

This dataset is formatted according to the [specifications for use with the DyGIE++ architecture](https://github.com/dwadden/dygiepp/blob/master/doc/data.md).

**NOTE:** At this time, the dataset will throw a `JSONDecodeError` when used with `load_dataset` (see [#6460 on `datasets`](https://github.com/huggingface/datasets/issues/6460)). In the meantime, you can access the data by downloading the `.jsonl` files directly from the GUI and importing them into Python with the following code:

```python
import jsonlines

with jsonlines.open('train.jsonl') as reader:
    train = []
    for obj in reader:
        train.append(obj)
```

## Dataset details

There are a total of 250 documents in `all.jsonl`, split 68%/12%/20% into train/dev/test. Each document is an abstract from a scientific paper in the search results for the terms "gibberellic acid" and "jasmonic acid". There are 6,245 entity and 2,149 relation annotations across the 250 documents.
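Once loaded, each record in the DyGIE++ format carries per-sentence `sentences`, `ner`, and `relations` fields (see the DyGIE++ data specification linked above). A small sketch of tallying annotations per document — the toy record below is invented for illustration and is not taken from PICKLE:

```python
def count_annotations(doc):
    """Count entity and relation annotations in one DyGIE++-formatted document."""
    # ner/relations are lists of per-sentence annotation lists.
    n_entities = sum(len(sent_ner) for sent_ner in doc.get('ner', []))
    n_relations = sum(len(sent_rel) for sent_rel in doc.get('relations', []))
    return n_entities, n_relations

# Toy document in DyGIE++ format; spans are token indices into the document.
toy_doc = {
    'doc_key': 'example-0',
    'sentences': [['Gibberellic', 'acid', 'promotes', 'growth', '.']],
    'ner': [[[0, 1, 'Hormone']]],
    'relations': [[[0, 1, 3, 3, 'activates']]],
}

print(count_annotations(toy_doc))  # (1, 1)
```

Summing this over all 250 documents should reproduce the corpus-level counts stated above (6,245 entities and 2,149 relations).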
slotreck/pickle
[ "language:en", "license:cc-by-4.0", "biology", "plant science", "named entity recognition", "relation extraction", "region:us" ]
2023-08-22T17:10:12+00:00
{"language": ["en"], "license": "cc-by-4.0", "pretty_name": "PICKLE", "tags": ["biology", "plant science", "named entity recognition", "relation extraction"]}
2023-12-04T15:33:49+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-4.0 #biology #plant science #named entity recognition #relation extraction #region-us
# PICKLE PICKLE is the dataset associated with the manuscript *In a PICKLE: A gold standard entity and relation corpus for the molecular plant sciences*. The code repository associated with this dataset can be found here. ## Format specification This dataset is formatted according to the specifications for use with the DyGIE++ architecture. NOTE: At this time, the dataset will throw a 'JSONDecodeError' when used with 'load_datasets' (see #6460 on 'datasets'). In the meantime, you can access the data by downloading the '.jsonl' files directly from the GUI, and importing to Python with the following code: ## Dataset details There are a total of 250 documents in 'URL', split up into 68%/12%/20% train/dev/test. Each document is an abstract from a scientific paper in the search results for the terms "gibberellic acid" and "jasmonic acid". There are 6,245 entity and 2,149 relation annotations across the 250 documents.
[ "# PICKLE\nPICKLE is the dataset associated with the manuscript *In a PICKLE: A gold standard entity and relation corpus for the molecular plant sciences*. The code repository associated with this dataset can be found here.", "## Format specification\nThis dataset is formatted according to the specifications for use with the DyGIE++ architecture.\nNOTE: At this time, the dataset will throw a 'JSONDecodeError' when used with 'load_datasets' (see #6460 on 'datasets'). In the meantime, you can access the data by downloading the '.jsonl' files directly from the GUI, and importing to Python with the following code:", "## Dataset details\nThere are a total of 250 documents in 'URL', split up into 68%/12%/20% train/dev/test. Each document is an abstract from a scientific paper in the search results for the terms \"gibberellic acid\" and \"jasmonic acid\". There are 6,245 entity and 2,149 relation annotations across the 250 documents." ]
[ "TAGS\n#language-English #license-cc-by-4.0 #biology #plant science #named entity recognition #relation extraction #region-us \n", "# PICKLE\nPICKLE is the dataset associated with the manuscript *In a PICKLE: A gold standard entity and relation corpus for the molecular plant sciences*. The code repository associated with this dataset can be found here.", "## Format specification\nThis dataset is formatted according to the specifications for use with the DyGIE++ architecture.\nNOTE: At this time, the dataset will throw a 'JSONDecodeError' when used with 'load_datasets' (see #6460 on 'datasets'). In the meantime, you can access the data by downloading the '.jsonl' files directly from the GUI, and importing to Python with the following code:", "## Dataset details\nThere are a total of 250 documents in 'URL', split up into 68%/12%/20% train/dev/test. Each document is an abstract from a scientific paper in the search results for the terms \"gibberellic acid\" and \"jasmonic acid\". There are 6,245 entity and 2,149 relation annotations across the 250 documents." ]
[ 36, 54, 104, 82 ]
[ "passage: TAGS\n#language-English #license-cc-by-4.0 #biology #plant science #named entity recognition #relation extraction #region-us \n# PICKLE\nPICKLE is the dataset associated with the manuscript *In a PICKLE: A gold standard entity and relation corpus for the molecular plant sciences*. The code repository associated with this dataset can be found here.## Format specification\nThis dataset is formatted according to the specifications for use with the DyGIE++ architecture.\nNOTE: At this time, the dataset will throw a 'JSONDecodeError' when used with 'load_datasets' (see #6460 on 'datasets'). In the meantime, you can access the data by downloading the '.jsonl' files directly from the GUI, and importing to Python with the following code:## Dataset details\nThere are a total of 250 documents in 'URL', split up into 68%/12%/20% train/dev/test. Each document is an abstract from a scientific paper in the search results for the terms \"gibberellic acid\" and \"jasmonic acid\". There are 6,245 entity and 2,149 relation annotations across the 250 documents." ]
e41875f52df0b97f99b07973b643d41adb40ae4a
# Dataset of kinu/鬼怒/鬼怒 (Kantai Collection)

This is the dataset of kinu/鬼怒/鬼怒 (Kantai Collection), containing 323 images and their tags. The core tags of this character are `short_hair, red_hair, breasts, red_eyes, orange_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 323 | 222.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinu_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 323 | 163.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinu_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 632 | 308.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinu_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 323 | 211.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinu_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 632 | 385.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinu_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kinu_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | hoodie, looking_at_viewer, 1girl, alternate_costume, jacket, open_mouth, simple_background, smile, solo, black_pants, hair_between_eyes, hair_intakes, white_background, hands_in_pockets, long_sleeves, blush, hooded_sweater, vest, boots, brown_footwear, full_body, white_sweater |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, hooded_sweater, hoodie, simple_background, solo, upper_body, white_background, one-hour_drawing_challenge, white_sweater, official_alternate_costume, black_jacket, hooded_jacket, smile |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, open_mouth, pleated_skirt, serafuku, solo, machinery, boots, turret, white_background |
| 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, serafuku, short_sleeves, solo, white_background, sailor_collar, simple_background, smile, blush, pleated_skirt, hair_between_eyes, sitting, buttons, upper_body |
| 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_gloves, black_jacket, grey_sailor_collar, grey_skirt, looking_at_viewer, neck_ribbon, partially_fingerless_gloves, red_ribbon, serafuku, short_sleeves, simple_background, solo, white_background, pleated_skirt, grin, upper_body, one-hour_drawing_challenge, one_eye_closed, twitter_username |
| 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, bike_shorts, black_gloves, black_jacket, grey_sailor_collar, grey_skirt, neck_ribbon, pleated_skirt, red_ribbon, serafuku, short_sleeves, solo, partially_fingerless_gloves, shorts_under_skirt, simple_background, white_background, cowboy_shot, looking_at_viewer, smile, hair_between_eyes, open_mouth, 
twitter_username | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, black_gloves, black_jacket, grey_sailor_collar, neck_ribbon, red_ribbon, serafuku, short_sleeves, simple_background, solo, upper_body, white_background, partially_fingerless_gloves, hair_between_eyes, looking_at_viewer, smile, twitter_username | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, cleavage, solo, cowboy_shot, looking_at_viewer, medium_breasts, navel, open_mouth, pink_hair, black_bikini, large_breasts, side-tie_bikini_bottom, smile, simple_background | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | detached_collar, playboy_bunny, rabbit_ears, wrist_cuffs, black_leotard, fake_animal_ears, looking_at_viewer, medium_breasts, strapless_leotard, open_mouth, rabbit_tail, 1girl, alternate_costume, solo, brown_pantyhose, cleavage, simple_background, white_background, full_body, hair_between_eyes, multiple_girls, red_bowtie, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | hoodie | looking_at_viewer | 1girl | alternate_costume | jacket | open_mouth | simple_background | smile | solo | black_pants | hair_between_eyes | hair_intakes | white_background | hands_in_pockets | long_sleeves | blush | hooded_sweater | vest | boots | brown_footwear | full_body | white_sweater | upper_body | one-hour_drawing_challenge | official_alternate_costume | black_jacket | hooded_jacket | pleated_skirt | serafuku | machinery | turret | short_sleeves | sailor_collar | sitting | buttons | black_gloves | grey_sailor_collar | grey_skirt | neck_ribbon | partially_fingerless_gloves | 
red_ribbon | grin | one_eye_closed | twitter_username | bike_shorts | shorts_under_skirt | cowboy_shot | cleavage | medium_breasts | navel | pink_hair | black_bikini | large_breasts | side-tie_bikini_bottom | detached_collar | playboy_bunny | rabbit_ears | wrist_cuffs | black_leotard | fake_animal_ears | strapless_leotard | rabbit_tail | brown_pantyhose | multiple_girls | red_bowtie | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------|:--------------------|:--------|:--------------------|:---------|:-------------|:--------------------|:--------|:-------|:--------------|:--------------------|:---------------|:-------------------|:-------------------|:---------------|:--------|:-----------------|:-------|:--------|:-----------------|:------------|:----------------|:-------------|:-----------------------------|:-----------------------------|:---------------|:----------------|:----------------|:-----------|:------------|:---------|:----------------|:----------------|:----------|:----------|:---------------|:---------------------|:-------------|:--------------|:------------------------------|:-------------|:-------|:-----------------|:-------------------|:--------------|:---------------------|:--------------|:-----------|:-----------------|:--------|:------------|:---------------|:----------------|:-------------------------|:------------------|:----------------|:--------------|:--------------|:----------------|:-------------------|:--------------------|:--------------|:------------------|:-----------------|:-------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | | | X | X | X | | | | X | | | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | X | | | X | | | X | | | | X | | | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | X | | | | X | X | X | | X | | X | | | X | | | | | | | X | | | | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | X | X | | | | X | | X | | | | X | | | | | | | | | | X | X | | X | | X | X | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | X | X | | | X | X | X | X | | X | | X | | | | | | | | | | | | | X | | X | X | | | X | | | | X | X | X | X | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | X | X | | | | X | X | X | | X | | X | | | | | | | | | | X | 
| | X | | | X | | | X | | | | X | X | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | | X | X | X | | X | X | X | X | | X | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/kinu_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T17:11:39+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T20:10:10+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kinu/้ฌผๆ€’/้ฌผๆ€’ (Kantai Collection) ========================================= This is the dataset of kinu/้ฌผๆ€’/้ฌผๆ€’ (Kantai Collection), containing 323 images and their tags. The core tags of this character are 'short\_hair, red\_hair, breasts, red\_eyes, orange\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e2ade17b3aedba4356d8a2c9950a58429718a8bc
# Summary `aya-telugu-paraphrase` is an open source dataset of instruct-style records generated from the Telugu split of the [ai4bharat/IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase/viewer/te/test) dataset. This was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Telugu Version: 1.0 # Dataset Overview `aya-telugu-paraphrase` is a corpus of more than 1.5k records generated by converting the Telugu split of the [ai4bharat/IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase/viewer/te/test) dataset into instruct-style format. This dataset can be used for the following task: - Given a sentence, generate a sentence with similar meaning. # Intended Uses While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completion pairs could be submitted as few-shot examples to a large open language model to generate sentences and corresponding paraphrased sentences. # Dataset ## Load with Datasets To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset('SuryaKrishna02/aya-telugu-paraphrase') ``` ## Purpose of Collection Telugu is a low-resource language for which, to the best of my knowledge, no paraphrase-generation instruct-style dataset exists. 
This was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications. ## Sources - **[ai4bharat/IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase/viewer/te/test)**: Converted this dataset into instruct-style prompts and completions. ## Data Fields - `inputs` : Prompt or input to the language model. - `targets` : Completion or output of the language model. - `template_id` : Id of the template used in `inputs` and `targets`. - `template_lang`: ISO code of the language used in `inputs` and `targets`, where *tel* refers to Telugu. ## Templates For the creation of instruct-style prompts and completions from the original dataset, one template category with six different variations was used: 1. Given a sentence, generate a sentence with similar meaning. 
| template_id | inputs | targets | |-------------|--------|---------| | 1 | ```เฐˆ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐฎเฐฐเฑ‹เฐฐเฑ€เฐคเฐฟเฐฒเฑ‹ เฐฐเฐพเฐฏเฐฟ:\n{{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | | 2 | ```เฐˆ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐฎเฐฐเฑ‹เฐฐเฑ€เฐคเฐฟเฐฒเฑ‹ เฐฐเฐพเฐฏเฐฟ: {{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | | 3 | ```เฐˆ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐ‡เฐ‚เฐ•เฑŠเฐฒเฐพเฐ—เฐพ เฐฐเฐพเฐฏเฐฟ:\n{{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | | 4 | ```เฐˆ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐ‡เฐ‚เฐ•เฑŠเฐฒเฐพเฐ—เฐพ เฐฐเฐพเฐฏเฐฟ: {{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | | 5 | ```เฐˆ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐฎเฐฐเฑ‹เฐฐเฐ•เฐ‚เฐ—เฐพ เฐฐเฐพเฐฏเฐฟ:\n{{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | | 6 | ```เฐˆ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐฎเฐฐเฑ‹เฐฐเฐ•เฐ‚เฐ—เฐพ เฐฐเฐพเฐฏเฐฟ: {{Original Sentence}}``` | ```{{Paraphrased Sentence}}``` | ## Personal or Sensitive Data This dataset contains public information. To our knowledge, it contains no personal identifiers of private individuals and no sensitive information. ## Language Telugu # Known Limitations - The dataset is converted from an existing dataset, so its contents may reflect that dataset's biases, factual errors, and sensitive material. - Although utmost care was taken to keep the dataset monolingual, some records may contain English alongside Telugu. # Contributors [SuryaKrishna02](https://github.com/SuryaKrishna02) and [Desik98](https://github.com/desik1998)
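To make the template mechanics concrete, here is a minimal sketch of rendering template 1 into one record in this dataset's schema. The template string is copied from the table above; the helper function and its argument names are illustrative, not part of the dataset's actual build scripts:

```python
# Template 1 from the table above: prompt prefix plus the original sentence.
TEMPLATE_1 = "เฐˆ เฐ•เฑเฐฐเฐฟเฐ‚เฐฆเฐฟ เฐตเฐพเฐ•เฑเฐฏเฐ‚ เฐฎเฐฐเฑ‹เฐฐเฑ€เฐคเฐฟเฐฒเฑ‹ เฐฐเฐพเฐฏเฐฟ:\n{original}"

def make_record(original, paraphrase):
    """Build one instruct-style row matching the dataset's schema (hypothetical helper)."""
    return {
        "inputs": TEMPLATE_1.format(original=original),
        "targets": paraphrase,
        "template_id": 1,
        "template_lang": "tel",
    }
```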
SuryaKrishna02/aya-telugu-paraphrase
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|ai4bharat/IndicXParaphrase", "language:te", "license:apache-2.0", "paraphrase", "region:us" ]
2023-08-22T17:23:29+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["te"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["extended|ai4bharat/IndicXParaphrase"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Telugu Paraphrase", "tags": ["paraphrase"]}
2024-01-23T13:09:52+00:00
[]
[ "te" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|ai4bharat/IndicXParaphrase #language-Telugu #license-apache-2.0 #paraphrase #region-us
Summary ======= 'aya-telugu-paraphrase' is an open source dataset of instruct-style records generated from the Telugu split of the ai4bharat/IndicXParaphrase dataset. This was created as part of the Aya Open Science Initiative from Cohere For AI. This dataset can be used for any purpose, whether academic or commercial, under the terms of the Apache 2.0 License. Supported Tasks: * Training LLMs * Synthetic Data Generation * Data Augmentation Languages: Telugu Version: 1.0 Dataset Overview ================ 'aya-telugu-paraphrase' is a corpus of more than 1.5k records generated by converting the Telugu split of the ai4bharat/IndicXParaphrase dataset into instruct-style format. This dataset can be used for the following task: * Given a sentence, generate a sentence with similar meaning. Intended Uses ============= While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completion pairs could be submitted as few-shot examples to a large open language model to generate sentences and corresponding paraphrased sentences. Dataset ======= Load with Datasets ------------------ To load this dataset with Datasets, you'll just need to install Datasets as 'pip install datasets --upgrade' and then use the following code: Purpose of Collection --------------------- Telugu is a low-resource language for which, to the best of my knowledge, no paraphrase-generation instruct-style dataset exists. This was created as part of the Aya Open Science Initiative from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications. Sources ------- * ai4bharat/IndicXParaphrase: Converted this dataset into instruct-style prompts and completions. 
Data Fields ----------- * 'inputs' : Prompt or input to the language model. * 'targets' : Completion or output of the language model. * 'template\_id' : Id of the template used in 'inputs' and 'targets'. * 'template\_lang': ISO code of the language used in 'inputs' and 'targets', where *tel* refers to Telugu. Templates --------- For the creation of instruct-style prompts and completions from the original dataset, one template category with six different variations was used: 1. Given a sentence, generate a sentence with similar meaning. template\_id: 1, inputs: , targets: template\_id: 2, inputs: , targets: template\_id: 3, inputs: , targets: template\_id: 4, inputs: , targets: template\_id: 5, inputs: , targets: template\_id: 6, inputs: , targets: Personal or Sensitive Data -------------------------- This dataset contains public information. To our knowledge, it contains no personal identifiers of private individuals and no sensitive information. Language -------- Telugu Known Limitations ================= * The dataset is converted from an existing dataset, so its contents may reflect that dataset's biases, factual errors, and sensitive material. * Although utmost care was taken to keep the dataset monolingual, some records may contain English alongside Telugu. Contributors ============ SuryaKrishna02 and Desik98
[]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|ai4bharat/IndicXParaphrase #language-Telugu #license-apache-2.0 #paraphrase #region-us \n" ]
[ 107 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|ai4bharat/IndicXParaphrase #language-Telugu #license-apache-2.0 #paraphrase #region-us \n" ]
76f3a33f5489b6fff3ce3e4773aa046c7ea5a40b
# Summary `aya-telugu-jokes` is an open source dataset of instruct-style records generated by webscraping a Telugu jokes website. This was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Telugu Version: 1.0 # Dataset Overview `aya-telugu-jokes` is a corpus of more than 900 records generated by webscraping the Telugu jokes website. This dataset can be used for the following task: - Given the title of a funny conversation, generate a funny conversation based on the title. # Intended Uses While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completion pairs could be submitted as few-shot examples to a large open language model to generate additional funny conversations and their titles. # Dataset ## Load with Datasets To load this dataset with Datasets, you'll just need to install Datasets as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset('SuryaKrishna02/aya-telugu-jokes') ``` ## Purpose of Collection Telugu is a low-resource language for which, to the best of my knowledge, no funny-conversation-generation instruct-style dataset exists. This was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications. 
## Sources - **Andhrajyothi Website**: Performed webscraping of the [Andhrajyothi Website](https://lit.andhrajyothy.com/jokes/), a website consisting of funny conversations. Next, performed some pre-processing of the data, such as removing unwanted characters from the scraped text. Finally, converted the scraped data into instruct-style prompts and completions. ## Data Fields - `inputs` : Prompt or input to the language model. - `targets` : Completion or output of the language model. - `template_id` : Id of the template used in `inputs` and `targets`. - `template_lang`: ISO code of the language used in `inputs` and `targets`, where *tel* refers to Telugu. ## Templates For the creation of instruct-style prompts and completions from the scraped data, one template category with 14 different variations was used: 1. Given the title of a funny conversation, generate a funny conversation based on the title. | template_id | inputs | targets | |-------------|--------|---------| | 1 | ```{{Title}} เฐ…เฐจเฑ‡ เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• เฐคเฑ‹ เฐœเฑ‹เฐ•เฑ เฐ‡เฐตเฑเฐตเฑ``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 2 | ```{{Title}} เฐ…เฐจเฑ‡ เฐŸเฑˆเฐŸเฐฟเฐฒเฑ เฐคเฑ‹ เฐœเฑ‹เฐ•เฑ เฐ‡เฐตเฑเฐตเฑ``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 3 | ```เฐ’เฐ• เฐนเฐพเฐธเฑเฐฏ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 4 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐนเฐพเฐธเฑเฐฏ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 5 | ```เฐ’เฐ• เฐšเฐฎเฐคเฑเฐ•เฐพเฐฐเฐฎเฐฏเฐฟเฐจ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• 
เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 6 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐšเฐฎเฐคเฑเฐ•เฐพเฐฐเฐฎเฐฏเฐฟเฐจ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 7 | ```เฐ’เฐ• เฐคเฐฎเฐพเฐทเฐพ เฐ…เฐฏเฐฟเฐจเฐพ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 8 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐคเฐฎเฐพเฐทเฐพ เฐ…เฐฏเฐฟเฐจเฐพ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ• {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 9 | ```เฐ’เฐ• เฐนเฐพเฐธเฑเฐฏ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 10 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐนเฐพเฐธเฑเฐฏ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 11 | ```เฐ’เฐ• เฐšเฐฎเฐคเฑเฐ•เฐพเฐฐเฐฎเฐฏเฐฟเฐจ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 12 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐšเฐฎเฐคเฑเฐ•เฐพเฐฐเฐฎเฐฏเฐฟเฐจ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• 
เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 13 | ```เฐ’เฐ• เฐคเฐฎเฐพเฐทเฐพ เฐ…เฐฏเฐฟเฐจเฐพ เฐธเฐ‚เฐญเฐพเฐทเฐฃ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | | 14 | ```เฐ’เฐ• เฐšเฐฟเฐจเฑเฐจ เฐคเฐฎเฐพเฐทเฐพ เฐ…เฐฏเฐฟเฐจเฐพ เฐธเฐจเฑเฐจเฐฟเฐตเฑ‡เฐถเฐ‚ เฐ‡เฐตเฑเฐตเฑ เฐฎเฐฐเฐฟเฐฏเฑ เฐฆเฐพเฐจเฐฟ เฐฏเฑŠเฐ•เฑเฐ• เฐŸเฑˆเฐŸเฐฟเฐฒเฑ {{Title}} เฐ‰เฐ‚เฐกเฑ‡ เฐฒเฐพเฐ—เฐพ เฐ‡เฐตเฑเฐตเฑ.``` | ```เฐถเฑ€เฐฐเฑเฐทเฐฟเฐ•: {{Title}}\n\n{{Funny Conversation}}``` | ## Personal or Sensitive Data This dataset contains public information. To our knowledge, there are no private personโ€™s personal identifiers or sensitive information. ## Language Telugu # Known Limitations - The Dataset is scraped from the Jokes Website and the contents of this dataset may reflect the bias, factual errors, inappropriate and sensitive matters. - Although there is utmost care taken to keep the dataset as monolingual, there might be some records that may contain English Language along with Telugu. # Contributors [SuryaKrishna02](https://github.com/SuryaKrishna02) and [Desik98](https://github.com/desik1998)
SuryaKrishna02/aya-telugu-jokes
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:te", "license:apache-2.0", "jokes", "humor", "fun conversations", "region:us" ]
2023-08-22T17:26:27+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["te"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Telugu Jokes", "tags": ["jokes", "humor", "fun conversations"]}
2024-01-23T13:10:51+00:00
[]
[ "te" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Telugu #license-apache-2.0 #jokes #humor #fun conversations #region-us
Summary ======= 'aya-telugu-jokes' is an open source dataset of instruct-style records generated by webscraping a Telugu jokes website. This was created as part of the Aya Open Science Initiative from Cohere For AI. This dataset can be used for any purpose, whether academic or commercial, under the terms of the Apache 2.0 License. Supported Tasks: * Training LLMs * Synthetic Data Generation * Data Augmentation Languages: Telugu Version: 1.0 Dataset Overview ================ 'aya-telugu-jokes' is a corpus of more than 900 records generated by webscraping the Telugu jokes website. This dataset can be used for the following task: * Given the title of a funny conversation, generate a funny conversation based on the title. Intended Uses ============= While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completion pairs could be submitted as few-shot examples to a large open language model to generate additional funny conversations and their titles. Dataset ======= Load with Datasets ------------------ To load this dataset with Datasets, you'll just need to install Datasets as 'pip install datasets --upgrade' and then use the following code: Purpose of Collection --------------------- Telugu is a low-resource language for which, to the best of my knowledge, no funny-conversation-generation instruct-style dataset exists. This was created as part of the Aya Open Science Initiative from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications. Sources ------- * Andhrajyothi Website: Performed webscraping of the Andhrajyothi Website, a website consisting of funny conversations. 
Next, some pre-processing was performed on the data, such as removing unwanted characters from the scraped text. Finally, the scraped data was converted into instruct-style prompts and completions. Data Fields ----------- * 'inputs' : Prompt or input to the language model. * 'targets' : Completion or output of the language model. * 'template\_id' : Id of the template used in 'inputs' and 'targets'. * 'template\_lang': ISO code of the language used in the 'inputs' and 'targets', where *tel* refers to Telugu. Templates --------- For the creation of instruct-style prompts and completions from the scraped data, the following template category with 14 different variations was used: 1. Given the title of a funny conversation, generate a funny conversation based on the title. template\_id: 1, inputs: , targets: template\_id: 2, inputs: , targets: template\_id: 3, inputs: , targets: template\_id: 4, inputs: , targets: template\_id: 5, inputs: , targets: template\_id: 6, inputs: , targets: template\_id: 7, inputs: , targets: template\_id: 8, inputs: , targets: template\_id: 9, inputs: , targets: template\_id: 10, inputs: , targets: template\_id: 11, inputs: , targets: template\_id: 12, inputs: , targets: template\_id: 13, inputs: , targets: template\_id: 14, inputs: , targets: Personal or Sensitive Data -------------------------- This dataset contains public information. To our knowledge, there are no private person's personal identifiers or sensitive information. Language -------- Telugu Known Limitations ================= * The dataset is scraped from a jokes website, and its contents may reflect bias, factual errors, and inappropriate or sensitive matter. * Although utmost care was taken to keep the dataset monolingual, some records may contain English alongside Telugu. Contributors ============ SuryaKrishna02 and Desik98
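The loading code promised in the "Load with Datasets" section above is not reproduced in this dump; a minimal sketch, assuming the corpus is published on the Hub under the id `SuryaKrishna02/aya-telugu-jokes` (an assumption inferred from the contributor name — check the actual repo id before use):

```python
REPO_ID = "SuryaKrishna02/aya-telugu-jokes"  # assumed Hub repo id, not confirmed by this card

def load_jokes(split: str = "train"):
    """Return the instruct-style jokes corpus from the Hugging Face Hub.

    Records carry the fields 'inputs', 'targets', 'template_id' and
    'template_lang' described in the Data Fields section.
    """
    from datasets import load_dataset  # pip install datasets --upgrade
    return load_dataset(REPO_ID, split=split)
```

Each record's 'inputs' field holds the prompt and 'targets' the completion, so the pairs can be fed directly into an instruction fine-tuning pipeline.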
[]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Telugu #license-apache-2.0 #jokes #humor #fun conversations #region-us \n" ]
[ 99 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Telugu #license-apache-2.0 #jokes #humor #fun conversations #region-us \n" ]
3e5effe749aca126562e38c5f92917c640d53bbc
# Dataset of kuroshio/้ป’ๆฝฎ/้ป’ๆฝฎ (Kantai Collection) This is the dataset of kuroshio/้ป’ๆฝฎ/้ป’ๆฝฎ (Kantai Collection), containing 500 images and their tags. The core tags of this character are `black_hair, hair_ornament, hairclip, short_hair, green_eyes, ribbon, neck_ribbon, blue_ribbon, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 368.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 265.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1100 | 535.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 344.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 1100 | 664.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kuroshio_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, school_uniform, short_sleeves, upper_body, white_shirt, smile, black_vest, looking_at_viewer, simple_background, white_gloves, white_background, open_mouth, blush | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bike_shorts, black_shorts, black_vest, pleated_skirt, school_uniform, short_sleeves, shorts_under_skirt, solo, white_gloves, white_shirt, black_skirt, looking_at_viewer, smile, cowboy_shot, blush, simple_background, white_background, grey_skirt, open_mouth | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, bike_shorts, looking_at_viewer, school_uniform, shirt, solo, vest, white_gloves, pleated_skirt, short_sleeves, yellow_eyes, blush, open_mouth | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, cowboy_shot, looking_at_viewer, solo, black_one-piece_swimsuit, flat_chest, artist_name, 
one-hour_drawing_challenge, smile, character_name, competition_swimsuit, lying, school_swimsuit | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, alternate_costume, looking_at_viewer, open_mouth, smile, solo, floral_print, obi, one-hour_drawing_challenge, twitter_username, upper_body, yukata, blush, holding_food, purple_kimono, simple_background, takoyaki, white_background, wide_sleeves | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, looking_at_viewer, solo, alternate_costume, blue_one-piece_swimsuit, collarbone, dated, simple_background, sitting, blush, competition_school_swimsuit, signature, white_background | | 6 | 23 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | rabbit_ears, 1girl, fake_animal_ears, playboy_bunny, solo, detached_collar, black_leotard, black_pantyhose, blush, looking_at_viewer, wrist_cuffs, bowtie, medium_breasts, smile, cleavage, cowboy_shot, rabbit_tail, simple_background, open_mouth, strapless_leotard, yellow_eyes, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | school_uniform | short_sleeves | upper_body | white_shirt | smile | black_vest | looking_at_viewer | simple_background | white_gloves | white_background | open_mouth | blush | bike_shorts | black_shorts | pleated_skirt | shorts_under_skirt | black_skirt | cowboy_shot | grey_skirt | shirt | vest | yellow_eyes | black_one-piece_swimsuit | flat_chest | artist_name | one-hour_drawing_challenge | character_name | competition_swimsuit | lying | school_swimsuit | alternate_costume | floral_print | obi | twitter_username | yukata | 
holding_food | purple_kimono | takoyaki | wide_sleeves | blue_one-piece_swimsuit | collarbone | dated | sitting | competition_school_swimsuit | signature | rabbit_ears | fake_animal_ears | playboy_bunny | detached_collar | black_leotard | black_pantyhose | wrist_cuffs | bowtie | medium_breasts | cleavage | rabbit_tail | strapless_leotard | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:----------------|:-------------|:--------------|:--------|:-------------|:--------------------|:--------------------|:---------------|:-------------------|:-------------|:--------|:--------------|:---------------|:----------------|:---------------------|:--------------|:--------------|:-------------|:--------|:-------|:--------------|:---------------------------|:-------------|:--------------|:-----------------------------|:-----------------|:-----------------------|:--------|:------------------|:--------------------|:---------------|:------|:-------------------|:---------|:---------------|:----------------|:-----------|:---------------|:--------------------------|:-------------|:--------|:----------|:------------------------------|:------------|:--------------|:-------------------|:----------------|:------------------|:----------------|:------------------|:--------------|:---------|:-----------------|:-----------|:--------------|:--------------------| | 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | | | | | X | | X | | X | X | X | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | X | | X | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | X | | X | | X | X | | X | X | X | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | | | | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | 6 | 23 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | | | X | | X | X | | X | X | X | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
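The IMG+TXT packages listed above pair every image with a same-named `.txt` file holding its comma-separated tags. A minimal stdlib-only sketch for walking an extracted package (the set of image extensions is an assumption):

```python
from pathlib import Path

def iter_img_txt(dataset_dir):
    """Yield (image_path, tags) pairs from an extracted IMG+TXT package.

    Assumes each image ships with a sibling .txt file of the same stem
    containing its comma-separated tag string.
    """
    for txt in sorted(Path(dataset_dir).glob("*.txt")):
        for ext in (".png", ".jpg", ".jpeg", ".webp"):  # assumed extensions
            img = txt.with_suffix(ext)
            if img.exists():
                yield img, txt.read_text(encoding="utf-8").strip()
                break
```

This mirrors the tag access that the waifuc `LocalSource` loop above provides for the raw package, but without any third-party dependency.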
CyberHarem/kuroshio_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T17:29:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T10:27:18+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kuroshio/้ป’ๆฝฎ/้ป’ๆฝฎ (Kantai Collection) ============================================= This is the dataset of kuroshio/้ป’ๆฝฎ/้ป’ๆฝฎ (Kantai Collection), containing 500 images and their tags. The core tags of this character are 'black\_hair, hair\_ornament, hairclip, short\_hair, green\_eyes, ribbon, neck\_ribbon, blue\_ribbon, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e1ef03e7cb07015559c2ec7e0fd30bc89cb07a7c
# Dataset Card for "eu_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
KatMarie/eu_test
[ "region:us" ]
2023-08-22T17:37:06+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53466625, "num_examples": 496313}], "download_size": 31031837, "dataset_size": 53466625}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-29T14:28:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "eu_test" More Information needed
[ "# Dataset Card for \"eu_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"eu_test\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"eu_test\"\n\nMore Information needed" ]
c41d0cd8ebd0d0af16d12636a3a948af55ec853d
# Dataset of hamanami (Kantai Collection) This is the dataset of hamanami (Kantai Collection), containing 276 images and their tags. The core tags of this character are `long_hair, grey_hair, braid, single_braid, ribbon, hair_ribbon, ahoge, hair_over_eyes, brown_eyes, black_ribbon, bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 276 | 242.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hamanami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 276 | 160.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hamanami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 612 | 338.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hamanami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 276 | 222.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hamanami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 612 | 449.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hamanami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/hamanami_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bowtie, grey_pantyhose, long_sleeves, looking_at_viewer, pleated_dress, purple_dress, school_uniform, simple_background, solo, white_background, white_shirt, cowboy_shot, seamed_legwear, smile, blush | | 1 | 37 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bowtie, long_sleeves, school_uniform, solo, white_shirt, purple_dress, looking_at_viewer, upper_body, white_background, simple_background, hair_over_one_eye, open_mouth, blush | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, grey_pantyhose, long_sleeves, pleated_dress, purple_dress, school_uniform, solo, white_shirt, full_body, lace-up_boots, open_mouth, seamed_legwear, bowtie, white_background, bangs, chibi, standing, blue_bow, blush_stickers, character_name, collared_shirt | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blue_dress, solo, bag, long_sleeves, official_alternate_costume, white_shirt, cowboy_shot, hair_over_one_eye, looking_at_viewer, blush | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_dress, halloween_costume, solo, blush, ghost_costume, long_sleeves, official_alternate_costume, black_footwear, full_body, high_heels, open_mouth, orange_eyes | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, full_body, simple_background, solo, white_background, white_shirt, white_socks, alternate_costume, blue_dress, long_sleeves, shoes, blush, looking_at_viewer, open_mouth, smile | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, 
detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, bowtie, open_mouth, rabbit_tail, simple_background, wrist_cuffs, blush, purple_leotard, white_background, adapted_costume, breasts, covered_navel, grey_pantyhose, looking_at_viewer, seamed_legwear | | 7 | 10 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | long_sleeves, reindeer_antlers, 1girl, blush, red_skirt, reindeer_costume, simple_background, solo, white_background, pleated_skirt, open_mouth, fur_trim, sack, animal_hood, kneehighs, looking_at_viewer | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, cowboy_shot, looking_at_viewer, solo, purple_panties, blush, purple_bra, simple_background, small_breasts, underwear_only, blue_panties, camisole, collarbone, hair_over_one_eye, white_background | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1boy, 1girl, blush, hetero, nipples, solo_focus, sweat, bangs, cum_in_pussy, open_mouth, penis, small_breasts, vaginal, happy_sex, looking_at_viewer, medium_breasts, missionary, on_back, overflow, spread_legs, bar_censor, blue_bra, blue_panties, breasts_out, collarbone, completely_nude, hair_over_one_eye, heart, mosaic_censoring, navel, on_bed, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bowtie | grey_pantyhose | long_sleeves | looking_at_viewer | pleated_dress | purple_dress | school_uniform | simple_background | solo | white_background | white_shirt | cowboy_shot | seamed_legwear | smile | blush | upper_body | hair_over_one_eye | open_mouth | full_body | lace-up_boots | bangs | chibi | standing | blue_bow 
| blush_stickers | character_name | collared_shirt | blue_dress | bag | official_alternate_costume | black_dress | halloween_costume | ghost_costume | black_footwear | high_heels | orange_eyes | white_socks | alternate_costume | shoes | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | rabbit_tail | wrist_cuffs | purple_leotard | adapted_costume | breasts | covered_navel | reindeer_antlers | red_skirt | reindeer_costume | pleated_skirt | fur_trim | sack | animal_hood | kneehighs | purple_panties | purple_bra | small_breasts | underwear_only | blue_panties | camisole | collarbone | 1boy | hetero | nipples | solo_focus | sweat | cum_in_pussy | penis | vaginal | happy_sex | medium_breasts | missionary | on_back | overflow | spread_legs | bar_censor | blue_bra | breasts_out | completely_nude | heart | mosaic_censoring | navel | on_bed | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-----------------|:---------------|:--------------------|:----------------|:---------------|:-----------------|:--------------------|:-------|:-------------------|:--------------|:--------------|:-----------------|:--------|:--------|:-------------|:--------------------|:-------------|:------------|:----------------|:--------|:--------|:-----------|:-----------|:-----------------|:-----------------|:-----------------|:-------------|:------|:-----------------------------|:--------------|:--------------------|:----------------|:-----------------|:-------------|:--------------|:--------------|:--------------------|:--------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:--------------|:-----------------|:------------------|:----------|:----------------|:-------------------|:------------|:-------------------|:----------------|:-----
------|:-------|:--------------|:------------|:-----------------|:-------------|:----------------|:-----------------|:---------------|:-----------|:-------------|:-------|:---------|:----------|:-------------|:--------|:---------------|:--------|:----------|:------------|:-----------------|:-------------|:----------|:-----------|:--------------|:-------------|:-----------|:--------------|:------------------|:--------|:-------------------|:--------|:---------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 37 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | | X | X | X | | X | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | | | | | X | | X | X | | | X | | X | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | | | | | | X | | | | | | X | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | X | | | | X | X | X | X | | | X | X | | | X | X | | | | | | | | | X | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | X | | | | X | X | X | | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 10 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | X | X | | | | X | X | X | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | X | | | | X | X | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 9 | 5 | ![](samples/9/clu9-sample0.png) | 
![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | | | X | | | | | | | | | | X | X | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/hamanami_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T17:58:35+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T07:43:42+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hamanami (Kantai Collection) ======================================= This is the dataset of hamanami (Kantai Collection), containing 276 images and their tags. The core tags of this character are 'long\_hair, grey\_hair, braid, single\_braid, ribbon, hair\_ribbon, ahoge, hair\_over\_eyes, brown\_eyes, black\_ribbon, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d85fc5e4f099eaec6e30d2b124d55ab42cad60dc
# Dataset Card for "starcoderdata_py_smol" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/starcoderdata_py_smol
[ "region:us" ]
2023-08-22T17:59:26+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "max_stars_repo_path", "dtype": "string"}, {"name": "max_stars_repo_name", "dtype": "string"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "size", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 613350299.4376, "num_examples": 129320}], "download_size": 144202092, "dataset_size": 613350299.4376}}
2023-08-22T19:16:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "starcoderdata_py_smol" More Information needed
[ "# Dataset Card for \"starcoderdata_py_smol\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"starcoderdata_py_smol\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"starcoderdata_py_smol\"\n\nMore Information needed" ]
a265538049bf81c7ba45c9524d511d5d2530af8b
# Dataset of mikuma/ไธ‰้šˆ/ไธ‰้šˆ (Kantai Collection) This is the dataset of mikuma/ไธ‰้šˆ/ไธ‰้šˆ (Kantai Collection), containing 428 images and their tags. The core tags of this character are `long_hair, twintails, black_hair, ribbon, hair_ribbon, green_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 428 | 265.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikuma_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 428 | 205.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikuma_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 793 | 370.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikuma_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 428 | 254.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikuma_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 793 | 438.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikuma_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mikuma_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, brown_skirt, long_sleeves, looking_at_viewer, pleated_skirt, serafuku, solo, brown_neckerchief, smile, blush, brown_sailor_collar, brown_shirt, cowboy_shot, open_mouth, outdoors | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | 
![](samples/1/clu1-sample4.png) | 1girl, brown_skirt, long_sleeves, neckerchief, pleated_skirt, sailor_collar, serafuku, smile, solo, belt, looking_at_viewer, searchlight, green_hair, machinery | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, serafuku, solo, blush, looking_at_viewer, smile, open_mouth, skirt | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, serafuku, smile, upper_body, long_sleeves, simple_background, brown_sailor_collar, open_mouth, solo, black_neckerchief, white_background, brown_neckerchief, green_hair, looking_at_viewer, blush | | 4 | 14 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | brown_skirt, pleated_skirt, serafuku, kneehighs, long_sleeves, white_background, simple_background, 1girl, black_eyes, solo, black_socks, blush, brown_sailor_collar, looking_at_viewer, brown_neckerchief, brown_shirt | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, cowboy_shot, halterneck, red_bikini, side-tie_bikini_bottom, small_breasts, solo, standing, looking_at_viewer, string_bikini, blue_sky, day, green_hair, smile, cloud, dated, gradient_background, one-hour_drawing_challenge | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 2girls, serafuku, green_hair, blush, open_mouth, short_hair, smile, twitter_username, skirt, solo_focus | | 7 | 16 | ![](samples/7/clu7-sample0.png) | 
![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | red_dress, 1girl, solo, polka_dot_dress, smile, white_jacket, green_hair, official_alternate_costume, belt, hooded_jacket, simple_background, white_background, full_body, long_sleeves, red_ribbon, blush, cowboy_shot, one-hour_drawing_challenge, open_mouth, red_footwear, twitter_username | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_skirt | long_sleeves | looking_at_viewer | pleated_skirt | serafuku | solo | brown_neckerchief | smile | blush | brown_sailor_collar | brown_shirt | cowboy_shot | open_mouth | outdoors | neckerchief | sailor_collar | belt | searchlight | green_hair | machinery | skirt | upper_body | simple_background | black_neckerchief | white_background | kneehighs | black_eyes | black_socks | halterneck | red_bikini | side-tie_bikini_bottom | small_breasts | standing | string_bikini | blue_sky | day | cloud | dated | gradient_background | one-hour_drawing_challenge | 2girls | short_hair | twitter_username | solo_focus | red_dress | polka_dot_dress | white_jacket | official_alternate_costume | hooded_jacket | full_body | red_ribbon | red_footwear | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:--------------------|:----------------|:-----------|:-------|:--------------------|:--------|:--------|:----------------------|:--------------|:--------------|:-------------|:-----------|:--------------|:----------------|:-------|:--------------|:-------------|:------------|:--------|:-------------|:--------------------|:--------------------|:-------------------|:------------|:-------------|:--------------|:-------------|:-------------|:-------------------------|:----------------|:-----------|:----------------|:-----------|:------|:--------|:--------|:----------------------|:-----------------------------|:---------|:-------------|:-------------------|:-------------|:------------|:------------------|:---------------|:-----------------------------|:----------------|:------------|:-------------|:---------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | | X | X | | X | X | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | 
![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | X | X | X | X | X | X | | | X | | | | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 14 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | X | X | X | | X | X | X | | | | | | | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | | | X | | X | | | | X | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | | | | X | | | X | X | | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | 7 | 16 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | | | | X | | X | X | | | X | X | | | | X | | X | | | | X | | X | | | | | | | | | | | | | | | X | | | X | | X | X | X | X | X | X | X | X |
CyberHarem/mikuma_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T18:33:41+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T18:58:18+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mikuma/ไธ‰้šˆ/ไธ‰้šˆ (Kantai Collection) =========================================== This is the dataset of mikuma/ไธ‰้šˆ/ไธ‰้šˆ (Kantai Collection), containing 428 images and their tags. The core tags of this character are 'long\_hair, twintails, black\_hair, ribbon, hair\_ribbon, green\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
42efb9bf5b0c2d4254f02ffeaabaae8a75f55e82
# Dataset Card for "50e86b1c" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/50e86b1c
[ "region:us" ]
2023-08-22T18:37:44+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 182, "num_examples": 10}], "download_size": 1336, "dataset_size": 182}}
2023-08-22T18:37:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "50e86b1c" More Information needed
[ "# Dataset Card for \"50e86b1c\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"50e86b1c\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"50e86b1c\"\n\nMore Information needed" ]
9b8148ec73262d233ea90e23b549d49767b5f0dd
# Dataset of okinami/ๆฒ–ๆณข (Kantai Collection) This is the dataset of okinami/ๆฒ–ๆณข (Kantai Collection), containing 283 images and their tags. The core tags of this character are `short_hair, glasses, multicolored_hair, green_eyes, blue-framed_eyewear, pink_hair, black_hair, brown_hair, bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 283 | 264.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okinami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 283 | 174.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okinami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 611 | 350.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okinami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 283 | 239.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okinami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 611 | 460.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okinami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/okinami_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, long_sleeves, school_uniform, smile, solo, white_shirt, blush, bowtie, looking_at_viewer, sleeveless_dress, adjusting_eyewear, grey_pantyhose, hair_ornament, sitting | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | 
![](samples/1/clu1-sample4.png) | 1girl, school_uniform, solo, white_shirt, long_sleeves, looking_at_viewer, simple_background, bowtie, open_mouth, sleeveless_dress, white_background, upper_body, adjusting_eyewear, hair_ornament | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blazer, school_uniform, solo, purple_dress, long_sleeves, looking_at_viewer, aqua_bowtie, cowboy_shot, open_mouth, simple_background, thighhighs, white_background | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, bowtie, lace-up_boots, school_uniform, solo, white_shirt, full_body, open_mouth, grey_pantyhose, long_sleeves, sleeveless_dress, white_background, purple_dress, simple_background | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, navel, solo, green_bra, looking_at_viewer, green_panties, small_breasts, polka_dot_bra, cowboy_shot, polka_dot_panties, blush, collarbone, open_mouth, simple_background, underwear_only, white_background, open_shirt | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, solo, navel, yellow_bikini, looking_at_viewer, polka_dot_bikini, cowboy_shot, open_mouth, small_breasts, simple_background | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, adapted_costume, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, small_breasts, solo, strapless_leotard, purple_leotard, 
wrist_cuffs, aqua_bowtie, grey_pantyhose, covered_navel, cowboy_shot, highleg_leotard, looking_at_viewer, rabbit_tail, simple_background, thighband_pantyhose | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | school_uniform | smile | solo | white_shirt | blush | bowtie | looking_at_viewer | sleeveless_dress | adjusting_eyewear | grey_pantyhose | hair_ornament | sitting | simple_background | open_mouth | white_background | upper_body | blazer | purple_dress | aqua_bowtie | cowboy_shot | thighhighs | lace-up_boots | full_body | navel | green_bra | green_panties | small_breasts | polka_dot_bra | polka_dot_panties | collarbone | underwear_only | open_shirt | yellow_bikini | polka_dot_bikini | adapted_costume | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | purple_leotard | wrist_cuffs | covered_navel | highleg_leotard | rabbit_tail | thighband_pantyhose | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------|:--------|:-------|:--------------|:--------|:---------|:--------------------|:-------------------|:--------------------|:-----------------|:----------------|:----------|:--------------------|:-------------|:-------------------|:-------------|:---------|:---------------|:--------------|:--------------|:-------------|:----------------|:------------|:--------|:------------|:----------------|:----------------|:----------------|:--------------------|:-------------|:-----------------|:-------------|:----------------|:-------------------|:------------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:-----------------|:--------------|:----------------|:------------------|:--------------|:----------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | 
![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | | X | X | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | | | X | | | | | | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | X | X | | X | | X | | X | | | X | X | X | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | | X | | X | | X | | | | | | X | X | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | X | | | | X | | | | | | X | X | | | | | | X | | | | X | | | X | | | | | | X | X | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | X | | | | X | | | X | | | X | 
| | | | | X | X | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/okinami_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T18:46:43+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T21:29:16+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of okinami/ๆฒ–ๆณข (Kantai Collection) ========================================= This is the dataset of okinami/ๆฒ–ๆณข (Kantai Collection), containing 283 images and their tags. The core tags of this character are 'short\_hair, glasses, multicolored\_hair, green\_eyes, blue-framed\_eyewear, pink\_hair, black\_hair, brown\_hair, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e3e30c04629ad755b8945c5b3a772b38a9aabd34
# Dataset Card for "spots_audios" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Seenka/spots_audios
[ "region:us" ]
2023-08-22T19:00:31+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "id", "dtype": "int64"}, {"name": "brand_id", "dtype": "int64"}, {"name": "brand_name", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[us, tz=UTC]"}, {"name": "confirmed_at", "dtype": "timestamp[us, tz=UTC]"}, {"name": "confirmed_by_id", "dtype": "int64"}, {"name": "clip_url", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "thumb_url", "dtype": "string"}, {"name": "clip_duration", "dtype": "float64"}, {"name": "filename", "dtype": "string"}, {"name": "embeddings", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 261559300.0, "num_examples": 417}], "download_size": 242934514, "dataset_size": 261559300.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T12:34:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "spots_audios" More Information needed
[ "# Dataset Card for \"spots_audios\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"spots_audios\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"spots_audios\"\n\nMore Information needed" ]
7c741b9ef310a80e6adce623e5a4e51a58c7a8dc
# Dataset Card for "vkscoredata" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
m1b/vkscoredata
[ "region:us" ]
2023-08-22T19:07:59+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "SCORE", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3243305276.0080004, "num_examples": 79928}, {"name": "test", "num_bytes": 826538625.188, "num_examples": 19982}], "download_size": 4061274094, "dataset_size": 4069843901.196}}
2023-08-22T19:13:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vkscoredata" More Information needed
[ "# Dataset Card for \"vkscoredata\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vkscoredata\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vkscoredata\"\n\nMore Information needed" ]
d439ca90825e5b4e5ef97798d9b5950e16ba7065
# Dataset Card for ValuePrism ## Dataset Description - **Paper:** https://arxiv.org/abs/2309.00779 - **Demo:** https://kaleido.allen.ai - **Repository:** https://github.com/tsor13/kaleido - **Datasheet for Datasets:** https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing - **License:** https://allenai.org/licenses/impact-mr - **Point of Contact:** [Taylor Sorensen](mailto:[email protected]) ### Dataset Summary ValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value-pluralistic modeling (e.g., [Kaleido](https://huggingface.co/tsor13/kaleido-xl)). It contains human-written situations and machine-generated candidate values, rights, and duties, along with their valences and post-hoc explanations relating them to the situations. For additional documentation, see ValuePrism's [Datasheet](https://drive.google.com/file/d/1zDWvO0NljqxBMfDAGW7Jx60Iw54bjsEE/view?usp=sharing). The dataset was created and intended for research purposes. It is openly released under AI2's ImpACT license as a medium-risk artifact. ### Supported Tasks The dataset supports 4 tasks: - **Generation (open-text)** *What values, rights, and duties are relevant for a situation?* Generate a value, right, or duty that could be considered when reasoning about the action. Values are generated one at a time, rather than in a batch. - **Relevance (2-way classification)** *Is a value relevant for a situation?* Some values are more relevant than others. - **Valence (3-way classification)** *Does the value support or oppose the action, or might it depend on context?* Disentangling the valence is critical for understanding how plural considerations may interact with a decision. - **Explanation (open-text)** *How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation. 
### Languages All data is in English. ## Dataset Structure ### Dataset Splits There are 6 data configurations: - `full`: The full structured dataset of situations paired with values, rights, and duties generated by GPT-4. Only one split with all of the data. - `generative`: Generative task train, val, and test splits. - `relevance`: Relevance task train, val, and test splits. - `valence`: Valence task train, val, and test splits. - `explanation`: Explanation task train, val, and test splits. - `mixture`: Generative, relevance, valence, and explanation tasks combined with train, val, and test splits. ### Data Fields While different configurations have different fields, these are all the corresponding fields in the dataset: - `situation` (string): A one-sentence description of a particular scenario or situation. For example, "buying some chocolate for my grandparents". - `vrd` (string): Type of instance, either "Value", "Right", or "Duty". - `text` (string): The text of the value, right, or duty. For example, "Honesty", "Right to property", "Duty to protect". - `explanation` (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, "Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them." - `valence` (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. Either "Supports", "Opposes", or "Either". - `input` (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model. - `output` (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model. ### Data Splits All configurations (except for the raw outputs in `full`) have 80%/10%/10% train/validation/test splits. 
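ValuePrism ships with fixed train/val/test splits, so there is no need to re-split; as a sketch of what an 80%/10%/10% partition looks like (the seed and shuffling scheme here are arbitrary, not the ones used to build the dataset):

```python
# A sketch of an 80%/10%/10% partition. The seed and shuffling scheme
# here are arbitrary; ValuePrism ships with fixed train/val/test splits,
# so this only illustrates the ratio.
import random

def split_80_10_10(items, seed=0):
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_80_10_10(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```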
## Dataset Creation ### Source Data #### Data Collection Situations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine-generated by GPT-4. #### Who are the source language producers? The situations are sourced from users of the Delphi user demo, for whom we do not have demographic information. ### Personal and Sensitive Information There is no personal or sensitive information in ValuePrism. ## Considerations for Using the Data ### Social Impact of Dataset We intend the dataset to be used to enable research and not to be used for real-world use or decision-making. ### Discussion of Biases The value, right, and duty data was generated by GPT-4, which is known to exhibit [biases](https://arxiv.org/pdf/2304.03738.pdf). Thus, we expect ValuePrism to inherit biases from GPT-4. That being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth. ## Additional Information 91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators. Additionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction. See [our paper](https://arxiv.org/abs/2309.00779) for more details and analysis. 
### Licensing Information ValuePrism is made available under the [**AI2 ImpACT License - Medium Risk Artifacts (“MR Agreement”)**](https://allenai.org/licenses/impact-mr) ### Citation Information Please cite [our paper](https://arxiv.org/abs/2309.00779) when using this dataset: ``` @misc{sorensen2023value, title={Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties}, author={Taylor Sorensen and Liwei Jiang and Jena Hwang and Sydney Levine and Valentina Pyatkin and Peter West and Nouha Dziri and Ximing Lu and Kavel Rao and Chandra Bhagavatula and Maarten Sap and John Tasioulas and Yejin Choi}, year={2023}, eprint={2309.00779}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` #### Raw Dataset Statistics The total, number of unique, and average number of generated values, rights, and duties per situation are shown. | **Type** | **Total** | **Unique** | **Per Situation** | |--------------|-----------|------------|--------------------| | **Situations** | 31.0k | 31.0k | 1 | | **Values** | 97.7k | 4.2k | 3.15 | | **Rights** | 49.0k | 4.6k | 1.58 | | **Duties** | 71.6k | 12.8k | 2.31 | #### Task Dataset Statistics | | **Relevance** | **Valence** | **Generation** | **Explanation** | **Mixture** | |---------------|------------|-------------|----------|-----------|-------------| | **Train** | 349k | 175k | 175k | 175k | 874k | | **Val** | 44k | 22k | 22k | 22k | 109k | | **Test** | 44k | 22k | 22k | 22k | 109k | | **Total** | 437k | 219k | 219k | 219k | 1.1M |
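The "Per Situation" column of the raw statistics is simply the total count divided by the 31.0k situations, which the k-rounded figures reproduce directly:

```python
# The "Per Situation" column is total count / 31.0k situations.
totals = {"Values": 97_700, "Rights": 49_000, "Duties": 71_600}
n_situations = 31_000

per_situation = {k: round(v / n_situations, 2) for k, v in totals.items()}
print(per_situation)  # {'Values': 3.15, 'Rights': 1.58, 'Duties': 2.31}
```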
allenai/ValuePrism
[ "size_categories:100K<n<1M", "language:en", "not-for-all-audiences", "arxiv:2309.00779", "arxiv:2304.03738", "region:us" ]
2023-08-22T19:08:41+00:00
{"annotations_creators": [{}], "language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "ValuePrism", "configs": [{"config_name": "full", "data_files": "full/*csv", "default": true}, {"config_name": "mixture", "data_files": [{"split": "train", "path": "mixture/*train.csv"}, {"split": "val", "path": "mixture/*val.csv"}, {"split": "test", "path": "mixture/*test.csv"}]}, {"config_name": "generative", "data_files": [{"split": "train", "path": "generative/*train.csv"}, {"split": "val", "path": "generative/*val.csv"}, {"split": "test", "path": "generative/*test.csv"}]}, {"config_name": "relevance", "data_files": [{"split": "train", "path": "relevance/*train.csv"}, {"split": "val", "path": "relevance/*val.csv"}, {"split": "test", "path": "relevance/*test.csv"}]}, {"config_name": "explanation", "data_files": [{"split": "train", "path": "explanation/*train.csv"}, {"split": "val", "path": "explanation/*val.csv"}, {"split": "test", "path": "explanation/*test.csv"}]}, {"config_name": "valence", "data_files": [{"split": "train", "path": "valence/*train.csv"}, {"split": "val", "path": "valence/*val.csv"}, {"split": "test", "path": "valence/*test.csv"}]}], "extra_gated_prompt": "Access to this dataset is automatically granted upon accepting the [**AI2 ImpACT License - Medium Risk Artifacts (\u201cMR Agreement\u201d)**](https://allenai.org/licenses/impact-mr) and completing all fields below.", "extra_gated_fields": {"Your full name": "text", "Organization or entity you are affiliated with": "text", "State or country you are located in": "text", "Contact email": "text", "Please describe your intended use of the medium risk artifact(s)": "text", "I UNDERSTAND that the dataset is intended for research purposes and not for real-world use-cases": "checkbox", "I AGREE to the terms and conditions of the MR Agreement above": "checkbox", "I AGREE to AI2\u2019s use of my information for legal notices and administrative matters": "checkbox", "I CERTIFY that the information I have 
provided is true and accurate": "checkbox"}, "tags": ["not-for-all-audiences"]}
2023-09-08T22:05:50+00:00
[ "2309.00779", "2304.03738" ]
[ "en" ]
TAGS #size_categories-100K<n<1M #language-English #not-for-all-audiences #arxiv-2309.00779 #arxiv-2304.03738 #region-us
Dataset Card for ValuePrism =========================== Dataset Description ------------------- * Paper: URL * Demo: URL * Repository: URL * Datasheet for Datasets: URL * License: URL * Point of Contact: Taylor Sorensen ### Dataset Summary ValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., Kaleido). It contains human-written situations and machine-generated candidate values, rights, duties, along with their valences and post-hoc explanations relating them to the situations. For additional documentation, see ValuePrism's Datasheet. The dataset was created and intended for research purposes. It is openly released under AI2’s ImpACT license as a medium risk artifact. ### Supported Tasks The dataset supports 4 tasks: * Generation (open-text) *What values, rights, and duties are relevant for a situation?* Generate a value, right, or duty that could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch. * Relevance (2-way classification) *Is a value relevant for a situation?* Some values are more relevant than others. * Valence (3-way classification) *Does the value support or oppose the action, or might it depend on context?* Disentangling the valence is critical for understanding how plural considerations may interact with a decision. * Explanation (open-text) *How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation. ### Languages All data is in English. Dataset Structure ----------------- ### Dataset Splits There are 6 data configurations: * 'full': The full structured dataset of situations paired with values, rights, and duties generated by GPT-4. Only one split with all of the data. * 'generative': Generative task train, val, and test splits. 
* 'relevance': Relevance task train, val, and test splits. * 'valence': Valence task train, val, and test splits. * 'explanation': Explanation task train, val, and test splits. * 'mixture': Generative, relevance, valence, and explanation tasks combined with train, val, and test splits. ### Data Fields While different configurations have different fields, these are all the corresponding fields in the dataset: * 'situation' (string): A one-sentence description of a particular scenario or situation. For example, "buying some chocolate for my grandparents". * 'vrd' (string): Type of instance, either "Value", "Right", or "Duty". * 'text' (string): The text of the value, right, or duty. For example, "Honesty", "Right to property", "Duty to protect". * 'explanation' (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, "Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them." * 'valence' (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. Either "Supports", "Opposes", or "Either". * 'input' (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model. * 'output' (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model. ### Data Splits All configurations (except for the raw outputs in 'full') have 80%/10%/10% train/validation/test splits. Dataset Creation ---------------- ### Source Data #### Data Collection Situations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine-generated by GPT-4. #### Who are the source language producers? The situations are sourced from users of the Delphi user demo, for whom we do not have demographic information. 
### Personal and Sensitive Information There is no personal or sensitive information in ValuePrism. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset We intend the dataset to be used to enable research and not to be used for real-world use or decision-making. ### Discussion of Biases The value, right, and duty data was generated by GPT-4, which is known to exhibit biases. Thus, we expect ValuePrism to inherit biases from GPT-4. That being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth. Additional Information ---------------------- 91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators. Additionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction. See our paper for more details and analysis. ### Licensing Information ValuePrism is made available under the AI2 ImpACT License - Medium Risk Artifacts (“MR Agreement”) Please cite our paper when using this dataset: #### Raw Dataset Statistics The total, number of unique, and average number of generated values, rights, and duties per situation are shown. #### Task Dataset Statistics
[ "### Dataset Summary\n\n\nValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., Kaleido). It contains human-written situations and machine-generated candidate values, rights, duties, along with their valences and post-hoc explanations relating them to the situations.\nFor additional documentation, see ValuePrism's Datasheet.\n\n\nThe dataset was created and intended for research purposes. It is openly released under AI2’s ImpACT license as a medium risk artifact.", "### Supported Tasks\n\n\nThe dataset supports 4 tasks:\n\n\n* Generation (open-text)\n*What values, rights, and duties are relevant for a situation?*\nGenerate a value, right, or duty\nthat could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch.\n* Relevance (2-way classification)\n*Is a value relevant for a situation?* Some values are more relevant than others.\n* Valence (3-way classification)\n*Does the value support or oppose the action, or might it depend on context?*\nDisentangling the valence is critical for understanding how plural considerations may interact with a decision.\n* Explanation (open-text)\n*How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation.", "### Languages\n\n\nAll data is in English.\n\n\nDataset Structure\n-----------------", "### Dataset Splits\n\n\nThere are 6 data configurations:\n\n\n* 'full': The full structured dataset of situations paired with values, rights, and duties generated by GPT-4. 
Only one split with all of the data.\n* 'generative': Generative task train, val, and test splits.\n* 'relevance': Relevance task train, val, and test splits.\n* 'valence': Valence task train, val, and test splits.\n* 'explanation': Explanation task train, val, and test splits.\n* 'mixture': Generative, relevance, valence, and explanation tasks combined with train, val, and test splits.", "### Data Fields\n\n\nWhile different configurations have different fields, these are all the corresponding fields in the dataset:\n\n\n* 'situation' (string): A one-sentence description of a particular scenario or situation. For example, \"buying some chocolate for my grandparents\".\n* 'vrd' (string): Type of instance, either \"Value\", \"Right\", or \"Duty\".\n* 'text' (string): The text of the value, right, or duty. For example, \"Honesty\", \"Right to property\", \"Duty to protect\".\n* 'explanation' (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, \"Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them.\"\n* 'valence' (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. 
Either \"Supports\", \"Opposes\", or \"Either\".\n* 'input' (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model.\n* 'output' (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model.", "### Data Splits\n\n\nAll configurations (except for the raw outputs in 'full') have 80%/10%/10% train/validation/test splits.\n\n\nDataset Creation\n----------------", "### Source Data", "#### Data Collection\n\n\nSituations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine-generated by GPT-4.", "#### Who are the source language producers?\n\n\nThe situations are sourced from users of the Delphi user demo, for whom we do not have demographic information.", "### Personal and Sensitive Information\n\n\nThere is no personal or sensitive information in ValuePrism.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWe intend the dataset to be used to enable research and not to be used for real-world use or decision-making.", "### Discussion of Biases\n\n\nThe value, right, and duty data was generated by GPT-4, which is known to exhibit biases. 
Thus, we expect ValuePrism to inherit biases from GPT-4.\nThat being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth.\n\n\nAdditional Information\n----------------------\n\n\n91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators.\nAdditionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction.\nSee our paper for more details and analysis.", "### Licensing Information\n\n\nValuePrism is made available under the AI2\nImpACT License - Medium Risk Artifacts (“MR\nAgreement”)\n\n\nPlease cite our paper when using this dataset:", "#### Raw Dataset Statistics\n\n\nThe total, number of unique, and average number of generated values, rights, and duties per situation are shown.", "#### Task Dataset Statistics" ]
[ "TAGS\n#size_categories-100K<n<1M #language-English #not-for-all-audiences #arxiv-2309.00779 #arxiv-2304.03738 #region-us \n", "### Dataset Summary\n\n\nValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., Kaleido). It contains human-written situations and machine-generated candidate values, rights, duties, along with their valences and post-hoc explanations relating them to the situations.\nFor additional documentation, see ValuePrism's Datasheet.\n\n\nThe dataset was created and intended for research purposes. It is openly released under AI2’s ImpACT license as a medium risk artifact.", "### Supported Tasks\n\n\nThe dataset supports 4 tasks:\n\n\n* Generation (open-text)\n*What values, rights, and duties are relevant for a situation?*\nGenerate a value, right, or duty\nthat could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch.\n* Relevance (2-way classification)\n*Is a value relevant for a situation?* Some values are more relevant than others.\n* Valence (3-way classification)\n*Does the value support or oppose the action, or might it depend on context?*\nDisentangling the valence is critical for understanding how plural considerations may interact with a decision.\n* Explanation (open-text)\n*How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation.", "### Languages\n\n\nAll data is in English.\n\n\nDataset Structure\n-----------------", "### Dataset Splits\n\n\nThere are 6 data configurations:\n\n\n* 'full': The full structured dataset of situations paired with values, rights, and duties generated by GPT-4. 
Only one split with all of the data.\n* 'generative': Generative task train, val, and test splits.\n* 'relevance': Relevance task train, val, and test splits.\n* 'valence': Valence task train, val, and test splits.\n* 'explanation': Explanation task train, val, and test splits.\n* 'mixture': Generative, relevance, valence, and explanation tasks combined with train, val, and test splits.", "### Data Fields\n\n\nWhile different configurations have different fields, these are all the corresponding fields in the dataset:\n\n\n* 'situation' (string): A one-sentence description of a particular scenario or situation. For example, \"buying some chocolate for my grandparents\".\n* 'vrd' (string): Type of instance, either \"Value\", \"Right\", or \"Duty\".\n* 'text' (string): The text of the value, right, or duty. For example, \"Honesty\", \"Right to property\", \"Duty to protect\".\n* 'explanation' (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, \"Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them.\"\n* 'valence' (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. 
Either \"Supports\", \"Opposes\", or \"Either\".\n* 'input' (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model.\n* 'output' (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model.", "### Data Splits\n\n\nAll configurations (except for the raw outputs in 'full') have 80%/10%/10% train/validation/test splits.\n\n\nDataset Creation\n----------------", "### Source Data", "#### Data Collection\n\n\nSituations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine-generated by GPT-4.", "#### Who are the source language producers?\n\n\nThe situations are sourced from users of the Delphi user demo, for whom we do not have demographic information.", "### Personal and Sensitive Information\n\n\nThere is no personal or sensitive information in ValuePrism.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWe intend the dataset to be used to enable research and not to be used for real-world use or decision-making.", "### Discussion of Biases\n\n\nThe value, right, and duty data was generated by GPT-4, which is known to exhibit biases. 
Thus, we expect ValuePrism to inherit biases from GPT-4.\nThat being said, we have tried to prompt the model to output a diversity of values in an attempt to mitigate bias with breadth.\n\n\nAdditional Information\n----------------------\n\n\n91% of values, rights, and duties were marked as high-quality by 3/3 annotators, and 87% of valence scores were marked as correct by 3/3 annotators.\nAdditionally, we perform a human study on the data and do not find large disparities in agreement between demographic groups tested, although future work in this area is a promising direction.\nSee our paper for more details and analysis.", "### Licensing Information\n\n\nValuePrism is made available under the AI2\nImpACT License - Medium Risk Artifacts (“MR\nAgreement”)\n\n\nPlease cite our paper when using this dataset:", "#### Raw Dataset Statistics\n\n\nThe total, number of unique, and average number of generated values, rights, and duties per situation are shown.", "#### Task Dataset Statistics" ]
[ 47, 148, 197, 17, 160, 315, 43, 4, 48, 34, 31, 34, 188, 40, 32, 8 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-English #not-for-all-audiences #arxiv-2309.00779 #arxiv-2304.03738 #region-us \n### Dataset Summary\n\n\nValuePrism was created 1) to understand what pluralistic human values, rights, and duties are already present in large language models, and 2) to serve as a resource to support open, value pluralistic modeling (e.g., Kaleido). It contains human-written situations and machine-generated candidate values, rights, duties, along with their valences and post-hoc explanations relating them to the situations.\nFor additional documentation, see ValuePrism's Datasheet.\n\n\nThe dataset was created and intended for research purposes. It is openly released under AI2’s ImpACT license as a medium risk artifact.### Supported Tasks\n\n\nThe dataset supports 4 tasks:\n\n\n* Generation (open-text)\n*What values, rights, and duties are relevant for a situation?*\nGenerate a value, right, or duty\nthat could be considered when reasoning about the action. Values are generated one at a time, as opposed to a batch.\n* Relevance (2-way classification)\n*Is a value relevant for a situation?* Some values are more relevant than others.\n* Valence (3-way classification)\n*Does the value support or oppose the action, or might it depend on context?*\nDisentangling the valence is critical for understanding how plural considerations may interact with a decision.\n* Explanation (open-text)\n*How does the value relate to the action?* Generating a post-hoc rationale for why a value consideration may relate to a situation.### Languages\n\n\nAll data is in English.\n\n\nDataset Structure\n-----------------", "passage: ### Dataset Splits\n\n\nThere are 6 data configurations:\n\n\n* 'full': The full structured dataset of situations paired with values, rights, and duties generated by GPT-4. 
Only one split with all of the data.\n* 'generative': Generative task train, val, and test splits.\n* 'relevance': Relevance task train, val, and test splits.\n* 'valence': Valence task train, val, and test splits.\n* 'explanation': Explanation task train, val, and test splits.\n* 'mixture': Generative, relevance, valence, and explanation tasks combined with train, val, and test splits.### Data Fields\n\n\nWhile different configurations have different fields, these are all the corresponding fields in the dataset:\n\n\n* 'situation' (string): A one-sentence description of a particular scenario or situation. For example, \"buying some chocolate for my grandparents\".\n* 'vrd' (string): Type of instance, either \"Value\", \"Right\", or \"Duty\".\n* 'text' (string): The text of the value, right, or duty. For example, \"Honesty\", \"Right to property\", \"Duty to protect\".\n* 'explanation' (string): A post-hoc explanation of why the specified value, right, or duty is relevant or important in the given situation. For example, \"Buying chocolate for your grandparents can strengthen family connections and show appreciation for your relationship with them.\"\n* 'valence' (string): Indicates whether the value, right, or duty supports or opposes the action in the situation, or if it might depend on the context. 
Either \"Supports\", \"Opposes\", or \"Either\".\n* 'input' (string): For the seq2seq task (generative, relevance, valence, explanation), the input to the model.\n* 'output' (string): For the seq2seq task (generative, relevance, valence, explanation), the output of the model.### Data Splits\n\n\nAll configurations (except for the raw outputs in 'full') have 80%/10%/10% train/validation/test splits.\n\n\nDataset Creation\n----------------### Source Data#### Data Collection\n\n\nSituations are sourced from the Delphi user demo, and candidate values, rights, duties, their valences, and explanations connecting them to the situations are machine-generated by GPT-4.#### Who are the source language producers?\n\n\nThe situations are sourced from users of the Delphi user demo, for whom we do not have demographic information.### Personal and Sensitive Information\n\n\nThere is no personal or sensitive information in ValuePrism.\n\n\nConsiderations for Using the Data\n---------------------------------" ]
d4659690b179d74099aa1ec11151ed66da28fc36
# Dataset of sakawa/้…’ๅŒ‚ (Kantai Collection) This is the dataset of sakawa/้…’ๅŒ‚ (Kantai Collection), containing 269 images and their tags. The core tags of this character are `short_hair, purple_hair, brown_eyes, purple_eyes, ahoge`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 269 | 175.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakawa_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 269 | 126.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakawa_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 491 | 234.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakawa_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 269 | 165.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakawa_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 491 | 292.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakawa_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/sakawa_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | pleated_skirt, red_skirt, serafuku, 2girls, anchor_symbol, sailor_collar, white_gloves, open_mouth, black_hair, midriff, black_necktie, long_hair, simple_background, smile, solo_focus, white_background, sleeveless_shirt, garter_straps, thighhighs | | 1 | 7 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, red_skirt, serafuku, simple_background, sleeveless_shirt, smile, solo, white_background, white_gloves, anchor_symbol, black_necktie, black_sailor_collar, looking_at_viewer, open_mouth, pleated_skirt, navel | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, anchor_symbol, pleated_skirt, red_skirt, serafuku, white_gloves, black_necktie, black_sailor_collar, solo, looking_at_viewer, sleeveless_shirt, garter_straps, simple_background, single_thighhigh, cowboy_shot, white_background, character_name, dated, full_body, midriff, standing | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, breasts, looking_at_viewer, solo, strapless_leotard, wrist_cuffs, simple_background, white_background, alternate_costume, ass, black_pantyhose, fake_tail, rabbit_tail, red_leotard, black_necktie, dated, high_heels, smile, thighband_pantyhose, twitter_username | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, navel, solo, looking_at_viewer, cowboy_shot, small_breasts, open_mouth, simple_background, smile, twitter_username, white_bikini | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | enmaided, smile, blush, looking_at_viewer, maid_apron, yellow_eyes, 1girl, solo, multiple_girls | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 
| pleated_skirt | red_skirt | serafuku | 2girls | anchor_symbol | sailor_collar | white_gloves | open_mouth | black_hair | midriff | black_necktie | long_hair | simple_background | smile | solo_focus | white_background | sleeveless_shirt | garter_straps | thighhighs | 1girl | solo | black_sailor_collar | looking_at_viewer | navel | single_thighhigh | cowboy_shot | character_name | dated | full_body | standing | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | breasts | strapless_leotard | wrist_cuffs | alternate_costume | ass | black_pantyhose | fake_tail | rabbit_tail | red_leotard | high_heels | thighband_pantyhose | twitter_username | small_breasts | white_bikini | enmaided | blush | maid_apron | yellow_eyes | multiple_girls | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------|:------------|:-----------|:---------|:----------------|:----------------|:---------------|:-------------|:-------------|:----------|:----------------|:------------|:--------------------|:--------|:-------------|:-------------------|:-------------------|:----------------|:-------------|:--------|:-------|:----------------------|:--------------------|:--------|:-------------------|:--------------|:-----------------|:--------|:------------|:-----------|:------------------|:-------------------|:----------------|:--------------|:----------|:--------------------|:--------------|:--------------------|:------|:------------------|:------------|:--------------|:--------------|:-------------|:----------------------|:-------------------|:----------------|:---------------|:-----------|:--------|:-------------|:--------------|:-----------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | 
X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | X | X | | | X | | X | X | | X | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | X | | | X | X | | X | | | X | X | X | | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | | | | | | | | | | X | | X | X | | X | | | | X | X | | X | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | | | | | | | X | | | | | X | X | | | | | | X | X | | X | X | | X | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | | | | | | | | | | | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X |
CyberHarem/sakawa_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T19:22:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T18:58:53+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sakawa/酒匂 (Kantai Collection) ======================================== This is the dataset of sakawa/酒匂 (Kantai Collection), containing 269 images and their tags. The core tags of this character are 'short\_hair, purple\_hair, brown\_eyes, purple\_eyes, ahoge', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
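The cluster tables in these cards store each cluster's tags as a single comma-separated string (e.g. "1girl, red_skirt, serafuku, ..."). A minimal helper for turning such a string into a clean, deduplicated tag list can be sketched as follows; the function name and the sample row are illustrative, not part of the dataset tooling:

```python
def parse_tag_string(raw: str) -> list[str]:
    """Split a comma-separated tag string into a deduplicated, ordered tag list."""
    seen = set()
    tags = []
    for tag in raw.split(","):
        tag = tag.strip()
        if tag and tag not in seen:
            seen.add(tag)
            tags.append(tag)
    return tags

# Illustrative row in the same shape as the cluster tables above
row = "1girl, red_skirt, serafuku, simple_background, solo, solo"
print(parse_tag_string(row))
```

Keeping first-seen order (rather than sorting) preserves the tag ranking the cluster tables imply.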
97cd02f377d6bc750487fc6373d5685e6031c50c
# Dataset of maikaze/่ˆž้ขจ (Kantai Collection) This is the dataset of maikaze/่ˆž้ขจ (Kantai Collection), containing 430 images and their tags. The core tags of this character are `blonde_hair, ponytail, short_hair, green_eyes, bangs, parted_bangs, ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 430 | 267.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maikaze_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 430 | 201.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maikaze_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 849 | 392.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maikaze_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 430 | 253.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maikaze_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 849 | 472.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maikaze_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/maikaze_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, red_ribbon, school_uniform, smile, solo, white_gloves, white_shirt, looking_at_viewer, short_ponytail, short_sleeves, simple_background, blouse, scrunchie, black_vest, dress_shirt, one-hour_drawing_challenge, pleated_skirt, twitter_username, upper_body, white_background, 
open_mouth, black_skirt, grey_vest, neck_ribbon, bowtie | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_skirt, black_vest, cowboy_shot, looking_at_viewer, pleated_skirt, red_ribbon, school_uniform, short_ponytail, short_sleeves, solo, white_gloves, white_shirt, grey_skirt, neck_ribbon, simple_background, smile, white_background, blue_eyes, blush, open_mouth | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | black_skirt, pleated_skirt, red_ribbon, school_uniform, white_shirt, 1girl, black_vest, red_bowtie, simple_background, solo, standing_split, white_background, white_gloves, dress_shirt, flexible, looking_at_viewer, neck_ribbon, short_ponytail, short_sleeves, kneehighs, panties, black_socks, blouse, brown_footwear, open_mouth, loafers, smile | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, pleated_skirt, school_uniform, smile, solo, vest, white_gloves, looking_at_viewer, white_background, side_ponytail, simple_background, character_name | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, looking_at_viewer, school_uniform, smile, solo, vest, white_gloves, bow, open_mouth, pleated_skirt, blush | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blue_eyes, open_mouth, school_uniform, solo, vest, white_gloves, pleated_skirt, shirt, smile, looking_at_viewer | | 6 | 10 | 
![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 2girls, open_mouth, school_uniform, vest, white_gloves, pleated_skirt, ^_^, :d, bow | | 7 | 17 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | alternate_costume, 1girl, solo, white_shirt, looking_at_viewer, purple_jacket, smile, hair_ribbon, open_mouth, pleated_skirt, purple_skirt, simple_background, long_sleeves, white_background, black_pantyhose, blush, cowboy_shot, gift_box, heart-shaped_box, valentine, collarbone, holding_gift, black_leggings, open_jacket, sneakers, twitter_username | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, navel, solo, blush, collarbone, panties, smile, underwear_only, looking_at_viewer, open_mouth, sports_bra, barefoot, blue_eyes, cowboy_shot, full_body, scrunchie, small_breasts, white_bra | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | red_ribbon | school_uniform | smile | solo | white_gloves | white_shirt | looking_at_viewer | short_ponytail | short_sleeves | simple_background | blouse | scrunchie | black_vest | dress_shirt | one-hour_drawing_challenge | pleated_skirt | twitter_username | upper_body | white_background | open_mouth | black_skirt | grey_vest | neck_ribbon | bowtie | cowboy_shot | grey_skirt | blue_eyes | blush | red_bowtie | standing_split | flexible | kneehighs | panties | black_socks | brown_footwear | loafers | vest | side_ponytail | character_name | bow | shirt | 2girls | ^_^ | :d | alternate_costume | purple_jacket | hair_ribbon | purple_skirt | long_sleeves | black_pantyhose | gift_box | heart-shaped_box | valentine | collarbone | holding_gift | 
black_leggings | open_jacket | sneakers | navel | underwear_only | sports_bra | barefoot | full_body | small_breasts | white_bra | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-----------------|:--------|:-------|:---------------|:--------------|:--------------------|:-----------------|:----------------|:--------------------|:---------|:------------|:-------------|:--------------|:-----------------------------|:----------------|:-------------------|:-------------|:-------------------|:-------------|:--------------|:------------|:--------------|:---------|:--------------|:-------------|:------------|:--------|:-------------|:-----------------|:-----------|:------------|:----------|:--------------|:-----------------|:----------|:-------|:----------------|:-----------------|:------|:--------|:---------|:------|:-----|:--------------------|:----------------|:--------------|:---------------|:---------------|:------------------|:-----------|:-------------------|:------------|:-------------|:---------------|:-----------------|:--------------|:-----------|:--------|:-----------------|:-------------|:-----------|:------------|:----------------|:------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | X | | | X | | | X | X | X | | X | | X | X | X | X | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | X | | | X | X | X | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | X | X | | X | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | X | | X | | | | | | | | | X | | | | X | | | | | | | | X | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | X | X | | X | | | | | | | | | X | | | | X | | | | | | | X | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 10 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | X | | | X | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | X | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 7 | 17 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | X | X | | X | X | | | X | | | | | | X | X | | X | X | 
| | | | X | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | X | X | | | X | | | | | X | | | | | | | | X | | | | | X | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X |
CyberHarem/maikaze_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T19:43:15+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T14:06:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of maikaze/舞風 (Kantai Collection) ========================================= This is the dataset of maikaze/舞風 (Kantai Collection), containing 430 images and their tags. The core tags of this character are 'blonde\_hair, ponytail, short\_hair, green\_eyes, bangs, parted\_bangs, ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
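The package tables in these cards pair an image count with an archive size (e.g. 430 images at 267.60 MiB for the maikaze `raw` package). A quick way to sanity-check such a table is to compute the average size per image; this sketch simply hard-codes the figures from the table row above:

```python
def avg_kib_per_image(size_mib: float, n_images: int) -> float:
    """Average per-image archive size in KiB (1 MiB = 1024 KiB)."""
    return size_mib * 1024 / n_images

# Figures from the maikaze 'raw' package row above
print(round(avg_kib_per_image(267.60, 430), 1))  # roughly 637 KiB per image
```

Values far outside the few-hundred-KiB range would suggest a mismatch between the stated image count and the archive.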
99c7ec6bc0755aa2bf686313a7bd35c4cb018933
# Dataset Card for "rust-github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
matteopilotto/rust-github-issues
[ "region:us" ]
2023-08-22T19:51:24+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "issue_num", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "labels", "dtype": "string"}, {"name": "comment_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 632530, "num_examples": 52}], "download_size": 339415, "dataset_size": 632530}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T05:13:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "rust-github-issues" More Information needed
[ "# Dataset Card for \"rust-github-issues\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"rust-github-issues\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"rust-github-issues\"\n\nMore Information needed" ]
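The `dataset_info` metadata above records byte counts per split (632,530 bytes over 52 examples for the train split). A small consistency check over a metadata fragment of that shape can be sketched as follows; the fragment is copied from this record, while the check itself is illustrative:

```python
import json

metadata = json.loads(
    '{"splits": [{"name": "train", "num_bytes": 632530, "num_examples": 52}],'
    ' "dataset_size": 632530}'
)

# dataset_size should equal the sum of the per-split byte counts
split_total = sum(s["num_bytes"] for s in metadata["splits"])
assert split_total == metadata["dataset_size"]

# Average record size, useful for spotting implausible splits
avg_bytes = metadata["dataset_size"] / metadata["splits"][0]["num_examples"]
print(round(avg_bytes))  # about 12164 bytes per issue record
```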
fb2c86b3a7f6944c912e5ed3536c4fdeee83dea6
# Dataset Card for "bge_base_features_cot" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
prateeky2806/bge_base_features_cot
[ "region:us" ]
2023-08-22T19:58:50+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 422099575, "num_examples": 100000}], "download_size": 421912325, "dataset_size": 422099575}}
2023-08-22T19:59:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bge_base_features_cot" More Information needed
[ "# Dataset Card for \"bge_base_features_cot\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bge_base_features_cot\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bge_base_features_cot\"\n\nMore Information needed" ]
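For embedding datasets like this one, the split metadata also gives a rough storage footprint per row (here 422,099,575 bytes across 100,000 examples). A hypothetical helper for that estimate:

```python
def bytes_per_row(num_bytes: int, num_examples: int) -> float:
    """Average on-disk bytes per dataset row."""
    return num_bytes / num_examples

# Figures from the dataset_info above; a float32 embedding of dimension d
# alone would account for 4 * d of these bytes, the rest being id and text.
per_row = bytes_per_row(422_099_575, 100_000)
print(round(per_row, 1))  # about 4221.0 bytes per row
```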
c84b7de01fffbf87727844759608d3eb3bfd97c6
# Dataset Card for "bge_base_features_alpaca" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
prateeky2806/bge_base_features_alpaca
[ "region:us" ]
2023-08-22T20:00:58+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 182429113, "num_examples": 52002}], "download_size": 204415093, "dataset_size": 182429113}}
2023-08-22T20:01:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bge_base_features_alpaca" More Information needed
[ "# Dataset Card for \"bge_base_features_alpaca\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bge_base_features_alpaca\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bge_base_features_alpaca\"\n\nMore Information needed" ]
1dd2702cfc25eb848de2a99563191179d46708c5
# Dataset of chikuma/็ญ‘ๆ‘ฉ (Kantai Collection) This is the dataset of chikuma/็ญ‘ๆ‘ฉ (Kantai Collection), containing 309 images and their tags. The core tags of this character are `long_hair, black_hair, brown_eyes, breasts, large_breasts, bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 309 | 249.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chikuma_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 309 | 180.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chikuma_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 611 | 327.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chikuma_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 309 | 234.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chikuma_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 611 | 404.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chikuma_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/chikuma_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_gloves, pelvic_curtain, puffy_short_sleeves, single_glove, solo, bowtie, looking_at_viewer, side_slit, simple_background, single_elbow_glove, smile, no_panties, twitter_username, black_skirt, blush, long_skirt, red_bow | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_gloves, hair_between_eyes, looking_at_viewer, no_panties, pelvic_curtain, puffy_short_sleeves, simple_background, single_glove, solo, white_background, black_eyes, long_skirt, side_slit, single_elbow_glove, smile, open_mouth, red_bowtie, belt, blush, cowboy_shot | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_gloves, solo, looking_at_viewer, hair_between_eyes, military_uniform, single_elbow_glove, pelvic_curtain, simple_background, smile, white_background, red_bowtie, puffy_short_sleeves, skirt, blush, shirt, single_glove, twitter_username, upper_body | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, ass, black_gloves, elbow_gloves, looking_back, pelvic_curtain, solo, blush, from_behind, looking_at_viewer, bent_over, side_slit, twitter_username, boots, brown_hair, no_panties | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, bell, solo, reindeer_antlers, reindeer_costume, gloves, looking_at_viewer, open_mouth, blush, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | pelvic_curtain | puffy_short_sleeves | single_glove | solo | bowtie | looking_at_viewer | side_slit | simple_background | single_elbow_glove | smile | no_panties | twitter_username | black_skirt | blush | long_skirt | red_bow | hair_between_eyes | white_background | black_eyes | open_mouth | red_bowtie | belt | cowboy_shot | military_uniform | skirt | shirt | upper_body | ass | elbow_gloves | looking_back | from_behind | bent_over | boots | brown_hair | 
bell | reindeer_antlers | reindeer_costume | gloves | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------|:----------------------|:---------------|:-------|:---------|:--------------------|:------------|:--------------------|:---------------------|:--------|:-------------|:-------------------|:--------------|:--------|:-------------|:----------|:--------------------|:-------------------|:-------------|:-------------|:-------------|:-------|:--------------|:-------------------|:--------|:--------|:-------------|:------|:---------------|:---------------|:--------------|:------------|:--------|:-------------|:-------|:-------------------|:-------------------|:---------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | X | X | X | X | X | | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | | X | | X | X | X | | X | | X | | | X | X | | | X | | | X | X | X | X | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | X | | X | X | | | | X | X | | X | | | | | | | | 
| | | | | | X | X | X | X | X | X | X | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | | | X | | X | | | | X | | | | X | | | | | | X | | | | | | | | | | | | | | | X | X | X | X |
CyberHarem/chikuma_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T20:16:15+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T20:04:20+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of chikuma/็ญ‘ๆ‘ฉ (Kantai Collection) ========================================= This is the dataset of chikuma/็ญ‘ๆ‘ฉ (Kantai Collection), containing 309 images and their tags. The core tags of this character are 'long\_hair, black\_hair, brown\_eyes, breasts, large\_breasts, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
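The IMG+TXT packages listed for each character pair every image with a plain-text tag file. As a minimal sketch (not part of the card itself), this walks an extracted package, assuming each image has a same-stem `.txt` sidecar; verify against the actual archive layout before relying on it:

```python
import glob
import os

# Directory where one of the IMG+TXT zips was extracted (hypothetical name).
dataset_dir = 'dataset_dir'

# Pair each image with its same-stem tag file, if present.
for img in sorted(glob.glob(os.path.join(dataset_dir, '*.png'))):
    txt = os.path.splitext(img)[0] + '.txt'
    if os.path.exists(txt):
        with open(txt, encoding='utf-8') as f:
            tags = f.read().strip()
        print(img, '->', tags)
```

The raw (Waifuc-Raw) packages carry tags in per-item metadata instead, so this pairing step applies only to the IMG+TXT variants.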
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
095989e61878af80fbe03bd93f2baf3bef7a1ec3
# Dataset of hatsuharu/ๅˆๆ˜ฅ/ๅˆๆ˜ฅ (Kantai Collection) This is the dataset of hatsuharu/ๅˆๆ˜ฅ/ๅˆๆ˜ฅ (Kantai Collection), containing 365 images and their tags. The core tags of this character are `purple_hair, long_hair, ponytail, purple_eyes, very_long_hair, short_eyebrows, ribbon, hair_ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 365 | 333.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsuharu_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 365 | 239.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsuharu_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 796 | 476.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsuharu_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 365 | 314.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsuharu_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 796 | 591.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsuharu_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/hatsuharu_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, folded_fan, hikimayu, looking_at_viewer, sailor_dress, solo, black_gloves, black_thighhighs, smile, simple_background, white_background, character_name, zettai_ryouiki | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, folded_fan, hikimayu, looking_at_viewer, sailor_dress, shide, solo, white_gloves, sleeveless, smile | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, sailor_dress, simple_background, solo, white_background, black_gloves, hikimayu, shide, twitter_username, white_sailor_collar, dated, one-hour_drawing_challenge, white_dress, cowboy_shot, folded_fan, upper_body | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, hikimayu, looking_at_viewer, sailor_dress, shide, sleeveless_dress, solo, white_dress, white_gloves, white_sailor_collar, folded_fan, simple_background, 
holding, twitter_username | | 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, alternate_costume, solo, obi, hikimayu, looking_at_viewer, smile, floral_print, shide, print_kimono, wide_sleeves, open_mouth, twitter_username | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, looking_at_viewer, rabbit_tail, shide, strapless_leotard, white_background, wrist_cuffs, alternate_costume, simple_background, black_bowtie, black_leotard, black_thighhighs, cleavage, cowboy_shot, large_breasts, ass, hikimayu, medium_breasts | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, alternate_costume, hikimayu, solo, white_shirt, cowboy_shot, gym_shirt, gym_uniform, looking_at_viewer, name_tag, dated, medium_breasts, open_mouth, red_buruma, short_sleeves, simple_background, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | folded_fan | hikimayu | looking_at_viewer | sailor_dress | solo | black_gloves | black_thighhighs | smile | simple_background | white_background | character_name | zettai_ryouiki | shide | white_gloves | sleeveless | twitter_username | white_sailor_collar | dated | one-hour_drawing_challenge | white_dress | cowboy_shot | upper_body | sleeveless_dress | holding | alternate_costume | obi | floral_print | print_kimono | wide_sleeves | open_mouth | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | rabbit_tail | strapless_leotard | wrist_cuffs | black_bowtie | black_leotard | cleavage | large_breasts | ass | medium_breasts | 
white_shirt | gym_shirt | gym_uniform | name_tag | red_buruma | short_sleeves | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-----------|:--------------------|:---------------|:-------|:---------------|:-------------------|:--------|:--------------------|:-------------------|:-----------------|:-----------------|:--------|:---------------|:-------------|:-------------------|:----------------------|:--------|:-----------------------------|:--------------|:--------------|:-------------|:-------------------|:----------|:--------------------|:------|:---------------|:---------------|:---------------|:-------------|:------------------|:-------------------|:----------------|:--------------|:--------------|:--------------------|:--------------|:---------------|:----------------|:-----------|:----------------|:------|:-----------------|:--------------|:------------|:--------------|:-----------|:-------------|:----------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | | | X | X | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | X | | | | X | | | | X | X | | X | X | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | | X | | | X | | | | | X | | | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | | X | | X | | X | X | | | X | | | | | | | | X | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | X | | X | | | | X | X | | | | | | | | X | | | X | | | | X | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X |
CyberHarem/hatsuharu_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T20:54:27+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T16:19:46+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hatsuharu/ๅˆๆ˜ฅ/ๅˆๆ˜ฅ (Kantai Collection) ============================================== This is the dataset of hatsuharu/ๅˆๆ˜ฅ/ๅˆๆ˜ฅ (Kantai Collection), containing 365 images and their tags. The core tags of this character are 'purple\_hair, long\_hair, ponytail, purple\_eyes, very\_long\_hair, short\_eyebrows, ribbon, hair\_ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e5c31b9aa3dfa848f90e195e72a9fa3dbae25c84
# Dataset of hachijou (Kantai Collection) This is the dataset of hachijou (Kantai Collection), containing 171 images and their tags. The core tags of this character are `hair_ribbon, ribbon, brown_hair, short_hair, black_ribbon, blue_eyes, red_ribbon, neck_ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 171 | 138.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hachijou_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 171 | 95.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hachijou_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 410 | 211.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hachijou_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 171 | 128.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hachijou_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 410 | 266.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hachijou_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/hachijou_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, green_jacket, green_sailor_collar, green_skirt, long_sleeves, serafuku, solo, looking_at_viewer, pleated_skirt, holding_lollipop, pom_pom_(clothes), cowboy_shot, simple_background, white_background, smile, twitter_username, drawstring, open_mouth | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, green_jacket, green_sailor_collar, green_skirt, long_sleeves, pleated_skirt, serafuku, simple_background, solo, cowboy_shot, looking_at_viewer, white_background, blush | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | green_jacket, green_skirt, long_sleeves, pleated_skirt, serafuku, 1girl, green_sailor_collar, solo, simple_background, lollipop, white_socks, looking_at_viewer, white_background, black_footwear, kneehighs, mouth_hold, pom_pom_(clothes), shoes, sitting, full_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | green_jacket | green_sailor_collar | green_skirt | long_sleeves | serafuku | solo | looking_at_viewer | pleated_skirt | holding_lollipop | pom_pom_(clothes) | cowboy_shot | simple_background | white_background | smile | twitter_username | drawstring | open_mouth | blush | lollipop | white_socks | black_footwear | kneehighs | mouth_hold | shoes | sitting | full_body | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:----------------------|:--------------|:---------------|:-----------|:-------|:--------------------|:----------------|:-------------------|:--------------------|:--------------|:--------------------|:-------------------|:--------|:-------------------|:-------------|:-------------|:--------|:-----------|:--------------|:-----------------|:------------|:-------------|:--------|:----------|:------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X 
| X | X | X | X | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | | | X | X | X | | | | | X | | | | | | | | | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | X | | X | | X | X | | | | | | X | X | X | X | X | X | X | X |
CyberHarem/hachijou_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T20:55:50+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T08:33:39+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hachijou (Kantai Collection) ======================================= This is the dataset of hachijou (Kantai Collection), containing 171 images and their tags. The core tags of this character are 'hair\_ribbon, ribbon, brown\_hair, short\_hair, black\_ribbon, blue\_eyes, red\_ribbon, neck\_ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
ab09c3cf01dc599b6ae3440d27d1aec7fcb476b9
# Stock Price Chat Dataset This is the dataset for <https://github.com/getorca/stock_price_chat>. More details are available there.
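The card's metadata lists JSONL files for the train and eval splits (`stock_prices_cleaned.jsonl`, `cleaned_eval_stock_price.jsonl`). As a minimal sketch, such a split can be streamed one record per line; the field names below are hypothetical stand-ins, since the real schema is documented in the linked repo:

```python
import io
import json

# Hypothetical rows standing in for stock_prices_cleaned.jsonl; the real
# field names are defined in the repo linked above, not guessed here.
sample = io.StringIO(
    '{"ticker": "AAPL", "question": "...", "answer": "..."}\n'
    '{"ticker": "MSFT", "question": "...", "answer": "..."}\n'
)

# JSONL holds one JSON object per line, so splits can be streamed
# without loading the whole file into memory.
records = [json.loads(line) for line in sample if line.strip()]
print([r["ticker"] for r in records])  # ['AAPL', 'MSFT']
```

The same loop works on a real split by replacing `sample` with an open file handle.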
winddude/stock_price_chat_ds
[ "size_categories:1K<n<10K", "language:en", "license:mit", "finance", "stock", "region:us" ]
2023-08-22T21:30:03+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "tags": ["finance", "stock"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "stock_prices_cleaned.jsonl"}, {"split": "eval", "path": "cleaned_eval_stock_price.jsonl"}, {"split": "tokenized", "path": "stock_prices_tokenized.hf/"}]}]}
2023-08-22T21:41:34+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-mit #finance #stock #region-us
# Stock Price Chat Dataset This is the dataset for <URL>. More details are available there.
[ "# Stock Price Chat Dataset\n\nThis is the dataset for <URL More details are available there." ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-mit #finance #stock #region-us \n", "# Stock Price Chat Dataset\n\nThis is the dataset for <URL More details are available there." ]
[ 32, 20 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-mit #finance #stock #region-us \n# Stock Price Chat Dataset\n\nThis is the dataset for <URL More details are available there." ]
8ca02b74b76ad2f42a2affe69d948f312cb4e7f0
# Dataset of kikuzuki/่Šๆœˆ (Kantai Collection) This is the dataset of kikuzuki/่Šๆœˆ (Kantai Collection), containing 226 images and their tags. The core tags of this character are `long_hair, white_hair, red_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 226 | 153.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuzuki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 226 | 109.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuzuki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 472 | 217.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuzuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 226 | 143.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuzuki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 472 | 273.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikuzuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kikuzuki_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_serafuku, simple_background, solo, black_pantyhose, long_sleeves, white_background, black_skirt, looking_at_viewer, blush, crescent_pin, pleated_skirt, sitting | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_pantyhose, black_sailor_collar, black_serafuku, black_skirt, long_sleeves, looking_at_viewer, solo, cowboy_shot, crescent_pin, pleated_skirt, white_necktie, black_shirt, brown_eyes, white_neckerchief, belt, blush, simple_background, grey_background, hair_between_eyes, one-hour_drawing_challenge, white_background, twitter_username | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_serafuku, solo, crescent, skirt, blush, long_sleeves, open_mouth, black_pantyhose, looking_at_viewer | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, solo, looking_at_viewer, cowboy_shot, simple_background, flat_chest, brown_eyes, one-piece_swimsuit, blush, school_swimsuit, white_background, bikini, twitter_username | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_serafuku | simple_background | solo | black_pantyhose | long_sleeves | white_background | black_skirt | looking_at_viewer | blush | crescent_pin | pleated_skirt | sitting | black_sailor_collar | cowboy_shot | white_necktie | black_shirt | brown_eyes | white_neckerchief | belt | grey_background | hair_between_eyes | one-hour_drawing_challenge | twitter_username | crescent | skirt | open_mouth | flat_chest | one-piece_swimsuit | school_swimsuit | bikini | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:-------|:------------------|:---------------|:-------------------|:--------------|:--------------------|:--------|:---------------|:----------------|:----------|:----------------------|:--------------|:----------------|:--------------|:-------------|:--------------------|:-------|:------------------|:--------------------|:-----------------------------|:-------------------|:-----------|:--------|:-------------|:-------------|:---------------------|:------------------|:---------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | X | X | | | X | X | | | | | | | | | | | | | | | X | X | X | | | | | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | | X | | X | X | | | | | X | | | X | | | | | | X | | | | X | X | X | X |
CyberHarem/kikuzuki_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T21:32:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T10:21:45+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kikuzuki/菊月 (Kantai Collection) ========================================== This is the dataset of kikuzuki/菊月 (Kantai Collection), containing 226 images and their tags. The core tags of this character are 'long\_hair, white\_hair, red\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
cb9d83bd84193760120e0fdf1ca1518b5c95a83e
# Dataset of natori/名取/名取 (Kantai Collection) This is the dataset of natori/名取/名取 (Kantai Collection), containing 268 images and their tags. The core tags of this character are `short_hair, brown_hair, brown_eyes, hairband, white_hairband, breasts, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 268 | 198.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 268 | 142.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 544 | 269.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 268 | 185.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 544 | 334.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/natori_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, detached_sleeves, serafuku, solo, brown_sailor_collar, dated, looking_at_viewer, twitter_username, one-hour_drawing_challenge, simple_background, white_background, pleated_skirt, red_skirt, white_thighhighs, cowboy_shot, black_neckerchief, brown_neckerchief, shirt | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, solo, cleavage, collarbone, looking_at_viewer, yukata, bare_shoulders, off_shoulder, open_mouth, simple_background, twitter_username, white_background, obi, tears, upper_body | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, yukata, obi, solo, bagged_fish, goldfish, smile, uchiwa, twitter_username, alternate_costume, blush, open_mouth, wide_sleeves | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, cleavage, simple_background, solo, blush, cowboy_shot, white_background, bikini, collarbone, looking_at_viewer, navel, cropped_legs, open_mouth, twitter_username | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, gym_uniform, open_mouth, red_buruma, short_sleeves, solo, white_shirt, looking_at_viewer, gym_shirt, simple_background | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, hetero, open_mouth, penis, solo_focus, 1boy, sex, bar_censor, cum_in_pussy, nipples, vaginal, cowgirl_position, girl_on_top, sweat, tears | | 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | fake_animal_ears, playboy_bunny, rabbit_ears, cleavage, detached_collar, 1girl, solo, strapless_leotard, looking_at_viewer, black_bowtie, blush, simple_background, white_background, wrist_cuffs, black_leotard, 
sitting, alternate_costume, black_pantyhose, cowboy_shot | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | detached_sleeves | serafuku | solo | brown_sailor_collar | dated | looking_at_viewer | twitter_username | one-hour_drawing_challenge | simple_background | white_background | pleated_skirt | red_skirt | white_thighhighs | cowboy_shot | black_neckerchief | brown_neckerchief | shirt | blush | cleavage | collarbone | yukata | bare_shoulders | off_shoulder | open_mouth | obi | tears | upper_body | bagged_fish | goldfish | smile | uchiwa | alternate_costume | wide_sleeves | bikini | navel | cropped_legs | gym_uniform | red_buruma | short_sleeves | white_shirt | gym_shirt | hetero | penis | solo_focus | 1boy | sex | bar_censor | cum_in_pussy | nipples | vaginal | cowgirl_position | girl_on_top | sweat | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | strapless_leotard | black_bowtie | wrist_cuffs | black_leotard | sitting | black_pantyhose | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:-----------|:-------|:----------------------|:--------|:--------------------|:-------------------|:-----------------------------|:--------------------|:-------------------|:----------------|:------------|:-------------------|:--------------|:--------------------|:--------------------|:--------|:--------|:-----------|:-------------|:---------|:-----------------|:---------------|:-------------|:------|:--------|:-------------|:--------------|:-----------|:--------|:---------|:--------------------|:---------------|:---------|:--------|:---------------|:--------------|:-------------|:----------------|:--------------|:------------|:---------|:--------|:-------------|:-------|:------|:-------------|:---------------|:----------|:----------|:-------------------|:--------------|:--------|:-------------------|:----------------|:--------------|:------------------|:--------------------|:---------------|:--------------|:----------------|:----------|:------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | | | X | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | 
![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | | | | X | | | | | | | | | | | X | | | X | | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | | X | X | | X | X | | | | X | | | | X | X | X | | | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | | | X | | | X | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | | | | | | | | | | | | | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | X | | | X | | | X | X | | | | X | | | | X | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/natori_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T22:15:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T21:22:10+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of natori/名取/名取 (Kantai Collection) =========================================== This is the dataset of natori/名取/名取 (Kantai Collection), containing 268 images and their tags. The core tags of this character are 'short\_hair, brown\_hair, brown\_eyes, hairband, white\_hairband, breasts, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
94e7558fee3211bc3b69da8c761430648fae369e
# Dataset of littorio (Kantai Collection) This is the dataset of littorio (Kantai Collection), containing 338 images and their tags. The core tags of this character are `long_hair, brown_hair, brown_eyes, breasts, large_breasts, ponytail, wavy_hair, hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 338 | 320.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/littorio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 338 | 221.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/littorio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 726 | 441.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/littorio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 338 | 298.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/littorio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 726 | 562.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/littorio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/littorio_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, detached_sleeves, necktie, solo, bare_shoulders, smile, miniskirt, looking_at_viewer, thighhighs, garter_straps, zettai_ryouiki, open_mouth, machinery, turret | | 1 | 12 | ![](samples/1/clu1-sample0.png) 
| ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bare_shoulders, detached_sleeves, necktie, smile, solo, blush, looking_at_viewer, upper_body, white_background, open_mouth, simple_background | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, hair_flower, looking_at_viewer, solo, cleavage, red_bikini, navel, simple_background, smile, white_background, blush, criss-cross_halter, open_mouth, cowboy_shot | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, drinking_glass, hair_flower, red_bikini, solo, cleavage, drinking_straw, looking_at_viewer, white_background, hibiscus, navel, simple_background, smile, blush, holding, lemon_slice, open_mouth, cowboy_shot, hair_between_eyes, sarong | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, white_sweater, blush, alternate_costume, long_sleeves, looking_at_viewer, simple_background, gift_box, heart-shaped_box, holding_gift, skirt, white_background, one-hour_drawing_challenge, smile, valentine, black_footwear, full_body, hair_between_eyes, twitter_username, upper_body | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | santa_costume, santa_hat, 1girl, looking_at_viewer, sack, solo, white_gloves, alternate_costume, christmas, cleavage, smile, bare_shoulders, blush, open_mouth, striped_scarf, boots, red_dress | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | 
![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1boy, 1girl, blush, hetero, open_mouth, solo_focus, sweat, detached_sleeves, garter_straps, miniskirt, necktie, nipples, sex_from_behind, thighhighs, vaginal, bare_shoulders, clothes_lift, mosaic_censoring, naval_uniform, penis, arm_grab, bouncing_breasts, breasts_out, cum_in_pussy, full_nelson, male_pubic_hair, open_fly, panties_aside, reverse_suspended_congress, saliva, straddling, striped, symbol-shaped_pupils, tears | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | detached_sleeves | necktie | solo | bare_shoulders | smile | miniskirt | looking_at_viewer | thighhighs | garter_straps | zettai_ryouiki | open_mouth | machinery | turret | blush | upper_body | white_background | simple_background | hair_flower | cleavage | red_bikini | navel | criss-cross_halter | cowboy_shot | drinking_glass | drinking_straw | hibiscus | holding | lemon_slice | hair_between_eyes | sarong | white_sweater | alternate_costume | long_sleeves | gift_box | heart-shaped_box | holding_gift | skirt | one-hour_drawing_challenge | valentine | black_footwear | full_body | twitter_username | santa_costume | santa_hat | sack | white_gloves | christmas | striped_scarf | boots | red_dress | 1boy | hetero | solo_focus | sweat | nipples | sex_from_behind | vaginal | clothes_lift | mosaic_censoring | naval_uniform | penis | arm_grab | bouncing_breasts | breasts_out | cum_in_pussy | full_nelson | male_pubic_hair | open_fly | panties_aside | reverse_suspended_congress | saliva | straddling | striped | symbol-shaped_pupils | tears | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:----------|:-------|:-----------------|:--------|:------------|:--------------------|:-------------|:----------------|:-----------------|:-------------|:------------|:---------|:--------|:-------------|:-------------------|:--------------------|:--------------|:-----------|:-------------|:--------|:---------------------|:--------------|:-----------------|:-----------------|:-----------|:----------|:--------------|:--------------------|:---------|:----------------|:--------------------|:---------------|:-----------|:-------------------|:---------------|:--------|:-----------------------------|:------------|:-----------------|:------------|:-------------------|:----------------|:------------|:-------|:---------------|:------------|:----------------|:--------|:------------|:-------|:---------|:-------------|:--------|:----------|:------------------|:----------|:---------------|:-------------------|:----------------|:--------|:-----------|:-------------------|:--------------|:---------------|:--------------|:------------------|:-----------|:----------------|:-----------------------------|:---------|:-------------|:----------|:-----------------------|:--------| | 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | | | | X | | | X | X | X | X | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | | X | | X | | | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | X | | X | | | | X | | | X | | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | | X | | X | | | | | | | X | X | X | X | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | X | X | | X | | | | X | | | X | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | X | | X | | X | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/littorio_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T22:25:50+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T01:22:39+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of littorio (Kantai Collection) ======================================= This is the dataset of littorio (Kantai Collection), containing 338 images and their tags. The core tags of this character are 'long\_hair, brown\_hair, brown\_eyes, breasts, large\_breasts, ponytail, wavy\_hair, hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
bd8b7de704b8f4a21679542e27d08f85471d1139
# Dataset Card for "acholi-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/acholi-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 617369, "num_examples": 4804}, {"name": "valid", "num_bytes": 13082, "num_examples": 101}, {"name": "test", "num_bytes": 12723, "num_examples": 96}], "download_size": 281385, "dataset_size": 643174}}
2023-08-25T13:18:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "acholi-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"acholi-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"acholi-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"acholi-crowd-validated-paths\"\n\nMore Information needed" ]
d3470867049f19e0d09b27e6364114aac48b89df
# Dataset Card for "lugbara-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/lugbara-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:44+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 584439, "num_examples": 4772}, {"name": "valid", "num_bytes": 11769, "num_examples": 98}, {"name": "test", "num_bytes": 11561, "num_examples": 95}], "download_size": 293237, "dataset_size": 607769}}
2023-08-25T13:18:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lugbara-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"lugbara-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lugbara-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"lugbara-crowd-validated-paths\"\n\nMore Information needed" ]
56731deb166dfde9fddbb6a2f8318822e3c9dfc2
# Dataset Card for "luganda-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/luganda-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:47+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 667665, "num_examples": 5019}, {"name": "valid", "num_bytes": 14080, "num_examples": 103}, {"name": "test", "num_bytes": 13463, "num_examples": 99}], "download_size": 299492, "dataset_size": 695208}}
2023-08-25T13:18:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "luganda-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"luganda-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"luganda-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"luganda-crowd-validated-paths\"\n\nMore Information needed" ]
7148b9566f0d25d2a7ff3035c8569c67e851251f
# Dataset Card for "runyankole-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/runyankole-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 685134, "num_examples": 4831}, {"name": "valid", "num_bytes": 14297, "num_examples": 101}, {"name": "test", "num_bytes": 14075, "num_examples": 96}], "download_size": 303064, "dataset_size": 713506}}
2023-08-25T13:18:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "runyankole-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"runyankole-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"runyankole-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"runyankole-crowd-validated-paths\"\n\nMore Information needed" ]
c5e39506dcc2b3e8cf3033466e198f97017af94e
# Dataset Card for "ateso-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/ateso-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:54+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 691846, "num_examples": 4829}, {"name": "valid", "num_bytes": 14470, "num_examples": 100}, {"name": "test", "num_bytes": 13881, "num_examples": 96}], "download_size": 274753, "dataset_size": 720197}}
2023-08-25T13:18:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ateso-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"ateso-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ateso-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ateso-crowd-validated-paths\"\n\nMore Information needed" ]
f66e8bf0242f32c2b886ebb3ded37cc9eee64f83
# Dataset Card for "english-crowd-validated-paths" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mekaneeky/english-crowd-validated-paths
[ "region:us" ]
2023-08-22T22:29:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Path", "dtype": "string"}, {"name": "Key", "dtype": "int64"}, {"name": "Speaker", "dtype": "string"}, {"name": "Transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 502461, "num_examples": 4783}, {"name": "valid", "num_bytes": 10633, "num_examples": 100}, {"name": "test", "num_bytes": 10734, "num_examples": 96}], "download_size": 279912, "dataset_size": 523828}}
2023-08-25T13:18:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "english-crowd-validated-paths" More Information needed
[ "# Dataset Card for \"english-crowd-validated-paths\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"english-crowd-validated-paths\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"english-crowd-validated-paths\"\n\nMore Information needed" ]
fe4edb7751c06fc8651896eef037b8564e370aec
# Dataset of irako (Kantai Collection) This is the dataset of irako (Kantai Collection), containing 315 images and their tags. The core tags of this character are `long_hair, ponytail, green_hair, ribbon, hair_ribbon, green_eyes, breasts, antenna_hair, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 315 | 330.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 315 | 193.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 704 | 397.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 315 | 288.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 704 | 551.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/irako_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/irako_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, simple_background, solo, cleavage, looking_at_viewer, white_background, black_bikini, bikini_skirt, navel, cowboy_shot, smile, twitter_username, collarbone, one-hour_drawing_challenge | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | 
![](samples/1/clu1-sample4.png) | 1girl, cleavage, looking_at_viewer, navel, open_clothes, solo, bikini_skirt, black_bikini, collarbone, cowboy_shot, simple_background, smile, white_jacket, open_mouth | | 2 | 15 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, alternate_costume, green_skirt, looking_at_viewer, smile, solo, simple_background, white_shirt, long_skirt, long_sleeves, blouse, full_body, white_background, bow, standing, open_mouth | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, alternate_costume, pleated_skirt, sailor_collar, simple_background, solo, black_footwear, black_pantyhose, long_sleeves, looking_at_viewer, neckerchief, white_background, black_serafuku, black_skirt, full_body, loafers, smile, standing, open_mouth | | 4 | 17 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blue_skirt, kappougi, simple_background, solo, looking_at_viewer, smile, full_body, pink_shirt, white_background, sandals, tabi, standing, long_sleeves, open_mouth, red_necktie, white_socks, food, tray | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, hair_bow, kappougi, looking_at_viewer, solo, blush, black_hair, necktie, twitter_username, upper_body, smile | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 2girls, kappougi, open_mouth, ahoge, necktie, :d, black_hair, brown_hair, hair_bow, pink_shirt, upper_body | | 7 | 
9 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, bow, looking_at_viewer, open_mouth, santa_costume, solo, smile, alternate_costume, red_dress, christmas, fur-trimmed_dress, blush, cake, plate, simple_background, white_background, white_thighhighs | | 8 | 13 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, rabbit_ears, solo, detached_collar, looking_at_viewer, playboy_bunny, wrist_cuffs, cleavage, fake_animal_ears, simple_background, rabbit_tail, strapless_leotard, alternate_costume, black_pantyhose, smile, white_background, black_leotard, cowboy_shot, red_bowtie, sitting | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | simple_background | solo | cleavage | looking_at_viewer | white_background | black_bikini | bikini_skirt | navel | cowboy_shot | smile | twitter_username | collarbone | one-hour_drawing_challenge | open_clothes | white_jacket | open_mouth | alternate_costume | green_skirt | white_shirt | long_skirt | long_sleeves | blouse | full_body | bow | standing | pleated_skirt | sailor_collar | black_footwear | black_pantyhose | neckerchief | black_serafuku | black_skirt | loafers | blue_skirt | kappougi | pink_shirt | sandals | tabi | red_necktie | white_socks | food | tray | hair_bow | blush | black_hair | necktie | upper_body | 2girls | ahoge | :d | brown_hair | santa_costume | red_dress | christmas | fur-trimmed_dress | cake | plate | white_thighhighs | rabbit_ears | detached_collar | playboy_bunny | wrist_cuffs | fake_animal_ears | rabbit_tail | strapless_leotard | black_leotard | red_bowtie | sitting | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-----------|:--------------------|:-------------------|:---------------|:---------------|:--------|:--------------|:--------|:-------------------|:-------------|:-----------------------------|:---------------|:---------------|:-------------|:--------------------|:--------------|:--------------|:-------------|:---------------|:---------|:------------|:------|:-----------|:----------------|:----------------|:-----------------|:------------------|:--------------|:-----------------|:--------------|:----------|:-------------|:-----------|:-------------|:----------|:-------|:--------------|:--------------|:-------|:-------|:-----------|:--------|:-------------|:----------|:-------------|:---------|:--------|:-----|:-------------|:----------------|:------------|:------------|:--------------------|:-------|:--------|:-------------------|:--------------|:------------------|:----------------|:--------------|:-------------------|:--------------|:--------------------|:----------------|:-------------|:----------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | X | X | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 15 | ![](samples/2/clu2-sample0.png) | 
![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | X | X | | | | | X | | | | | | X | X | | | | X | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 17 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | X | X | | | | | X | | | | | | X | | | | | X | | X | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | | | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 7 | 9 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | X | | X | X | | | | | X | | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | 
| | X | X | X | X | X | X | X | | | | | | | | | | | | 8 | 13 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | X | X | X | X | | | | X | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/irako_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T22:57:17+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:38:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of irako (Kantai Collection) ==================================== This is the dataset of irako (Kantai Collection), containing 315 images and their tags. The core tags of this character are 'long\_hair, ponytail, green\_hair, ribbon, hair\_ribbon, green\_eyes, breasts, antenna\_hair, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d70c8ac284d961071242b62bce848eca7a27ea8e
# Dataset Card for "HowFarAreYou_3DSpeakerTrain" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BigSuperbPrivate/HowFarAreYou_3DSpeakerTrain
[ "region:us" ]
2023-08-22T23:02:52+00:00
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1999470432.1925142, "num_examples": 4284}, {"name": "validation", "num_bytes": 216314162.76748583, "num_examples": 477}], "download_size": 2048624215, "dataset_size": 2215784594.96}}
2023-08-22T23:22:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "HowFarAreYou_3DSpeakerTrain" More Information needed
[ "# Dataset Card for \"HowFarAreYou_3DSpeakerTrain\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"HowFarAreYou_3DSpeakerTrain\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"HowFarAreYou_3DSpeakerTrain\"\n\nMore Information needed" ]
e86ff47be69f127ca36f07458d6bd0a6453cbb53
# Dataset of matsukaze/ๆพ้ขจ (Kantai Collection) This is the dataset of matsukaze/ๆพ้ขจ (Kantai Collection), containing 500 images and their tags. The core tags of this character are `long_hair, two_side_up, brown_eyes, grey_hair, white_hair, hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 668.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsukaze_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 383.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsukaze_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1262 | 842.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsukaze_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 596.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsukaze_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1262 | 1.17 GiB | [Download](https://huggingface.co/datasets/CyberHarem/matsukaze_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/matsukaze_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, hair_tubes, sailor_dress, solo, upper_body, white_sailor_collar, brown_dress, looking_at_viewer, simple_background, smokestack_hair_ornament, mini_hat, white_background, choker, blush, lifebuoy, 
smile, grey_neckerchief | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, choker, hair_tubes, looking_at_viewer, sailor_dress, solo, white_background, simple_background, upper_body, hairband | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, garter_straps, looking_at_viewer, sailor_dress, short_dress, simple_background, solo, white_background, zettai_ryouiki, striped_thighhighs, choker, gloves, hair_tubes, chibi | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, garter_straps, looking_at_viewer, sailor_dress, short_dress, solo, striped, thighhighs, zettai_ryouiki, choker | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_panties, blush, choker, garter_straps, hair_tubes, looking_at_viewer, small_breasts, nipples, sailor_dress, solo, thighhighs, navel, open_clothes, side-tie_panties, single_glove, very_long_hair, white_gloves, fang, open_mouth, simple_background | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, brown_dress, hair_tubes, heart, mini_hat, red_thighhighs, sailor_dress, short_dress, single_glove, solo, striped_thighhighs, bar_censor, female_masturbation, garter_straps, open_mouth, pussy_juice, simple_background, spread_legs, white_background, white_gloves, black_panties, fingering, twitter_username, grey_neckerchief, lifebuoy_ornament, long_sleeves, navel, 
panties_aside, sailor_collar, smokestack_hair_ornament | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, looking_at_viewer, solo, hair_tubes, simple_background, small_breasts, navel, white_background, black_bikini, cowboy_shot, hair_between_eyes, nipples, nude, open_mouth | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blush, hair_between_eyes, hair_tubes, solo, wide_sleeves, alternate_costume, long_sleeves, looking_at_viewer, open_mouth, smile, holding, bangs, floral_print, hair_ornament, obi, print_kimono, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_tubes | sailor_dress | solo | upper_body | white_sailor_collar | brown_dress | looking_at_viewer | simple_background | smokestack_hair_ornament | mini_hat | white_background | choker | blush | lifebuoy | smile | grey_neckerchief | hairband | garter_straps | short_dress | zettai_ryouiki | striped_thighhighs | gloves | chibi | striped | thighhighs | black_panties | small_breasts | nipples | navel | open_clothes | side-tie_panties | single_glove | very_long_hair | white_gloves | fang | open_mouth | heart | red_thighhighs | bar_censor | female_masturbation | pussy_juice | spread_legs | fingering | twitter_username | lifebuoy_ornament | long_sleeves | panties_aside | sailor_collar | black_bikini | cowboy_shot | hair_between_eyes | nude | wide_sleeves | alternate_costume | holding | bangs | floral_print | hair_ornament | obi | print_kimono | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:---------------|:-------|:-------------|:----------------------|:--------------|:--------------------|:--------------------|:---------------------------|:-----------|:-------------------|:---------|:--------|:-----------|:--------|:-------------------|:-----------|:----------------|:--------------|:-----------------|:---------------------|:---------|:--------|:----------|:-------------|:----------------|:----------------|:----------|:--------|:---------------|:-------------------|:---------------|:-----------------|:---------------|:-------|:-------------|:--------|:-----------------|:-------------|:----------------------|:--------------|:--------------|:------------|:-------------------|:--------------------|:---------------|:----------------|:----------------|:---------------|:--------------|:--------------------|:-------|:---------------|:--------------------|:----------|:--------|:---------------|:----------------|:------|:---------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | | X | X | | | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | 
![](samples/2/clu2-sample4.png) | X | X | X | X | | | | X | X | | | X | X | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | | | X | | | | | X | | | | | | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | | | | X | X | | | | X | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | X | | | X | | X | X | X | X | | X | | | X | | X | X | | X | | | | | X | | | X | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | X | | | | X | X | | | X | | X | | | | | | | | | | | | | | X | X | X | | | | | | | X | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | X | X | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | X | | X | X | X | X | X | X | X | X |
CyberHarem/matsukaze_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T23:33:57+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T23:43:55+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of matsukaze/ๆพ้ขจ (Kantai Collection) =========================================== This is the dataset of matsukaze/ๆพ้ขจ (Kantai Collection), containing 500 images and their tags. The core tags of this character are 'long\_hair, two\_side\_up, brown\_eyes, grey\_hair, white\_hair, hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
5c25693c92ae0a045c5f145b4d9b37b7da88c9a0
# Dataset of nowaki/野分/野分 (Kantai Collection)

This is the dataset of nowaki/野分/野分 (Kantai Collection), containing 433 images and their tags.

The core tags of this character are `grey_hair, asymmetrical_hair, grey_eyes, bangs, flipped_hair, swept_bangs, long_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download                                                                                                                 | Type       | Description                                                          |
|:-----------------|-------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              | 433    | 308.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nowaki_kantaicollection/resolve/main/dataset-raw.zip)              | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 433    | 223.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nowaki_kantaicollection/resolve/main/dataset-800.zip)              | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  | 871    | 440.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nowaki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 433    | 289.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nowaki_kantaicollection/resolve/main/dataset-1200.zip)             | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 | 871    | 549.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nowaki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nowaki_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----|
| 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_pantyhose, black_skirt, pleated_skirt, solo, white_background, white_shirt, yellow_necktie, black_vest, simple_background, dress_shirt, white_gloves, looking_at_viewer, school_uniform, standing, full_body |
| 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) |
![](samples/1/clu1-sample4.png) | 1girl, pleated_skirt, school_uniform, shirt, solo, vest, white_gloves, yellow_necktie, black_pantyhose, machinery, simple_background, short_sleeves, white_background | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_skirt, black_vest, dress_shirt, paw_gloves, pleated_skirt, solo, white_shirt, wolf_ears, wolf_tail, yellow_necktie, black_pantyhose, adapted_costume, simple_background, white_background, cowboy_shot, long_sleeves, looking_at_viewer, fake_animal_ears | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_vest, short_sleeves, solo, upper_body, white_shirt, yellow_necktie, simple_background, looking_at_viewer, school_uniform, white_background, open_vest, white_gloves, hair_between_eyes | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, 1girl, solo, yellow_necktie, black_leotard, black_pantyhose, cowboy_shot, simple_background, small_breasts, strapless_leotard, wrist_cuffs, covered_navel, looking_at_viewer, rabbit_tail, white_gloves | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, solo, looking_at_viewer, simple_background, underwear_only, white_background, black_bra, cat_cutout, cat_lingerie, cleavage_cutout, navel, black_panties, cat_ears, cat_tail, collarbone, cowboy_shot, frilled_bra, hair_between_eyes, small_breasts | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | 
![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, solo, yukata, obi, looking_at_viewer, white_kimono, upper_body, blush, wide_sleeves, holding, smile, alternate_costume, green_eyes, open_mouth, simple_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_pantyhose | black_skirt | pleated_skirt | solo | white_background | white_shirt | yellow_necktie | black_vest | simple_background | dress_shirt | white_gloves | looking_at_viewer | school_uniform | standing | full_body | shirt | vest | machinery | short_sleeves | paw_gloves | wolf_ears | wolf_tail | adapted_costume | cowboy_shot | long_sleeves | fake_animal_ears | upper_body | open_vest | hair_between_eyes | detached_collar | playboy_bunny | rabbit_ears | black_leotard | small_breasts | strapless_leotard | wrist_cuffs | covered_navel | rabbit_tail | blush | underwear_only | black_bra | cat_cutout | cat_lingerie | cleavage_cutout | navel | black_panties | cat_ears | cat_tail | collarbone | frilled_bra | yukata | obi | white_kimono | wide_sleeves | holding | smile | alternate_costume | green_eyes | open_mouth | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:--------------|:----------------|:-------|:-------------------|:--------------|:-----------------|:-------------|:--------------------|:--------------|:---------------|:--------------------|:-----------------|:-----------|:------------|:--------|:-------|:------------|:----------------|:-------------|:------------|:------------|:------------------|:--------------|:---------------|:-------------------|:-------------|:------------|:--------------------|:------------------|:----------------|:--------------|:----------------|:----------------|:--------------------|:--------------|:----------------|:--------------|:--------|:-----------------|:------------|:-------------|:---------------|:------------------|:--------|:----------------|:-----------|:-----------|:-------------|:--------------|:---------|:------|:---------------|:---------------|:----------|:--------|:--------------------|:-------------|:-------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | | X | | X | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X 
| X | X | X | X | X | X | X | | X | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | X | X | X | X | X | X | | X | X | X | | | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | X | | | X | | X | | X | X | | | | | | | | | | | | X | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | X | X | | | | X | | | X | | | | | | | | | | | | X | | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | X | | | | | X | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
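The IMG+TXT packages above ship plain images with sidecar tag files rather than waifuc metadata. A minimal waifuc-free loader for them can be sketched as follows — note this is an illustrative sketch, not part of the official tooling: the helper name `load_img_txt` is ours, and the assumption that each image has a same-named `.txt` file of comma-separated tags is the usual convention for such packages, not something this card states explicitly.

```python
import os


def load_img_txt(dataset_dir):
    """Pair each image in ``dataset_dir`` with its same-named ``.txt`` tag file.

    Returns a list of ``(image_path, tags)`` tuples, where ``tags`` is a list
    of tag strings. The sidecar-.txt layout and comma-separated tags are an
    assumed convention for the IMG+TXT packages.
    """
    image_exts = {'.png', '.jpg', '.jpeg', '.webp'}
    samples = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in image_exts:
            continue  # skip the .txt sidecars and anything else
        tag_path = os.path.join(dataset_dir, stem + '.txt')
        tags = []
        if os.path.exists(tag_path):
            with open(tag_path, encoding='utf-8') as f:
                tags = [t.strip() for t in f.read().split(',') if t.strip()]
        samples.append((os.path.join(dataset_dir, name), tags))
    return samples


# usage sketch (mirrors the waifuc example above, with a different archive):
#
#   import zipfile
#   from huggingface_hub import hf_hub_download
#   zip_file = hf_hub_download(
#       repo_id='CyberHarem/nowaki_kantaicollection',
#       repo_type='dataset',
#       filename='dataset-800.zip',
#   )
#   with zipfile.ZipFile(zip_file, 'r') as zf:
#       zf.extractall('dataset_800')
#   for image_path, tags in load_img_txt('dataset_800'):
#       print(image_path, tags)
```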
CyberHarem/nowaki_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-22T23:36:37+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T16:10:25+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of nowaki/้‡Žๅˆ†/้‡Žๅˆ† (Kantai Collection) =========================================== This is the dataset of nowaki/้‡Žๅˆ†/้‡Žๅˆ† (Kantai Collection), containing 433 images and their tags. The core tags of this character are 'grey\_hair, asymmetrical\_hair, grey\_eyes, bangs, flipped\_hair, swept\_bangs, long\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
d18dcba3e943584b9387dee99dfe2c9ffc987307
# Dataset of arare/霰/霰 (Kantai Collection)

This is the dataset of arare/霰/霰 (Kantai Collection), containing 275 images and their tags.

The core tags of this character are `short_hair, black_hair, brown_eyes, hat`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download                                                                                                                | Type       | Description                                                          |
|:-----------------|-------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              | 275    | 156.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arare_kantaicollection/resolve/main/dataset-raw.zip)              | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 275    | 121.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arare_kantaicollection/resolve/main/dataset-800.zip)              | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  | 551    | 236.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arare_kantaicollection/resolve/main/dataset-stage3-p480-800.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 275    | 149.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arare_kantaicollection/resolve/main/dataset-1200.zip)             | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 | 551    | 283.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arare_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/arare_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_dress, full_body, long_sleeves, pinafore_dress, simple_background, solo, white_background, white_shirt, white_socks, looking_at_viewer, machinery, rigging, torpedo_tubes, twitter_username, adapted_turret, cannon, one-hour_drawing_challenge, shoes, standing, torpedo_launcher |
| 1 | 5 | ![](samples/1/clu1-sample0.png) |
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 2girls, long_sleeves, pinafore_dress, school_uniform, solo_focus, white_shirt, belt, black_dress, looking_at_viewer, white_background, bangs, blush, clenched_hand, grey_hair, long_hair, open_mouth, simple_background | | 2 | 34 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, suspenders, solo, school_uniform, looking_at_viewer, arm_warmers, short_sleeves, pleated_skirt, white_background, white_shirt | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, solo, blue_one-piece_swimsuit, simple_background, collarbone, cowboy_shot, bangs, white_background, competition_school_swimsuit, small_breasts, black_one-piece_swimsuit, covered_navel, standing | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_dress | full_body | long_sleeves | pinafore_dress | simple_background | solo | white_background | white_shirt | white_socks | looking_at_viewer | machinery | rigging | torpedo_tubes | twitter_username | adapted_turret | cannon | one-hour_drawing_challenge | shoes | standing | torpedo_launcher | 2girls | school_uniform | solo_focus | belt | bangs | blush | clenched_hand | grey_hair | long_hair | open_mouth | suspenders | arm_warmers | short_sleeves | pleated_skirt | blue_one-piece_swimsuit | collarbone | cowboy_shot | competition_school_swimsuit | small_breasts | black_one-piece_swimsuit | covered_navel | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:------------|:---------------|:-----------------|:--------------------|:-------|:-------------------|:--------------|:--------------|:--------------------|:------------|:----------|:----------------|:-------------------|:-----------------|:---------|:-----------------------------|:--------|:-----------|:-------------------|:---------|:-----------------|:-------------|:-------|:--------|:--------|:----------------|:------------|:------------|:-------------|:-------------|:--------------|:----------------|:----------------|:--------------------------|:-------------|:--------------|:------------------------------|:----------------|:---------------------------|:----------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | | X | X | X | | X | X | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | 2 | 34 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | | | X | X | X | | X | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | | X | X | X | | | X 
| | | | | | | | | X | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X |
CyberHarem/arare_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T00:11:18+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T20:01:25+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of arare/้œฐ/้œฐ (Kantai Collection) ======================================== This is the dataset of arare/้œฐ/้œฐ (Kantai Collection), containing 275 images and their tags. The core tags of this character are 'short\_hair, black\_hair, brown\_eyes, hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
498c8a97ca34df73cdd3a31c24d082b9bd734b22
# Dataset Card for "slt-lyrics-audio" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gmenon/slt-lyrics-audio
[ "region:us" ]
2023-08-23T00:17:52+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5522199699.224, "num_examples": 9538}, {"name": "eval", "num_bytes": 299870166.0, "num_examples": 507}], "download_size": 5411106600, "dataset_size": 5822069865.224}}
2023-08-23T00:31:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slt-lyrics-audio" More Information needed
[ "# Dataset Card for \"slt-lyrics-audio\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slt-lyrics-audio\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slt-lyrics-audio\"\n\nMore Information needed" ]
1bb391b25a4bfa1e1f5bfae41f8559c9d6448d18
# Dataset of anchorage_water_oni/泊地水鬼/泊地水鬼 (Kantai Collection)

This is the dataset of anchorage_water_oni/泊地水鬼/泊地水鬼 (Kantai Collection), containing 62 images and their tags.

The core tags of this character are `black_hair, long_hair, horns, white_skin, very_long_hair, red_eyes, colored_skin, pale_skin, breasts, multicolored_hair, hair_between_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download                                                                                                                              | Type       | Description                                                          |
|:-----------------|-------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              | 62     | 67.84 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/anchorage_water_oni_kantaicollection/resolve/main/dataset-raw.zip)              | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 62     | 44.27 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/anchorage_water_oni_kantaicollection/resolve/main/dataset-800.zip)              | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  | 130    | 78.89 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/anchorage_water_oni_kantaicollection/resolve/main/dataset-stage3-p480-800.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 62     | 63.93 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/anchorage_water_oni_kantaicollection/resolve/main/dataset-1200.zip)             | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 | 130    | 103.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchorage_water_oni_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/anchorage_water_oni_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, abyssal_ship, solo, white_dress, looking_at_viewer, glowing_eyes, gradient_hair, machinery, overskirt, turret |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | abyssal_ship | solo | white_dress | looking_at_viewer | glowing_eyes | gradient_hair | machinery | overskirt | turret |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:--------------|:--------------------|:---------------|:----------------|:------------|:------------|:---------|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X |
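The loading loop above exposes each item's tag metadata, so simple post-filters can be written on top of it — for example, keeping only images carrying one of the cluster tags listed here. A minimal sketch (the helper name `filter_by_tag` is ours, not part of waifuc, and it assumes `meta['tags']` behaves like a mapping or sequence of tag names that supports `in`, as suggested by the print example above):

```python
def filter_by_tag(items, tag):
    """Yield only the (image, meta) pairs whose tag metadata contains ``tag``.

    ``items`` is any iterable of ``(image, meta)`` tuples where
    ``meta['tags']`` supports the ``in`` operator (dict of tag -> score,
    or plain list of tag names).
    """
    for image, meta in items:
        if tag in meta.get('tags', {}):
            yield image, meta


# usage sketch with a waifuc LocalSource (see the loading example above):
#
#   pairs = ((item.image, item.meta) for item in source)
#   for image, meta in filter_by_tag(pairs, 'white_dress'):
#       ...
```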
CyberHarem/anchorage_water_oni_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T00:23:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T23:44:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of anchorage\_water\_oni/泊地水鬼/泊地水鬼 (Kantai Collection) ============================================================== This is the dataset of anchorage\_water\_oni/泊地水鬼/泊地水鬼 (Kantai Collection), containing 62 images and their tags. The core tags of this character are 'black\_hair, long\_hair, horns, white\_skin, very\_long\_hair, red\_eyes, colored\_skin, pale\_skin, breasts, multicolored\_hair, hair\_between\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
a75f0cd8ab3dbc3d57fc1753f9327413bc81fcac
# Dataset of honolulu (Kantai Collection) This is the dataset of honolulu (Kantai Collection), containing 219 images and their tags. The core tags of this character are `blonde_hair, long_hair, breasts, blue_eyes, drill_hair, large_breasts, twintails, twin_drills, hair_ornament, hair_flower`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 219 | 238.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honolulu_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 219 | 145.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honolulu_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 559 | 329.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honolulu_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 219 | 215.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honolulu_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 559 | 451.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honolulu_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/honolulu_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, simple_background, solo, string_bikini, white_bikini, cleavage, dated, looking_at_viewer, one-hour_drawing_challenge, side-tie_bikini_bottom, white_background, cowboy_shot, flower, open_mouth, twitter_username | | 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, white_bikini, side-tie_bikini_bottom, red_flower, cleavage, cowboy_shot, navel, string_bikini, looking_at_viewer, open_mouth, smile, hibiscus, blush, day, halterneck, official_alternate_costume, outdoors, collarbone, cloud, blue_sky, ocean | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, breast_pocket, headgear, looking_at_viewer, military_uniform, red_ascot, sleeveless_jacket, solo, upper_body, one-hour_drawing_challenge, twitter_username, simple_background, white_background | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, breast_pocket, red_ascot, simple_background, solo, headgear, 
sleeveless_jacket, white_background, cowboy_shot, dress, looking_at_viewer, smile, skirt, armpits | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, twitter_username, white_shirt, alternate_costume, blush, cleavage, one-hour_drawing_challenge, pleated_skirt, simple_background, white_background, collared_shirt, looking_at_viewer, school_uniform, smile, cowboy_shot, open_mouth, short_sleeves | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | detached_collar, playboy_bunny, rabbit_ears, cleavage, fake_animal_ears, looking_at_viewer, pantyhose, simple_background, strapless_leotard, white_background, wrist_cuffs, 1girl, cowboy_shot, rabbit_tail, solo, bowtie, smile | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blue_kimono, official_alternate_costume, ponytail, simple_background, solo, white_background, yukata, blush, eating, takoyaki, obi, full_body, holding_food, looking_at_viewer, mask_on_head, open_mouth, sandals, upper_body | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, black_pantyhose, christmas, fur-trimmed_dress, red_dress, santa_costume, solo, cleavage, fur-trimmed_capelet, fur-trimmed_gloves, red_capelet, red_gloves, fake_mustache, alternate_costume, looking_at_viewer, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | simple_background | solo | string_bikini | white_bikini | cleavage | dated | looking_at_viewer | one-hour_drawing_challenge | side-tie_bikini_bottom | 
white_background | cowboy_shot | flower | open_mouth | twitter_username | red_flower | navel | smile | hibiscus | blush | day | halterneck | official_alternate_costume | outdoors | collarbone | cloud | blue_sky | ocean | breast_pocket | headgear | military_uniform | red_ascot | sleeveless_jacket | upper_body | dress | skirt | armpits | white_shirt | alternate_costume | pleated_skirt | collared_shirt | school_uniform | short_sleeves | detached_collar | playboy_bunny | rabbit_ears | fake_animal_ears | pantyhose | strapless_leotard | wrist_cuffs | rabbit_tail | bowtie | blue_kimono | ponytail | yukata | eating | takoyaki | obi | full_body | holding_food | mask_on_head | sandals | black_pantyhose | christmas | fur-trimmed_dress | red_dress | santa_costume | fur-trimmed_capelet | fur-trimmed_gloves | red_capelet | red_gloves | fake_mustache | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:----------------|:---------------|:-----------|:--------|:--------------------|:-----------------------------|:-------------------------|:-------------------|:--------------|:---------|:-------------|:-------------------|:-------------|:--------|:--------|:-----------|:--------|:------|:-------------|:-----------------------------|:-----------|:-------------|:--------|:-----------|:--------|:----------------|:-----------|:-------------------|:------------|:--------------------|:-------------|:--------|:--------|:----------|:--------------|:--------------------|:----------------|:-----------------|:-----------------|:----------------|:------------------|:----------------|:--------------|:-------------------|:------------|:--------------------|:--------------|:--------------|:---------|:--------------|:-----------|:---------|:---------|:-----------|:------|:------------|:---------------|:---------------|:---
-------|:------------------|:------------|:--------------------|:------------|:----------------|:----------------------|:---------------------|:--------------|:-------------|:----------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | X | X | | X | | X | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | | X | X | | X | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | | | X | | | X | X | | | | | | X | | | | | | | | | | | X | X | | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | | X | | X | X | | X | X | | X | X | | | X | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | 
![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | | | X | | X | | | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | | | | X | | | X | | | X | | | | | | X | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | | | X | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/honolulu_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T00:57:27+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T03:45:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of honolulu (Kantai Collection) ======================================= This is the dataset of honolulu (Kantai Collection), containing 219 images and their tags. The core tags of this character are 'blonde\_hair, long\_hair, breasts, blue\_eyes, drill\_hair, large\_breasts, twintails, twin\_drills, hair\_ornament, hair\_flower', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
dbfd53f138f0f7889ef68f23d187d27783488863
# Dataset Card for "bengaliAI-medium-cleaned-50k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rounak28/bengaliAI-medium-cleaned-50k
[ "region:us" ]
2023-08-23T01:04:24+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 47822272173.505, "num_examples": 49750}, {"name": "test", "num_bytes": 240312925.495, "num_examples": 250}], "download_size": 6967284318, "dataset_size": 48062585099.0}}
2023-08-23T01:10:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bengaliAI-medium-cleaned-50k" More Information needed
[ "# Dataset Card for \"bengaliAI-medium-cleaned-50k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bengaliAI-medium-cleaned-50k\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bengaliAI-medium-cleaned-50k\"\n\nMore Information needed" ]
16bd0f2619ab286081ebd9a77120ac868bea39bd
# Dataset of gangut (Kantai Collection) This is the dataset of gangut (Kantai Collection), containing 377 images and their tags. The core tags of this character are `long_hair, grey_hair, breasts, scar_on_face, hat, peaked_cap, orange_eyes, hair_between_eyes, large_breasts, red_eyes, military_hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 377 | 339.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gangut_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 377 | 228.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gangut_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 849 | 467.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gangut_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 377 | 318.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gangut_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 849 | 613.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/gangut_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/gangut_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, belt, black_gloves, black_skirt, looking_at_viewer, miniskirt, solo, long_sleeves, pleated_skirt, military_uniform, white_jacket, scar_on_cheek, military_jacket, red_shirt, simple_background, black_pantyhose, smile, white_background, open_mouth | | 1 | 12 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, military_uniform, red_shirt, solo, white_jacket, scar_on_cheek, looking_at_viewer, military_jacket, simple_background, smile, white_background, black_gloves, upper_body, jacket_on_shoulders, short_sleeves | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, belt, black_gloves, black_pantyhose, black_skirt, jacket_on_shoulders, machinery, miniskirt, smile, smokestack, solo, looking_at_viewer, pleated_skirt, red_shirt, rigging, short_sleeves, white_background, white_jacket, black_headwear, crossed_arms, medium_breasts, simple_background, turret, cannon, cleavage, scar_on_cheek | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | belt | black_gloves | black_skirt | looking_at_viewer | miniskirt | solo | long_sleeves | pleated_skirt | military_uniform | white_jacket | scar_on_cheek | military_jacket | red_shirt | simple_background | black_pantyhose | smile | white_background | open_mouth | upper_body | jacket_on_shoulders | short_sleeves | machinery | smokestack | rigging | black_headwear | crossed_arms | medium_breasts | turret | cannon | cleavage | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------|:--------------------|:------------|:-------|:---------------|:----------------|:-------------------|:---------------|:----------------|:------------------|:------------|:--------------------|:------------------|:--------|:-------------------|:-------------|:-------------|:----------------------|:----------------|:------------|:-------------|:----------|:-----------------|:---------------|:-----------------|:---------|:---------|:-----------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | X | | X | | | X | X | X | X | X | X | | X | X | | X | X | X | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | | X | | X | X | | X | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/gangut_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T01:05:13+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T05:45:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of gangut (Kantai Collection) ===================================== This is the dataset of gangut (Kantai Collection), containing 377 images and their tags. The core tags of this character are 'long\_hair, grey\_hair, breasts, scar\_on\_face, hat, peaked\_cap, orange\_eyes, hair\_between\_eyes, large\_breasts, red\_eyes, military\_hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
557e46bea1b3b0afb9e6d85151331fc1b5db1550
This dataset is Anthropic/hh-rlhf, unfiltered and deduped: 18170 instances of blatant alignment and 51954 duplicates were removed, leaving 99228 instructions. I downloaded all the data (except red team) from https://huggingface.co/datasets/Anthropic/hh-rlhf/tree/09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa, unarchived it, merged it all into one file, then ran clean.py and then dedupe.py to get the resulting file. Inspired by https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered. All credit to anon8231489123 for the cleanup script, which I adapted to wizardlm_clean.py; I then took that script and adapted it to clean.py.
ewof/hh-rlhf-instruct-unfiltered-deduped
[ "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-08-23T01:16:40+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"]}
2023-08-23T02:55:13+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #region-us
This dataset is Anthropic/hh-rlhf, unfiltered and deduped: 18170 instances of blatant alignment and 51954 duplicates were removed, leaving 99228 instructions. I downloaded all the data (except red team) from URL, unarchived it, merged it all into one file, then ran URL and then ran URL to get the resulting file. Inspired by URL. All credit to anon8231489123 for the cleanup script, which I adapted to wizardlm_clean.py; I then took that script and adapted it to URL.
[]
[ "TAGS\n#size_categories-1K<n<10K #language-English #region-us \n" ]
[ 22 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #region-us \n" ]
3499d43745df24e72dce486fc1bafa8b990008f7
# Dataset Card for "audioset-music" audioset-subset using 130 music mid from [noise2music](https://arxiv.org/abs/2302.03917) ``` [ '/m/0z9c','/m/0mkg','/m/042v_gx','/m/0fd3y','/t/dd00036','/m/025td0t','/m/0192l','/m/018j2','/m/0bm02','/m/018vs','/m/02cz_7','/m/0395lw','/m/0gg8l','/m/0155w','/m/0l14_3', '/m/01kcd','/m/015vgc','/m/01xqw','/m/02bk07','/m/0l14jd','/m/02mscn','/m/0140xf','/m/01wy6','/m/0ggq0m','/m/01lyv','/m/0239kh','/m/01qbl','/m/0ggx5q','/m/02bxd','/m/026z9', '/m/02fsn','/m/0283d','/m/02hnl','/m/02k_mr','/m/026t6','/m/07s72n','/m/02sgy','/m/08cyft','/m/02lkt','/m/03xq_f','/m/0m0jc','/t/dd00035','/m/0326g','/m/0l14j_','/m/02w4v', '/m/0319l','/m/02x8m','/t/dd00032','/m/0dwtp','/m/0mbct','/m/0dls3','/m/0342h','/m/03gvt','/t/dd00031','/m/03qjg','/m/03m5k','/m/03q5t','/m/03lty','/m/0glt670','/m/03mb9', '/m/05rwpb','/m/03_d0','/m/03r5q_','/m/05148p4','/m/07pkxdp','/m/0j45pbj','/m/04rzd','/m/0dwsp','/m/06j64v','/m/05fw6t','/m/0164x2','/m/028sqc','/m/0dq0md','/m/0g293', '/m/02v2lh','/m/05pd6','/m/013y1f','/m/0l14md','/m/05r5c','/m/0fx80y','/m/064t9','/m/0dl5d','/m/05w3f','/m/05r6t','/m/05r5wn','/m/06cqb','/m/06j6l','/m/03t3fj','/m/07sbbz2', '/m/06by7','/t/dd00033','/m/0ln16','/m/06ncr','/t/dd00037','/m/01hgjl','/m/0l14l2','/m/0l14t7','/m/0jtg0','/m/06rqw','/m/06rvn','/m/0gywn','/m/0l14gg','/m/06w87','/m/0l156b', '/m/02qmj0d','/m/07s0s5r','/m/015y_n','/m/0l14qv','/m/01p970','/m/07brj','/m/01glhc','/m/07gxw','/t/dd00034','/m/02cjck','/m/07kc_','/m/011k_j','/m/02p0sh1','/m/07lnk', '/m/07c6l','/m/07gql','/m/016622','/m/07xzm','/m/0dwt5','/m/01z7dr','/m/07y_7','/m/0y4f8','/m/04wptg','/m/085jw','/m/01sm1g','/m/01bns_' ] ``` ``` [ 'A capella','Accordion','Acoustic guitar','Ambient music','Angry music', 'Background music','Bagpipes','Banjo','Bass drum','Bass guitar','Beatboxing','Bell','Bluegrass','Blues','Bowed string instrument','Brass instrument', 'Carnatic music','Cello','Chant','Choir','Christian music','Christmas music','Clarinet','Classical 
music','Country','Cowbell','Cymbal', 'Dance music','Didgeridoo','Disco','Double bass','Drum and bass','Drum kit','Drum roll','Drum','Dubstep', 'Electric guitar','Electronic dance music','Electronic music','Electronic organ','Electronica','Exciting music', 'Flamenco','Flute','Folk music','French horn','Funk','Funny music', 'Glockenspiel','Gong','Grunge','Guitar', 'Hammond organ','Happy music','Harmonica','Harp','Harpsichord','Heavy metal','Hip hop music','House music', 'Independent music', 'Jazz','Jingle (music)', 'Keyboard (musical)', 'Lullaby', 'Mallet percussion','Mandolin','Marimba, xylophone','Middle Eastern music','Music for children','Music of Africa','Music of Asia','Music of Bollywood','Music of Latin America', 'New-age music', 'Orchestra','Organ', 'Percussion','Piano','Plucked string instrument','Pop music','Progressive rock','Psychedelic rock','Punk rock', 'Rattle (instrument)','Reggae','Rhythm and blues','Rimshot','Rock and roll','Rock music', 'Sad music','Salsa music','Saxophone','Scary music','Scratching (performance technique)','Shofar','Singing bowl','Sitar','Ska','Snare drum','Soul music','Soundtrack music','Steel guitar, slide guitar','Steelpan','String section','Strum','Swing music','Synthesizer', 'Tabla','Tambourine','Tapping (guitar technique)','Techno','Tender music','Theme music','Theremin','Timpani','Traditional music','Trance music','Trombone','Trumpet','Tubular bells', 'Ukulele', 'Vibraphone','Video game music','Violin, fiddle','Vocal music', 'Wedding music','Wind instrument, woodwind instrument','Wood block', 'Zither' ] ```
seungheondoh/audioset-music
[ "language:en", "license:mit", "music", "audioset", "arxiv:2302.03917", "region:us" ]
2023-08-23T01:20:43+00:00
{"language": ["en"], "license": "mit", "pretty_name": "audioset-music", "tags": ["music", "audioset"]}
2023-08-23T02:09:25+00:00
[ "2302.03917" ]
[ "en" ]
TAGS #language-English #license-mit #music #audioset #arxiv-2302.03917 #region-us
# Dataset Card for "audioset-music" audioset-subset using 130 music mid from noise2music
[ "# Dataset Card for \"audioset-music\"\n\naudioset-subset using 130 music mid from noise2music" ]
[ "TAGS\n#language-English #license-mit #music #audioset #arxiv-2302.03917 #region-us \n", "# Dataset Card for \"audioset-music\"\n\naudioset-subset using 130 music mid from noise2music" ]
[ 30, 26 ]
[ "passage: TAGS\n#language-English #license-mit #music #audioset #arxiv-2302.03917 #region-us \n# Dataset Card for \"audioset-music\"\n\naudioset-subset using 130 music mid from noise2music" ]
a77d38d5359b4493e8d4e02c31f8c1bb74afd566
# Dataset of nenohi/ๅญๆ—ฅ/ๅญๆ—ฅ (Kantai Collection) This is the dataset of nenohi/ๅญๆ—ฅ/ๅญๆ—ฅ (Kantai Collection), containing 228 images and their tags. The core tags of this character are `pink_hair, long_hair, braid, single_braid, purple_eyes, headgear, ahoge`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 228 | 151.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nenohi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 228 | 115.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nenohi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 440 | 217.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nenohi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 228 | 146.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nenohi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 440 | 263.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nenohi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/nenohi_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, :d, bike_shorts, open_mouth, sailor_dress, solo, bandages, looking_at_viewer, very_long_hair, arm_cannon, school_uniform | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bike_shorts, open_mouth, sailor_dress, solo, :d, chibi | | 2 | 5 | 
![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, bandaged_arm, bike_shorts, looking_at_viewer, sailor_dress, smile, solo, blue_sailor_collar, bodysuit, shorts_under_dress, bow, open_mouth, cowboy_shot, one_eye_closed, skirt_hold, white_background, white_dress | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, sailor_dress, upper_body, simple_background, smile, solo, bangs, blue_sailor_collar, looking_at_viewer, short_sleeves, white_background, bodysuit, bandaged_arm, open_mouth, red_bowtie, blush, hair_between_eyes, teeth | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 2girls, open_mouth, sailor_dress, serafuku, smile, bike_shorts, bandages, blush, bow | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, kimono, obi, open_mouth, smile, solo, alternate_costume, floral_print, looking_at_viewer, bike_shorts, blush, wide_sleeves, black_gloves, half_gloves, long_sleeves | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | bike_shorts | open_mouth | sailor_dress | solo | bandages | looking_at_viewer | very_long_hair | arm_cannon | school_uniform | chibi | bandaged_arm | smile | blue_sailor_collar | bodysuit | shorts_under_dress | bow | cowboy_shot | one_eye_closed | skirt_hold | white_background | white_dress | upper_body | simple_background | bangs | short_sleeves | red_bowtie | blush | hair_between_eyes | teeth | 2girls | serafuku | kimono | obi | alternate_costume | floral_print | wide_sleeves | 
black_gloves | half_gloves | long_sleeves | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:--------------|:-------------|:---------------|:-------|:-----------|:--------------------|:-----------------|:-------------|:-----------------|:--------|:---------------|:--------|:---------------------|:-----------|:---------------------|:------|:--------------|:-----------------|:-------------|:-------------------|:--------------|:-------------|:--------------------|:--------|:----------------|:-------------|:--------|:--------------------|:--------|:---------|:-----------|:---------|:------|:--------------------|:---------------|:---------------|:---------------|:--------------|:---------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | X | | X | | | | | X | X | X | X | | | | | | X | | X | X | X | X | X | X | X | X | | | | | 
| | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | | X | X | X | | X | | | | | | | X | | | | X | | | | | | | | | | | X | | | X | X | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | | X | | X | | | | | | X | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X |
CyberHarem/nenohi_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T01:41:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:21:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of nenohi/ๅญๆ—ฅ/ๅญๆ—ฅ (Kantai Collection) =========================================== This is the dataset of nenohi/ๅญๆ—ฅ/ๅญๆ—ฅ (Kantai Collection), containing 228 images and their tags. The core tags of this character are 'pink\_hair, long\_hair, braid, single\_braid, purple\_eyes, headgear, ahoge', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
ac9e3190b5a7a455c7ceedf13cf2a5f882cc6dad
# Dataset of wakaba/่‹ฅ่‘‰ (Kantai Collection) This is the dataset of wakaba/่‹ฅ่‘‰ (Kantai Collection), containing 204 images and their tags. The core tags of this character are `brown_hair, short_hair, brown_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 204 | 128.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakaba_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 204 | 92.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakaba_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 397 | 181.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakaba_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 204 | 120.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakaba_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 397 | 225.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakaba_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/wakaba_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_skirt, blazer, collared_shirt, long_sleeves, pleated_skirt, red_necktie, school_uniform, solo, white_shirt, black_pantyhose, looking_at_viewer, simple_background, white_background | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_pantyhose, blazer, necktie, pleated_skirt, 
school_uniform, shirt, solo, machinery, turret, cannon, character_name, looking_at_viewer, open_mouth | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | blazer | collared_shirt | long_sleeves | pleated_skirt | red_necktie | school_uniform | solo | white_shirt | black_pantyhose | looking_at_viewer | simple_background | white_background | necktie | shirt | machinery | turret | cannon | character_name | open_mouth | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------|:-----------------|:---------------|:----------------|:--------------|:-----------------|:-------|:--------------|:------------------|:--------------------|:--------------------|:-------------------|:----------|:--------|:------------|:---------|:---------|:-----------------|:-------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | | X | | X | X | | X | X | | | X | X | X | X | X | X | X |
CyberHarem/wakaba_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T02:11:53+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T13:27:15+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of wakaba/่‹ฅ่‘‰ (Kantai Collection) ======================================== This is the dataset of wakaba/่‹ฅ่‘‰ (Kantai Collection), containing 204 images and their tags. The core tags of this character are 'brown\_hair, short\_hair, brown\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
cf0ccd0a1326a1f34ad92ea709976e50fb12542b
# **DriveLM:** Driving with **G**raph **V**isual **Q**uestion **A**nswering. We facilitate `Perception, Prediction, Planning, Behavior, Motion` tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. DriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing nuScenes dataset. As for DriveLM-CARLA, we collect data from the CARLA simulator. For now, only the training set of DriveLM-nuScenes is publicly available. ## Prepare DriveLM-nuScenes Dataset Our DriveLM-nuScenes contains a collection of questions and answers. The dataset is named `v1_0_train_nus.json`. We offer a subset of image data that includes all the images used in our DriveLM. You can also download the full nuScenes dataset [HERE](https://www.nuscenes.org/download). ## Usage 1. Download nuScenes subset image data (or full nuScenes dataset) and `v1_0_train_nus.json`. 2. Organize the data structure as follows: ``` DriveLM โ”œโ”€โ”€ data/ โ”‚ โ”œโ”€โ”€ QA_dataset_nus/ โ”‚ โ”‚ โ”œโ”€โ”€ v1_0_train_nus.json โ”‚ โ”œโ”€โ”€ nuscenes/ โ”‚ โ”‚ โ”œโ”€โ”€ samples/ ``` ## License and Citation This language dataset is licensed under [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). 
If you use this dataset, please cite our work: ```BibTeX @article{drivelm_paper2023, title={DriveLM: Driving with Graph Visual Question Answering}, author={Sima, Chonghao and Renz, Katrin and Chitta, Kashyap and Chen, Li and Zhang, Hanxue and Xie, Chengen and Luo, Ping and Geiger, Andreas and Li, Hongyang}, journal={arXiv preprint arXiv:2312.14150}, year={2023} } ``` ```BibTeX @misc{drivelm_repo2023, title={DriveLM: Driving with Graph Visual Question Answering}, author={DriveLM contributors}, howpublished={\url{https://github.com/OpenDriveLab/DriveLM}}, year={2023} } ``` For more information and updates, please visit our [GitHub repository](https://github.com/OpenDriveLab/DriveLM).
OpenDriveLab-org/DriveLM
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2023-08-23T02:14:50+00:00
{"license": "cc-by-nc-sa-4.0", "viewer": false}
2023-12-22T02:51:46+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
# DriveLM: Driving with Graph Visual Question Answering. We facilitate 'Perception, Prediction, Planning, Behavior, Motion' tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. DriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing nuScenes dataset. As for DriveLM-CARLA, we collect data from the CARLA simulator. For now, only the training set of DriveLM-nuScenes is publicly available. ## Prepare DriveLM-nuScenes Dataset Our DriveLM-nuScenes contains a collection of questions and answers. The dataset is named 'v1_0_train_nus.json'. We offer a subset of image data that includes all the images used in our DriveLM. You can also download the full nuScenes dataset HERE. ## Usage 1. Download nuScenes subset image data (or full nuScenes dataset) and 'v1_0_train_nus.json'. 2. Organize the data structure as follows: ## License and Citation This language dataset is licensed under CC-BY-NC-SA 4.0. If you use this dataset, please cite our work: For more information and updates, please visit our GitHub repository.
[ "# DriveLM: Driving with Graph Visual Question Answering.\n\nWe facilitate 'Perception, Prediction, Planning, Behavior, Motion' tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. \n\nDriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing nuScenes dataset. As for DriveLM-CARLA, we collect data from the CARLA simulator. For now, only the training set of DriveLM-nuScenes is publicly available.", "## Prepare DriveLM-nuScenes Dataset\n\nOur DriveLM-nuScenes contains a collection of questions and answers. The dataset is named 'v1_0_train_nus.json'. We offer a subset of image data that includes all the images used in our DriveLM. You can also download the full nuScenes dataset HERE.", "## Usage\n\n1. Download nuScenes subset image data (or full nuScenes dataset) and 'v1_0_train_nus.json'.\n\n2. Organize the data structure as follows:", "## License and Citation\n\nThis language dataset is licensed under CC-BY-NC-SA 4.0. If you use this dataset, please cite our work: \n\n\n\n\n\n\nFor more information and updates, please visit our GitHub repository." ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n", "# DriveLM: Driving with Graph Visual Question Answering.\n\nWe facilitate 'Perception, Prediction, Planning, Behavior, Motion' tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. \n\nDriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing nuScenes dataset. As for DriveLM-CARLA, we collect data from the CARLA simulator. For now, only the training set of DriveLM-nuScenes is publicly available.", "## Prepare DriveLM-nuScenes Dataset\n\nOur DriveLM-nuScenes contains a collection of questions and answers. The dataset is named 'v1_0_train_nus.json'. We offer a subset of image data that includes all the images used in our DriveLM. You can also download the full nuScenes dataset HERE.", "## Usage\n\n1. Download nuScenes subset image data (or full nuScenes dataset) and 'v1_0_train_nus.json'.\n\n2. Organize the data structure as follows:", "## License and Citation\n\nThis language dataset is licensed under CC-BY-NC-SA 4.0. If you use this dataset, please cite our work: \n\n\n\n\n\n\nFor more information and updates, please visit our GitHub repository." ]
[ 19, 175, 81, 47, 50 ]
[ "passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# DriveLM: Driving with Graph Visual Question Answering.\n\nWe facilitate 'Perception, Prediction, Planning, Behavior, Motion' tasks with human-written reasoning logic as a connection. We propose the task of GVQA to connect the QA pairs in a graph-style structure. To support this novel task, we provide the DriveLM-Data. \n\nDriveLM-Data comprises two distinct components: DriveLM-nuScenes and DriveLM-CARLA. In the case of DriveLM-nuScenes, we construct our dataset based on the prevailing nuScenes dataset. As for DriveLM-CARLA, we collect data from the CARLA simulator. For now, only the training set of DriveLM-nuScenes is publicly available.## Prepare DriveLM-nuScenes Dataset\n\nOur DriveLM-nuScenes contains a collection of questions and answers. The dataset is named 'v1_0_train_nus.json'. We offer a subset of image data that includes all the images used in our DriveLM. You can also download the full nuScenes dataset HERE.## Usage\n\n1. Download nuScenes subset image data (or full nuScenes dataset) and 'v1_0_train_nus.json'.\n\n2. Organize the data structure as follows:## License and Citation\n\nThis language dataset is licensed under CC-BY-NC-SA 4.0. If you use this dataset, please cite our work: \n\n\n\n\n\n\nFor more information and updates, please visit our GitHub repository." ]
3d5c0823e603f348da38813267702d1d78367bfa
# Dataset Card for "instruct-python-llama2-20k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aidenTim/instruct-python-llama2-20k
[ "region:us" ]
2023-08-23T02:16:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 424387944.3182734, "num_examples": 209935}, {"name": "test", "num_bytes": 2021520.6817265982, "num_examples": 1000}], "download_size": 217942961, "dataset_size": 426409465.0}}
2023-08-23T02:20:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "instruct-python-llama2-20k" More Information needed
[ "# Dataset Card for \"instruct-python-llama2-20k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"instruct-python-llama2-20k\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"instruct-python-llama2-20k\"\n\nMore Information needed" ]
8dac54c28f836343d9c8cbcf782f0cf0d700d198
# Dataset of matsuwa (Kantai Collection) This is the dataset of matsuwa (Kantai Collection), containing 361 images and their tags. The core tags of this character are `long_hair, black_hair, multicolored_hair, green_eyes, gradient_hair, purple_hair, hat, white_headwear, freckles, sailor_hat`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 361 | 323.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 361 | 211.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 758 | 438.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 361 | 294.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 758 | 581.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuwa_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/matsuwa_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_neckerchief, blue_ribbon, blue_sailor_collar, blue_skirt, long_sleeves, pleated_skirt, serafuku, simple_background, white_background, white_gloves, looking_at_viewer, solo, twitter_username, kneehighs, one-hour_drawing_challenge, white_socks, dated | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, alternate_costume, solo, looking_at_viewer, blush, black_dress, cowboy_shot, dated, long_sleeves, one-hour_drawing_challenge, simple_background, white_background, artist_logo, frilled_dress | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, yukata, alternate_costume, candy_apple, long_sleeves, obi, wide_sleeves, full_body, hair_flower, looking_at_viewer, hairband | | 3 | 16 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, santa_costume, solo, long_sleeves, blush, christmas, open_mouth, red_dress, fur_trim, white_pantyhose, red_mittens, reindeer_antlers, full_body, looking_at_viewer, simple_background, beret, gift_box, fake_antlers, white_scarf | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_neckerchief | blue_ribbon | blue_sailor_collar | blue_skirt | long_sleeves | pleated_skirt | serafuku | simple_background | white_background | white_gloves | looking_at_viewer | solo | twitter_username | kneehighs | one-hour_drawing_challenge | white_socks | dated | alternate_costume | blush | black_dress | cowboy_shot | artist_logo | frilled_dress | yukata | candy_apple | obi | wide_sleeves | full_body | hair_flower | hairband | santa_costume | christmas | open_mouth | red_dress | fur_trim | white_pantyhose | red_mittens | reindeer_antlers | beret | gift_box | fake_antlers | white_scarf | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:--------------|:---------------------|:-------------|:---------------|:----------------|:-----------|:--------------------|:-------------------|:---------------|:--------------------|:-------|:-------------------|:------------|:-----------------------------|:--------------|:--------|:--------------------|:--------|:--------------|:--------------|:--------------|:----------------|:---------|:--------------|:------|:---------------|:------------|:--------------|:-----------|:----------------|:------------|:-------------|:------------|:-----------|:------------------|:--------------|:-------------------|:--------|:-----------|:---------------|:--------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | | X | | | X | X | | X | X | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | | X | | | | | | X | X | | | | | | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | 3 | 16 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | | X | | | X | | | X | X | | | | | | 
| X | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
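The per-image tags behind these clusters can also be mined directly. As a minimal sketch (assuming each item's `meta['tags']` is a mapping from tag name to score, as loaded by the waifuc snippet above — the sample metadata here is hypothetical), one might count tag frequencies like this:

```python
from collections import Counter

def tag_frequencies(metas):
    """Count how often each tag appears across a sequence of item metadata dicts."""
    counts = Counter()
    for meta in metas:
        # update() with the keys counts each tag once per image
        counts.update(meta.get("tags", {}).keys())
    return counts

# Hypothetical metadata for two images
metas = [
    {"tags": {"1girl": 0.99, "solo": 0.95, "serafuku": 0.8}},
    {"tags": {"1girl": 0.98, "yukata": 0.9}},
]
freqs = tag_frequencies(metas)
```

The most common tags (e.g. `freqs.most_common(20)`) give a quick view of which outfits dominate the dataset.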
CyberHarem/matsuwa_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T02:40:15+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T08:56:01+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of matsuwa (Kantai Collection) ====================================== This is the dataset of matsuwa (Kantai Collection), containing 361 images and their tags. The core tags of this character are 'long\_hair, black\_hair, multicolored\_hair, green\_eyes, gradient\_hair, purple\_hair, hat, white\_headwear, freckles, sailor\_hat', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
483b7c9c0f61477868425559603e5093955eb334
# Dataset of aquila (Kantai Collection) This is the dataset of aquila (Kantai Collection), containing 215 images and their tags. The core tags of this character are `orange_hair, high_ponytail, long_hair, wavy_hair, hairclip, hair_ornament, breasts, large_breasts, brown_eyes, green_ribbon, ponytail, ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 215 | 180.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aquila_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 215 | 123.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aquila_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 435 | 243.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aquila_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 215 | 168.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aquila_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 435 | 312.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aquila_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/aquila_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_skirt, long_sleeves, miniskirt, red_jacket, white_shirt, armpit_cutout, garter_straps, smile, solo, black_thighhighs, collared_shirt, looking_at_viewer, open_mouth, puffy_sleeves, simple_background, white_background, blush | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, black_bra, blush, looking_at_viewer, navel, black_thighhighs, garter_belt, garter_straps, underwear_only, black_panties, cleavage, collarbone, lingerie, open_mouth, side-tie_panties, smile | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | long_sleeves | miniskirt | red_jacket | white_shirt | armpit_cutout | garter_straps | smile | solo | black_thighhighs | collared_shirt | looking_at_viewer | open_mouth | puffy_sleeves | simple_background | white_background | blush | black_bra | navel | garter_belt | underwear_only | black_panties | cleavage | collarbone | lingerie | side-tie_panties | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:---------------|:------------|:-------------|:--------------|:----------------|:----------------|:--------|:-------|:-------------------|:-----------------|:--------------------|:-------------|:----------------|:--------------------|:-------------------|:--------|:------------|:--------|:--------------|:-----------------|:----------------|:-----------|:-------------|:-----------|:-------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | | | | X | X | X | X | | X | X | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/aquila_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T02:41:34+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T02:35:38+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of aquila (Kantai Collection) ===================================== This is the dataset of aquila (Kantai Collection), containing 215 images and their tags. The core tags of this character are 'orange\_hair, high\_ponytail, long\_hair, wavy\_hair, hairclip, hair\_ornament, breasts, large\_breasts, brown\_eyes, green\_ribbon, ponytail, ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
b19dd964af05bd375a9951fe105dbffa00b7d2db
# Dataset Card for "Cosmetics" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NexaAI/Cosmetics
[ "region:us" ]
2023-08-23T02:43:54+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4009237.0, "num_examples": 1}], "download_size": 4010161, "dataset_size": 4009237.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-24T09:08:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Cosmetics" More Information needed
[ "# Dataset Card for \"Cosmetics\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Cosmetics\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Cosmetics\"\n\nMore Information needed" ]
bcce2040d0066f15e6658c1abdd6c42a67a6388f
MEMBERS: 1. Alex Rodriguez - Diplomado 2023 2. Gabriel Muñoz - Diplomado 2023 Objective: Apply deep learning techniques to solve an image classification problem. A CALZADOS folder was created containing two folders, train and val; inside each of them (train and val), two class folders were created: women's footwear (CALZADOMUJER) and men's footwear (CALZADOHOMBRE).
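As an illustrative sketch (the helper and temporary directory below are hypothetical, not part of this dataset), the described CALZADOS/train and CALZADOS/val layout can be enumerated with the standard library:

```python
import os
import tempfile

def list_classes(split_dir):
    """Return the class subfolders of a train/val split, sorted by name."""
    return sorted(
        d for d in os.listdir(split_dir)
        if os.path.isdir(os.path.join(split_dir, d))
    )

# Recreate the described layout in a temporary directory for demonstration
root = tempfile.mkdtemp()
for split in ("train", "val"):
    for cls in ("CALZADOMUJER", "CALZADOHOMBRE"):
        os.makedirs(os.path.join(root, "CALZADOS", split, cls))

classes = list_classes(os.path.join(root, "CALZADOS", "train"))
```

This folder-per-class layout is the convention expected by common image-classification loaders.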
diplomado2023/calzados4
[ "license:apache-2.0", "region:us" ]
2023-08-23T03:19:34+00:00
{"license": "apache-2.0"}
2023-08-23T03:28:54+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
MEMBERS: 1. Alex Rodriguez - Diplomado 2023 2. Gabriel Muñoz - Diplomado 2023 Objective: Apply deep learning techniques to solve an image classification problem. A CALZADOS folder was created containing two folders, train and val; inside each of them (train and val), two class folders were created: women's footwear (CALZADOMUJER) and men's footwear (CALZADOHOMBRE).
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
ff2c38b03209485394320dbfd574e0ed15f84f59
# Dataset of chiyoda/ๅƒไปฃ็”ฐ/ๅƒไปฃ็”ฐ (Kantai Collection) This is the dataset of chiyoda/ๅƒไปฃ็”ฐ/ๅƒไปฃ็”ฐ (Kantai Collection), containing 262 images and their tags. The core tags of this character are `brown_hair, brown_eyes, short_hair, breasts, large_breasts, headband`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 262 | 194.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chiyoda_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 262 | 142.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chiyoda_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 543 | 275.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chiyoda_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 262 | 183.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chiyoda_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 543 | 339.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chiyoda_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/chiyoda_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 35 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, jacket, blouse, red_hakama, open_mouth, hakama_short_skirt, thighhighs, looking_at_viewer, smile, blush, white_background | | 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | 
![](samples/1/clu1-sample4.png) | 1girl, solo, blush, looking_at_viewer, open_mouth | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, looking_at_viewer, navel, one-hour_drawing_challenge, simple_background, solo, twitter_username, white_background, cleavage, collarbone, stomach, bare_shoulders, cropped_legs, alternate_costume, bangs, black_bikini, cowboy_shot, medium_hair, skindentation, front-tie_bikini_top, side-tie_bikini_bottom, spoken_squiggle | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blue_one-piece_swimsuit, looking_at_viewer, solo, collarbone, dated, simple_background, twitter_username, white_background, competition_swimsuit, covered_navel, cowboy_shot, one-hour_drawing_challenge, cleavage, highleg_swimsuit, sitting | | 4 | 11 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | blush, hetero, solo_focus, 1boy, 1girl, nipples, paizuri, looking_at_viewer, sweat, huge_breasts, penis, bar_censor, open_mouth | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1boy, 1girl, blush, hetero, nipples, solo_focus, vaginal, nude, open_mouth, girl_on_top, huge_breasts, navel, penis, bar_censor, cowgirl_position, cum_in_pussy, heart, large_areolae, sex_from_behind, sweat | | 6 | 12 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, detached_collar, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, 
strapless_leotard, solo, simple_background, cleavage, white_background, wrist_cuffs, blush, bowtie, rabbit_tail, black_pantyhose, alternate_costume, black_leotard, cowboy_shot, fake_tail, twitter_username | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | jacket | blouse | red_hakama | open_mouth | hakama_short_skirt | thighhighs | looking_at_viewer | smile | blush | white_background | navel | one-hour_drawing_challenge | simple_background | twitter_username | cleavage | collarbone | stomach | bare_shoulders | cropped_legs | alternate_costume | bangs | black_bikini | cowboy_shot | medium_hair | skindentation | front-tie_bikini_top | side-tie_bikini_bottom | spoken_squiggle | blue_one-piece_swimsuit | dated | competition_swimsuit | covered_navel | highleg_swimsuit | sitting | hetero | solo_focus | 1boy | nipples | paizuri | sweat | huge_breasts | penis | bar_censor | vaginal | nude | girl_on_top | cowgirl_position | cum_in_pussy | heart | large_areolae | sex_from_behind | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | bowtie | rabbit_tail | black_pantyhose | black_leotard | fake_tail | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------|:---------|:-------------|:-------------|:---------------------|:-------------|:--------------------|:--------|:--------|:-------------------|:--------|:-----------------------------|:--------------------|:-------------------|:-----------|:-------------|:----------|:-----------------|:---------------|:--------------------|:--------|:---------------|:--------------|:--------------|:----------------|:-----------------------|:-------------------------|:------------------|:--------------------------|:--------|:-----------------------|:----------------|:-------------------|:----------|:---------|:-------------|:-------|:----------|:----------|:--------|:---------------|:--------|:-------------|:----------|:-------|:--------------|:-------------------|:---------------|:--------|:----------------|:------------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:---------|:--------------|:------------------|:----------------|:------------| | 0 | 35 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | 
![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | X | | | X | | X | X | X | X | X | | | | | | | X | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 11 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | 6 | 12 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | | | | | X | | X | X | | | X | X | X | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/chiyoda_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T03:21:35+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T21:28:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of chiyoda/ๅƒไปฃ็”ฐ/ๅƒไปฃ็”ฐ (Kantai Collection) ============================================== This is the dataset of chiyoda/ๅƒไปฃ็”ฐ/ๅƒไปฃ็”ฐ (Kantai Collection), containing 262 images and their tags. The core tags of this character are 'brown\_hair, brown\_eyes, short\_hair, breasts, large\_breasts, headband', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
ae0115bdcdbc83fac576207f8fd153c539c8e03c
This is [commitpack-ft-instruct](https://huggingface.co/datasets/chargoddard/commitpack-ft-instruct), derived from OctoPack's [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft), augmented with a quality analysis of each instruction-response pair by a local model. This did a pretty decent job of identifying pairs that obviously don't have enough context to know what change is being requested, or where the commit message does not match the changes made. Data files (yaml, plain text, json, etc.) were heavily downsampled in preparing this dataset to skew it more towards actual code work. All entries should fit in a 4096 token context window, depending on the prompt format. Language composition for the default configuration: | Language | Instructions | Percent of Instructions | | --- | --- | --- | | Ruby | 69412 | 14.13% | | Python | 56024 | 11.41% | | JavaScript | 52989 | 10.79% | | PHP | 24791 | 5.05% | | YAML | 21764 | 4.43% | | Java | 20635 | 4.2% | | Markdown | 11950 | 2.43% | | C# | 9346 | 1.9% | | C | 8506 | 1.73% | | JSON | 7616 | 1.55% | | TypeScript | 5868 | 1.19% | | C++ | 4992 | 1.02% | | Swift | 4849 | 0.99% | | Rust | 2996 | 0.61% | | XML | 1766 | 0.36% | | Haskell | 1389 | 0.28% | | Emacs Lisp | 1015 | 0.21% | | Common Lisp | 778 | 0.16% | | Erlang | 480 | 0.1% | | OCaml | 333 | 0.07% | | Smalltalk | 284 | 0.06% | | Ada | 265 | 0.05% | | Scheme | 213 | 0.04% | All credit to the original authors of the code and the team behind OctoPack. ### Licensing Information Each sample comes from a code repository with a permissive license. The license is provided by the `license` field for each sample. 
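The `rating` struct attached to each sample makes it straightforward to filter pairs by judged quality. A minimal sketch (the in-memory rows and threshold below are hypothetical, but mirror this dataset's `rating.score` field):

```python
# Hypothetical in-memory rows mirroring this dataset's schema
rows = [
    {"id": "a", "language": "Python",
     "rating": {"judge": "local-model", "score": 9, "analysis": "clear, well-scoped change"}},
    {"id": "b", "language": "Ruby",
     "rating": {"judge": "local-model", "score": 3, "analysis": "commit message lacks context"}},
]

def filter_by_score(samples, min_score):
    """Keep only instruction/response pairs rated at or above min_score."""
    return [s for s in samples if s["rating"]["score"] >= min_score]

good = filter_by_score(rows, 7)
```

The pre-filtered `adequately_rated` and `best_rated` configurations apply this kind of threshold for you.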
### Citation Information ```bibtex @article{muennighoff2023octopack, title={OctoPack: Instruction Tuning Code Large Language Models}, author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro von Werra and Shayne Longpre}, journal={arXiv preprint arXiv:2308.07124}, year={2023} } ```
chargoddard/commitpack-ft-instruct-rated
[ "size_categories:100K<n<1M", "language:en", "code", "region:us" ]
2023-08-23T03:23:34+00:00
{"language": ["en"], "size_categories": ["100K<n<1M"], "dataset_info": [{"config_name": "adequately_rated", "features": [{"name": "id", "dtype": "string"}, {"name": "rating", "struct": [{"name": "analysis", "dtype": "string"}, {"name": "judge", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 502380874.99241877, "num_examples": 231589}], "download_size": 233165301, "dataset_size": 502380874.99241877}, {"config_name": "best_rated", "features": [{"name": "id", "dtype": "string"}, {"name": "rating", "struct": [{"name": "analysis", "dtype": "string"}, {"name": "judge", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7807230.779949458, "num_examples": 3599}], "download_size": 3443289, "dataset_size": 7807230.779949458}, {"config_name": "default", "features": [{"name": "id", "dtype": "string"}, {"name": "rating", "struct": [{"name": "analysis", "dtype": "string"}, {"name": "judge", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 668703742, "num_examples": 308261}], "download_size": 306198304, "dataset_size": 668703742}, {"config_name": "ratings_only", "features": [{"name": "success", "dtype": "bool"}, {"name": "score", "dtype": "int64"}, {"name": "response", "dtype": "string"}, {"name": "id", "dtype": "string"}], 
"splits": [{"name": "train", "num_bytes": 124887856, "num_examples": 308261}], "download_size": 58208563, "dataset_size": 124887856}, {"config_name": "worst_rated", "features": [{"name": "id", "dtype": "string"}, {"name": "rating", "struct": [{"name": "analysis", "dtype": "string"}, {"name": "judge", "dtype": "string"}, {"name": "score", "dtype": "int64"}]}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10393009.91018001, "num_examples": 4791}], "download_size": 4676994, "dataset_size": 10393009.91018001}], "configs": [{"config_name": "adequately_rated", "data_files": [{"split": "train", "path": "adequately_rated/train-*"}]}, {"config_name": "best_rated", "data_files": [{"split": "train", "path": "best_rated/train-*"}]}, {"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "ratings_only", "data_files": [{"split": "train", "path": "ratings_only/train-*"}]}, {"config_name": "worst_rated", "data_files": [{"split": "train", "path": "worst_rated/train-*"}]}], "tags": ["code"]}
2023-08-23T08:05:10+00:00
[]
[ "en" ]
TAGS #size_categories-100K<n<1M #language-English #code #region-us
This is commitpack-ft-instruct, derived from Octocode's CommitPackFT, augmented with a quality analysis of the instruction-response pair by a local model. This did a pretty decent job of identifying pairs that obviously don't have enough context to know what change is being requested, or where the commit message does not match with the changes made. Data files (yaml, plain text, json, etc.) were heavily downsampled in preparing this dataset to skew it more towards actual code work. All entries should fit in a 4096 token context window, depending on the prompt format. Language composition for the default configuration: Language: Ruby, Instructions: 69412, Percent of Instructions: 14.13% Language: Python, Instructions: 56024, Percent of Instructions: 11.41% Language: JavaScript, Instructions: 52989, Percent of Instructions: 10.79% Language: PHP, Instructions: 24791, Percent of Instructions: 5.05% Language: YAML, Instructions: 21764, Percent of Instructions: 4.43% Language: Java, Instructions: 20635, Percent of Instructions: 4.2% Language: Markdown, Instructions: 11950, Percent of Instructions: 2.43% Language: C#, Instructions: 9346, Percent of Instructions: 1.9% Language: C, Instructions: 8506, Percent of Instructions: 1.73% Language: JSON, Instructions: 7616, Percent of Instructions: 1.55% Language: TypeScript, Instructions: 5868, Percent of Instructions: 1.19% Language: C++, Instructions: 4992, Percent of Instructions: 1.02% Language: Swift, Instructions: 4849, Percent of Instructions: 0.99% Language: Rust, Instructions: 2996, Percent of Instructions: 0.61% Language: XML, Instructions: 1766, Percent of Instructions: 0.36% Language: Haskell, Instructions: 1389, Percent of Instructions: 0.28% Language: Emacs Lisp, Instructions: 1015, Percent of Instructions: 0.21% Language: Common Lisp, Instructions: 778, Percent of Instructions: 0.16% Language: Erlang, Instructions: 480, Percent of Instructions: 0.1% Language: OCaml, Instructions: 333, Percent of Instructions: 0.07% 
Language: Smalltalk, Instructions: 284, Percent of Instructions: 0.06% Language: Ada, Instructions: 265, Percent of Instructions: 0.05% Language: Scheme, Instructions: 213, Percent of Instructions: 0.04% All credit to the original authors of the code and the team behind OctoPack. ### Licensing Information Each sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample.
[ "### Licensing Information\n\n\nEach sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample." ]
[ "TAGS\n#size_categories-100K<n<1M #language-English #code #region-us \n", "### Licensing Information\n\n\nEach sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample." ]
[ 24, 36 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-English #code #region-us \n### Licensing Information\n\n\nEach sample comes from a code repository with a permissive license. The license is provided by the 'license' field for each sample." ]
607de369e7559fcf2ca5f7486895f2ead848fe19
# Dataset of aircraft_carrier_water_oni/็ฉบๆฏๆฐด้ฌผ (Kantai Collection) This is the dataset of aircraft_carrier_water_oni/็ฉบๆฏๆฐด้ฌผ (Kantai Collection), containing 80 images and their tags. The core tags of this character are `long_hair, white_hair, breasts, colored_skin, white_skin, very_long_hair, red_eyes, hair_ornament, large_breasts, pale_skin`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 80 | 76.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_water_oni_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 80 | 53.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_water_oni_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 147 | 89.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_water_oni_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 80 | 71.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_water_oni_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 147 | 114.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_water_oni_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/aircraft_carrier_water_oni_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, abyssal_ship, solo, open_mouth, blush_stickers, smile, chibi, food, hair_between_eyes, sleeveless, white_background, holding, serafuku | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, abyssal_ship, bare_shoulders, detached_sleeves, orange_eyes, ribbed_dress, sailor_dress, black_dress, solo | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, abyssal_ship, black_dress, detached_sleeves, sailor_dress, short_dress, solo, ribbed_dress, zettai_ryouiki, armored_boots, bare_shoulders, black_thighhighs, looking_at_viewer, knee_boots, black_footwear, orange_eyes, sitting, black_gloves, machinery, turret, high_heel_boots | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, abyssal_ship, armored_boots, black_dress, gauntlets, one_side_up, sailor_dress, short_dress, solo, thigh_boots, thighhighs, zettai_ryouiki, looking_at_viewer, sitting, 
glowing, turret | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | abyssal_ship | solo | open_mouth | blush_stickers | smile | chibi | food | hair_between_eyes | sleeveless | white_background | holding | serafuku | bare_shoulders | detached_sleeves | orange_eyes | ribbed_dress | sailor_dress | black_dress | short_dress | zettai_ryouiki | armored_boots | black_thighhighs | looking_at_viewer | knee_boots | black_footwear | sitting | black_gloves | machinery | turret | high_heel_boots | gauntlets | one_side_up | thigh_boots | thighhighs | glowing | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:-------------|:-----------------|:--------|:--------|:-------|:--------------------|:-------------|:-------------------|:----------|:-----------|:-----------------|:-------------------|:--------------|:---------------|:---------------|:--------------|:--------------|:-----------------|:----------------|:-------------------|:--------------------|:-------------|:-----------------|:----------|:---------------|:------------|:---------|:------------------|:------------|:--------------|:--------------|:-------------|:----------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) 
| ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | | | | | | | | | | | | | X | X | X | X | X | | X | | | X | | | X | | X | X | X | X | X |
CyberHarem/aircraft_carrier_water_oni_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T03:42:40+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T23:25:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of aircraft\_carrier\_water\_oni/็ฉบๆฏๆฐด้ฌผ (Kantai Collection) ================================================================= This is the dataset of aircraft\_carrier\_water\_oni/็ฉบๆฏๆฐด้ฌผ (Kantai Collection), containing 80 images and their tags. The core tags of this character are 'long\_hair, white\_hair, breasts, colored\_skin, white\_skin, very\_long\_hair, red\_eyes, hair\_ornament, large\_breasts, pale\_skin', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
bca62789f76e3bb22b3bbf5c4a9e0280f39c85cf
# Dataset Card for "labeled-recipes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
corbt/labeled-recipes
[ "region:us" ]
2023-08-23T03:46:58+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3833393.4, "num_examples": 4500}, {"name": "test", "num_bytes": 425932.6, "num_examples": 500}], "download_size": 0, "dataset_size": 4259326.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-08-23T22:43:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "labeled-recipes" More Information needed
[ "# Dataset Card for \"labeled-recipes\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"labeled-recipes\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"labeled-recipes\"\n\nMore Information needed" ]
8445e20788dd1abc63fd3556fa677acad3994ff2
Uploaded in Github: https://github.com/DonMischo/Billions-of-Wildcards-for-Stable-Diffusion
DonMischo/Billions_of_Wildcards
[ "license:gpl-3.0", "region:us" ]
2023-08-23T04:05:37+00:00
{"license": "gpl-3.0"}
2023-08-24T10:26:12+00:00
[]
[]
TAGS #license-gpl-3.0 #region-us
Uploaded in Github: URL
[]
[ "TAGS\n#license-gpl-3.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-gpl-3.0 #region-us \n" ]
8c28b7e88a7bafedd65927468c3b6ba69a56f9ce
# Dataset of kunashiri/ๅ›ฝๅพŒ (Kantai Collection) This is the dataset of kunashiri/ๅ›ฝๅพŒ (Kantai Collection), containing 148 images and their tags. The core tags of this character are `pink_hair, short_hair, two_side_up, multicolored_hair, two-tone_hair, orange_eyes, black_hair, bangs, hair_between_eyes, red_ribbon, ribbon, bow, red_bow, neck_ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 148 | 119.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kunashiri_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 148 | 79.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kunashiri_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 310 | 162.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kunashiri_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 148 | 111.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kunashiri_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 310 | 215.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kunashiri_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kunashiri_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, colored_tips, fur-trimmed_sleeves, green_jacket, long_sleeves, red_bowtie, upper_body, looking_at_viewer, serafuku, smile, solo, blush, simple_background, white_background, twitter_username | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, colored_tips, fur-trimmed_sleeves, green_jacket, green_skirt, long_sleeves, pleated_skirt, red_bowtie, solo, white_pantyhose, serafuku, simple_background, blush, open_mouth, twitter_username, cowboy_shot, white_background, one-hour_drawing_challenge | | 2 | 12 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, colored_tips, fur-trimmed_sleeves, green_jacket, green_skirt, long_sleeves, pleated_skirt, red_bowtie, white_pantyhose, mary_janes, simple_background, solo, full_body, white_background, serafuku, smile, black_footwear, open_mouth, standing | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | colored_tips | fur-trimmed_sleeves | green_jacket | long_sleeves | red_bowtie | upper_body | looking_at_viewer | serafuku | smile | 
solo | blush | simple_background | white_background | twitter_username | green_skirt | pleated_skirt | white_pantyhose | open_mouth | cowboy_shot | one-hour_drawing_challenge | mary_janes | full_body | black_footwear | standing | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:----------------------|:---------------|:---------------|:-------------|:-------------|:--------------------|:-----------|:--------|:-------|:--------|:--------------------|:-------------------|:-------------------|:--------------|:----------------|:------------------|:-------------|:--------------|:-----------------------------|:-------------|:------------|:-----------------|:-----------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | 2 | 12 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | | | X | X | X | | X | X | | X | X | X | X | | | X | X | X | X |
CyberHarem/kunashiri_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T04:10:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T22:57:09+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kunashiri/ๅ›ฝๅพŒ (Kantai Collection) =========================================== This is the dataset of kunashiri/ๅ›ฝๅพŒ (Kantai Collection), containing 148 images and their tags. The core tags of this character are 'pink\_hair, short\_hair, two\_side\_up, multicolored\_hair, two-tone\_hair, orange\_eyes, black\_hair, bangs, hair\_between\_eyes, red\_ribbon, ribbon, bow, red\_bow, neck\_ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
41cff984ace3cb99b00d45447373f25153a642ba
# Dataset of ooshio/ๅคงๆฝฎ/ๅคงๆฝฎ (Kantai Collection) This is the dataset of ooshio/ๅคงๆฝฎ/ๅคงๆฝฎ (Kantai Collection), containing 366 images and their tags. The core tags of this character are `long_hair, black_hair, breasts, ahoge, large_breasts, brown_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 366 | 338.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ooshio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 366 | 226.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ooshio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 844 | 466.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ooshio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 366 | 310.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ooshio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 844 | 600.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ooshio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ooshio_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, looking_at_viewer, nipples, serafuku, solo, navel, no_bra, shirt_lift, black_eyes, skirt, mouth_hold, open_mouth, sitting, tears | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, serafuku, solo, looking_at_viewer, pleated_skirt, blush, open_mouth, hairband | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_skirt, serafuku, short_sleeves, simple_background, solo, white_background, looking_at_viewer, pleated_skirt, blue_sailor_collar, blush, cowboy_shot | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, serafuku, simple_background, solo, upper_body, white_background, blue_sailor_collar, looking_at_viewer, green_sailor_collar | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, serafuku, solo, black_eyes, blush, turret, cannon, looking_at_viewer, upper_body, skirt | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1boy, 1girl, hetero, nipples, paizuri, solo_focus, blush, cum_on_breasts, huge_breasts, penis, serafuku, ejaculation, shirt_lift, bar_censor, tears | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1boy, 1girl, blush, hetero, nipples, paizuri, penis, solo_focus, censored, cum, huge_breasts, open_mouth | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1boy, 1girl, hetero, nipples, open_mouth, penis, serafuku, sex, vaginal, skirt, solo_focus, tears, white_panties, 
blush, censored, shirt_lift, black_eyes, cum_in_pussy, kneehighs, navel, on_back, panties_aside, spread_legs | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, blue_one-piece_swimsuit, polka_dot_swimsuit, solo, casual_one-piece_swimsuit, looking_at_viewer, simple_background, barefoot, collarbone, full_body, sitting, white_background, blush, parted_lips | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, blush, looking_at_viewer, nipples, one-piece_swimsuit, polka_dot_swimsuit, solo, smile, water, wrist_scrunchie | | 10 | 5 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, black_panties, blue_skirt, elbow_gloves, highleg_panties, shimakaze_(kancolle)_(cosplay), solo, white_gloves, black_hairband, crop_top, microskirt, navel, black_neckerchief, miniskirt, pleated_skirt, simple_background, blue_sailor_collar, blush, grey_background, looking_at_viewer, school_uniform, striped_thighhighs, torn_clothes, twitter_username, underboob | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | nipples | serafuku | solo | navel | no_bra | shirt_lift | black_eyes | skirt | mouth_hold | open_mouth | sitting | tears | pleated_skirt | hairband | blue_skirt | short_sleeves | simple_background | white_background | blue_sailor_collar | cowboy_shot | upper_body | green_sailor_collar | turret | cannon | 1boy | hetero | paizuri | solo_focus | cum_on_breasts | huge_breasts | penis | ejaculation | bar_censor | censored | cum | sex | vaginal | white_panties | cum_in_pussy | kneehighs | on_back | panties_aside | spread_legs | 
blue_one-piece_swimsuit | polka_dot_swimsuit | casual_one-piece_swimsuit | barefoot | collarbone | full_body | parted_lips | one-piece_swimsuit | smile | water | wrist_scrunchie | black_panties | elbow_gloves | highleg_panties | shimakaze_(kancolle)_(cosplay) | white_gloves | black_hairband | crop_top | microskirt | black_neckerchief | miniskirt | grey_background | school_uniform | striped_thighhighs | torn_clothes | twitter_username | underboob | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:--------------------|:----------|:-----------|:-------|:--------|:---------|:-------------|:-------------|:--------|:-------------|:-------------|:----------|:--------|:----------------|:-----------|:-------------|:----------------|:--------------------|:-------------------|:---------------------|:--------------|:-------------|:----------------------|:---------|:---------|:-------|:---------|:----------|:-------------|:-----------------|:---------------|:--------|:--------------|:-------------|:-----------|:------|:------|:----------|:----------------|:---------------|:------------|:----------|:----------------|:--------------|:--------------------------|:---------------------|:----------------------------|:-----------|:-------------|:------------|:--------------|:---------------------|:--------|:--------|:------------------|:----------------|:---------------|:------------------|:---------------------------------|:---------------|:-----------------|:-----------|:-------------|:--------------------|:------------|:------------------|:-----------------|:---------------------|:---------------|:-------------------|:------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X 
| X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 10 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | X | | | | | | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | | X | X | | | | | | | | | | | | | | X | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | X | X | | | | X | X | | | | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | X | X | | | | X | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | 
![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | X | X | | X | | X | X | X | | X | | X | | | | | | | | | | | | | X | X | | X | | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | X | | | X | | | | | | | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | 10 | 5 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | X | X | | | X | X | | | | | | | | | X | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/ooshio_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T04:21:17+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T18:22:09+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ooshio/ๅคงๆฝฎ/ๅคงๆฝฎ (Kantai Collection) =========================================== This is the dataset of ooshio/ๅคงๆฝฎ/ๅคงๆฝฎ (Kantai Collection), containing 366 images and their tags. The core tags of this character are 'long\_hair, black\_hair, breasts, ahoge, large\_breasts, brown\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
5493fe1686f29d4fce6912ccf9e2e03780493bd6
Original BABE dataset enriched with sentences from two annotation rounds: the NewsUnfold project and the Media Bias Game project. # Please cite as ``` @InProceedings{Spinde2021f, title = "Neural Media Bias Detection Using Distant Supervision With {BABE} - Bias Annotations By Experts", author = "Spinde, Timo and Plank, Manuel and Krieger, Jan-David and Ruas, Terry and Gipp, Bela and Aizawa, Akiko", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.101", doi = "10.18653/v1/2021.findings-emnlp.101", pages = "1166--1177", } ```
mediabiasgroup/BABE-v3
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2023-08-23T04:25:25+00:00
{"license": "cc-by-nc-sa-4.0"}
2023-08-23T04:37:34+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
Original BABE dataset enriched with sentences from two annotation rounds: the NewsUnfold project and the Media Bias Game project. # Please cite as
[ "# Please cite as" ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n", "# Please cite as" ]
[ 19, 4 ]
[ "passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n# Please cite as" ]
7e7680ccd5892ee5cc7c4ba850ea9fad8cb3f0f1
# Dataset of ishigaki (Kantai Collection) This is the dataset of ishigaki (Kantai Collection), containing 115 images and their tags. The core tags of this character are `black_hair, short_hair, ribbon, red_ribbon, hair_ribbon, white_ribbon, bangs, purple_eyes, neck_ribbon, bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 115 | 81.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ishigaki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 115 | 55.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ishigaki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 249 | 114.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ishigaki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 115 | 75.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ishigaki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 249 | 149.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ishigaki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ishigaki_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, green_jacket, green_sailor_collar, green_skirt, long_sleeves, looking_at_viewer, pleated_skirt, serafuku, solo, simple_background, cowboy_shot, white_background, smile, one-hour_drawing_challenge | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 
1girl, black_socks, green_jacket, green_skirt, kneehighs, long_sleeves, pleated_skirt, serafuku, solo, green_sailor_collar, looking_at_viewer, simple_background, shoes, standing, black_footwear, full_body, smile, white_background, pom_pom_(clothes) | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, green_jacket, green_sailor_collar, long_sleeves, serafuku, simple_background, solo, upper_body, white_background, looking_at_viewer, one-hour_drawing_challenge, pom_pom_(clothes), blush, open_mouth | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, simple_background, solo, white_background, frilled_apron, maid_headdress, white_apron, enmaided, full_body, smile, standing, artist_logo, black_footwear, dated, kimono, red_eyes, wa_maid | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | green_jacket | green_sailor_collar | green_skirt | long_sleeves | looking_at_viewer | pleated_skirt | serafuku | solo | simple_background | cowboy_shot | white_background | smile | one-hour_drawing_challenge | black_socks | kneehighs | shoes | standing | black_footwear | full_body | pom_pom_(clothes) | upper_body | blush | open_mouth | frilled_apron | maid_headdress | white_apron | enmaided | artist_logo | dated | kimono | red_eyes | wa_maid | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:----------------------|:--------------|:---------------|:--------------------|:----------------|:-----------|:-------|:--------------------|:--------------|:-------------------|:--------|:-----------------------------|:--------------|:------------|:--------|:-----------|:-----------------|:------------|:--------------------|:-------------|:--------|:-------------|:----------------|:-----------------|:--------------|:-----------|:--------------|:--------|:---------|:-----------|:----------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | X | | X | X | X | | X | | X | | | | | | | X | X | X | X | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | | X | | | X | X | | X | X | | | | | X | X | X | | | | | X | X | X | X | X | X | X | X | X |
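The IMG+TXT packages listed above pair each image with its tag annotations. As a minimal sketch of consuming such a package after extracting one of the zips — assuming the common side-by-side layout where tags live in a `.txt` file sharing the image's stem; the `load_img_txt_pairs` helper is illustrative only, and the raw package is officially loaded via waifuc as shown above:

```python
import os
import tempfile

def load_img_txt_pairs(dataset_dir):
    """Yield (image_path, tag_string) pairs from an extracted IMG+TXT package."""
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        if os.path.exists(txt_path):  # tags assumed to live in a same-stem .txt file
            with open(txt_path, encoding="utf-8") as f:
                yield os.path.join(dataset_dir, name), f.read().strip()

# Tiny self-contained demo with a fake extracted package
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "001.png"), "wb").close()
with open(os.path.join(demo_dir, "001.txt"), "w", encoding="utf-8") as f:
    f.write("1girl, serafuku, solo")

pairs = list(load_img_txt_pairs(demo_dir))
print(pairs)
```

Each pair can then be fed to whatever training pipeline consumes image/caption files.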
CyberHarem/ishigaki_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T04:38:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:26:53+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ishigaki (Kantai Collection) ======================================= This is the dataset of ishigaki (Kantai Collection), containing 115 images and their tags. The core tags of this character are 'black\_hair, short\_hair, ribbon, red\_ribbon, hair\_ribbon, white\_ribbon, bangs, purple\_eyes, neck\_ribbon, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
a5be0886f9702a21bfc35c9a4acf0e10e77384f4
# Dataset Card for "prompt_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Andyrasika/prompt_qa
[ "language:en", "license:creativeml-openrail-m", "dialogue", "region:us" ]
2023-08-23T04:58:27+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 294966.75, "num_examples": 423}, {"name": "test", "num_bytes": 98322.25, "num_examples": 141}], "download_size": 213420, "dataset_size": 393289}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["dialogue"]}
2023-08-23T05:05:33+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #dialogue #region-us
# Dataset Card for "prompt_qa" More Information needed
[ "# Dataset Card for \"prompt_qa\"\n\nMore Information needed" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #dialogue #region-us \n", "# Dataset Card for \"prompt_qa\"\n\nMore Information needed" ]
[ 26, 15 ]
[ "passage: TAGS\n#language-English #license-creativeml-openrail-m #dialogue #region-us \n# Dataset Card for \"prompt_qa\"\n\nMore Information needed" ]
7f9c003395784a0945742292bc0b65235ada9809
# Dataset Card for "autotree_automl_pol_gosdt_l512_d3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_pol_gosdt_l512_d3
[ "region:us" ]
2023-08-23T05:09:58+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "int64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "int64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 13320800000, "num_examples": 100000}, {"name": "validation", "num_bytes": 1332080000, "num_examples": 10000}], "download_size": 960924312, "dataset_size": 14652880000}}
2023-08-25T16:25:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotree_automl_pol_gosdt_l512_d3" More Information needed
[ "# Dataset Card for \"autotree_automl_pol_gosdt_l512_d3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotree_automl_pol_gosdt_l512_d3\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_pol_gosdt_l512_d3\"\n\nMore Information needed" ]
636306301a7fb7511fc4c15a08f1a8223495e54d
# Dataset Card for "my-pandas-dataset-AbstractAndLink" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlvianKhairi/my-pandas-dataset-AbstractAndLink
[ "region:us" ]
2023-08-23T05:11:47+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113697146, "num_examples": 276033}], "download_size": 48418586, "dataset_size": 113697146}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T05:33:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "my-pandas-dataset-AbstractAndLink" More Information needed
[ "# Dataset Card for \"my-pandas-dataset-AbstractAndLink\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"my-pandas-dataset-AbstractAndLink\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"my-pandas-dataset-AbstractAndLink\"\n\nMore Information needed" ]
f18f94869324d87ecb3f63825009b9612d4b0a1c
# GitHub Code Dataset ## Dataset Description The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions, totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery. ### How to use it The GitHub Code dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code: ```python from datasets import load_dataset ds = load_dataset("codeparrot/github-code", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n", 'repo_name': 'MirekSz/webpack-es6-ts', 'path': 'app/mods/mod190.js', 'language': 'JavaScript', 'license': 'isc', 'size': 73 } ``` You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below). Just pass the languages as a list. E.g., if your dream is to build a Codex model for Dockerfiles, use the following configuration: ```python ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"]) print(next(iter(ds))["code"]) #OUTPUT: """\ FROM rockyluke/ubuntu:precise ENV DEBIAN_FRONTEND="noninteractive" \ TZ="Europe/Amsterdam" ... """ ``` We also have access to the license of the origin repo of a file, so we can filter for licenses in the same way we filtered for languages: ```python from collections import Counter ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"]) licenses = [] for element in ds.take(10_000): licenses.append(element["license"]) print(Counter(licenses)) #OUTPUT: Counter({'mit': 9896, 'isc': 104}) ``` Naturally, you can also download the full dataset.
Note that this will download ~300GB of compressed text data and the uncompressed dataset will take up ~1TB of storage: ```python ds = load_dataset("codeparrot/github-code", split="train") ``` ## Data Structure ### Data Instances ```python { 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n", 'repo_name': 'MirekSz/webpack-es6-ts', 'path': 'app/mods/mod190.js', 'language': 'JavaScript', 'license': 'isc', 'size': 73 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |code|string|content of source file| |repo_name|string|name of the GitHub repository| |path|string|path of file in GitHub repository| |language|string|programming language as inferred by extension| |license|string|license of GitHub repository| |size|int|size of source file in bytes| ### Data Splits The dataset only contains a train split. ## Languages The dataset contains 30 programming languages with over 60 extensions: ```python { "Assembly": [".asm"], "Batchfile": [".bat", ".cmd"], "C": [".c", ".h"], "C#": [".cs"], "C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"], "CMake": [".cmake"], "CSS": [".css"], "Dockerfile": [".dockerfile", "Dockerfile"], "FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'], "GO": [".go"], "Haskell": [".hs"], "HTML":[".html"], "Java": [".java"], "JavaScript": [".js"], "Julia": [".jl"], "Lua": [".lua"], "Makefile": ["Makefile"], "Markdown": [".md", ".markdown"], "PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"], "Perl": [".pl", ".pm", ".pod", ".perl"], "PowerShell": ['.ps1', '.psd1', '.psm1'], "Python": [".py"], "Ruby": [".rb"], "Rust": [".rs"], "SQL": [".sql"], "Scala": [".scala"], "Shell": [".sh", ".bash", ".command", ".zsh"], "TypeScript": [".ts", ".tsx"], "TeX": [".tex"], "Visual Basic": [".vb"] } ``` ## Licenses Each example is also annotated with the license of the associated repository. 
There are in total 15 licenses: ```python [ 'mit', 'apache-2.0', 'gpl-3.0', 'gpl-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-3.0', 'lgpl-2.1', 'bsd-2-clause', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'isc', 'artistic-2.0' ] ``` ## Dataset Statistics The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below: ![dataset-statistics](https://huggingface.co/datasets/codeparrot/github-code/resolve/main/github-code-stats-alpha.png) | | Language |File Count| Size (GB)| |---:|:-------------|---------:|-------:| | 0 | Java | 19548190 | 107.70 | | 1 | C | 14143113 | 183.83 | | 2 | JavaScript | 11839883 | 87.82 | | 3 | HTML | 11178557 | 118.12 | | 4 | PHP | 11177610 | 61.41 | | 5 | Markdown | 8464626 | 23.09 | | 6 | C++ | 7380520 | 87.73 | | 7 | Python | 7226626 | 52.03 | | 8 | C# | 6811652 | 36.83 | | 9 | Ruby | 4473331 | 10.95 | | 10 | GO | 2265436 | 19.28 | | 11 | TypeScript | 1940406 | 24.59 | | 12 | CSS | 1734406 | 22.67 | | 13 | Shell | 1385648 | 3.01 | | 14 | Scala | 835755 | 3.87 | | 15 | Makefile | 679430 | 2.92 | | 16 | SQL | 656671 | 5.67 | | 17 | Lua | 578554 | 2.81 | | 18 | Perl | 497949 | 4.70 | | 19 | Dockerfile | 366505 | 0.71 | | 20 | Haskell | 340623 | 1.85 | | 21 | Rust | 322431 | 2.68 | | 22 | TeX | 251015 | 2.15 | | 23 | Batchfile | 236945 | 0.70 | | 24 | CMake | 175282 | 0.54 | | 25 | Visual Basic | 155652 | 1.91 | | 26 | FORTRAN | 142038 | 1.62 | | 27 | PowerShell | 136846 | 0.69 | | 28 | Assembly | 82905 | 0.78 | | 29 | Julia | 58317 | 0.29 | ## Dataset Creation The dataset was created in two steps: 1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_. 2. 
Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)). ## Considerations for Using the Data The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code as well as sensitive information like passwords or usernames. ## Releases You can load any older version of the dataset with the `revision` argument: ```python ds = load_dataset("codeparrot/github-code", revision="v1.0") ``` ### v1.0 - Initial release of dataset - The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_ ### v1.1 - Fix missing Scala/TypeScript - Fix deduplication issue with inconsistent Python `hash` - The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
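The `language` field described above is inferred from each file's extension using the mapping listed earlier. A minimal sketch of that inference — the `infer_language` helper and the abbreviated mapping are illustrative assumptions, not part of the dataset's actual tooling:

```python
import os
from typing import Optional

# Abbreviated excerpt of the extension-to-language mapping listed above
EXT_TO_LANG = {
    ".py": "Python",
    ".js": "JavaScript",
    ".ts": "TypeScript",
    ".rs": "Rust",
    ".go": "GO",
    "Dockerfile": "Dockerfile",  # matched by filename, not extension
    "Makefile": "Makefile",
}

def infer_language(path: str) -> Optional[str]:
    """Infer a file's language from its path; None if not in the mapping."""
    base = os.path.basename(path)
    if base in EXT_TO_LANG:  # extension-less special cases first
        return EXT_TO_LANG[base]
    _, ext = os.path.splitext(base)
    return EXT_TO_LANG.get(ext)

print(infer_language("app/mods/mod190.js"))  # the sample record's path above
```

For the sample record shown earlier (`app/mods/mod190.js`), this yields `JavaScript`, matching its `language` field.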
Harshpreet-singh1/datasetfinetune
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:unknown", "language:code", "license:other", "region:us" ]
2023-08-23T05:15:34+00:00
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "github-code"}
2023-08-23T05:31:56+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us
GitHub Code Dataset =================== Dataset Description ------------------- The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery. ### How to use it The GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code: You can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration: We also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages: Naturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage: Data Structure -------------- ### Data Instances ### Data Fields Field: code, Type: string, Description: content of source file Field: repo\_name, Type: string, Description: name of the GitHub repository Field: path, Type: string, Description: path of file in GitHub repository Field: language, Type: string, Description: programming language as inferred by extension Field: license, Type: string, Description: license of GitHub repository Field: size, Type: int, Description: size of source file in bytes ### Data Splits The dataset only contains a train split. Languages --------- The dataset contains 30 programming languages with over 60 extensions: Licenses -------- Each example is also annotated with the license of the associated repository.
There are in total 15 licenses: Dataset Statistics ------------------ The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below: !dataset-statistics Dataset Creation ---------------- The dataset was created in two steps: 1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*. 2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespace) were dropped (full preprocessing script here). Considerations for Using the Data --------------------------------- The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code as well as sensitive information like passwords or usernames. Releases -------- You can load any older version of the dataset with the 'revision' argument: ### v1.0 * Initial release of dataset * The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1* ### v1.1 * Fix missing Scala/TypeScript * Fix deduplication issue with inconsistent Python 'hash' * The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*
[ "### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nLanguages\n---------\n\n\nThe dataset contains 30 programming languages with over 60 extensions:\n\n\nLicenses\n--------\n\n\nEach example is also annotated with the license of the associated repository. 
There are in total 15 licenses:\n\n\nDataset Statistics\n------------------\n\n\nThe dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:\n\n\n!dataset-statistics\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created in two steps:\n\n\n1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*.\n2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script here).\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of source code from a wide range of repositories. As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames.\n\n\nReleases\n--------\n\n\nYou can load any older version of the dataset with the 'revision' argument:", "### v1.0\n\n\n* Initial release of dataset\n* The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1*", "### v1.1\n\n\n* Fix missing Scala/TypeScript\n* Fix deduplication issue with inconsistent Python 'hash'\n* The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us \n", "### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. 
Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nLanguages\n---------\n\n\nThe dataset contains 30 programming languages with over 60 extensions:\n\n\nLicenses\n--------\n\n\nEach example is also annotated with the license of the associated repository. There are in total 15 licenses:\n\n\nDataset Statistics\n------------------\n\n\nThe dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:\n\n\n!dataset-statistics\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created in two steps:\n\n\n1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*.\n2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script here).\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of source code from a wide range of repositories. 
As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames.\n\n\nReleases\n--------\n\n\nYou can load any older version of the dataset with the 'revision' argument:", "### v1.0\n\n\n* Initial release of dataset\n* The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1*", "### v1.1\n\n\n* Fix missing Scala/TypeScript\n* Fix deduplication issue with inconsistent Python 'hash'\n* The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*" ]
[ 75, 238, 6, 115, 312, 34, 48 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us \n### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------### Data Instances### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes" ]
a976cbc4393da000c8660eec5832f0a94e665538
# Dataset of kishinami/ๅฒธๆณข (Kantai Collection) This is the dataset of kishinami/ๅฒธๆณข (Kantai Collection), containing 410 images and their tags. The core tags of this character are `bangs, brown_hair, short_hair, ahoge, blunt_bangs, wavy_hair, brown_eyes, bow`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 410 | 365.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kishinami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 410 | 234.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kishinami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 912 | 479.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kishinami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 410 | 332.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kishinami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 912 | 640.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kishinami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kishinami_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, long_sleeves, pleated_dress, purple_dress, school_uniform, simple_background, solo, white_shirt, grey_pantyhose, looking_at_viewer, white_background, seamed_legwear, cowboy_shot, smile, blue_bowtie | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, aqua_bowtie, blue_bowtie, grey_pantyhose, long_sleeves, looking_at_viewer, pleated_dress, purple_dress, school_uniform, seamed_legwear, smile, white_shirt, cowboy_shot, solo, yellow_eyes, character_name, purple_pantyhose | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_bow, full_body, grey_pantyhose, long_sleeves, pleated_dress, purple_dress, school_uniform, seamed_legwear, simple_background, solo, white_shirt, smile, white_background, lace-up_boots, looking_at_viewer, no_pupils, standing, aqua_bowtie | | 3 | 16 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, long_sleeves, purple_dress, school_uniform, solo, upper_body, white_shirt, looking_at_viewer, smile, simple_background, aqua_bowtie, white_background, blue_bowtie, blush | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, grey_pantyhose, long_sleeves, machinery, pleated_dress, purple_dress, school_uniform, solo, white_shirt, adapted_turret, cannon, looking_at_viewer, rigging, seamed_legwear, smokestack, aqua_bowtie, cowboy_shot, signature, torpedo_launcher | | 5 | 22 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, navel, solo, sports_bra, blue_bra, blue_panties, small_breasts, looking_at_viewer, collarbone, blush, simple_background, cowboy_shot, white_background, underwear_only, white_shirt, open_shirt | | 6 | 18 | ![](samples/6/clu6-sample0.png) | 
![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, looking_at_viewer, solo, cowboy_shot, small_breasts, bikini, simple_background, collarbone, navel, standing, white_background, yellow_eyes, one-piece_swimsuit, school_swimsuit | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, christmas, santa_hat, solo, aqua_dress, fur_trim, red_headwear, santa_costume, smile, alternate_costume, holding_sack, looking_at_viewer, yellow_eyes, blush, capelet, cowboy_shot, heart, open_mouth | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, wrist_cuffs, blue_bowtie, purple_leotard, looking_at_viewer, small_breasts, cowboy_shot, rabbit_tail, fishnet_pantyhose, adapted_costume, grey_pantyhose, simple_background, thighband_pantyhose, white_background, dated, no_pupils | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | pleated_dress | purple_dress | school_uniform | simple_background | solo | white_shirt | grey_pantyhose | looking_at_viewer | white_background | seamed_legwear | cowboy_shot | smile | blue_bowtie | aqua_bowtie | yellow_eyes | character_name | purple_pantyhose | blue_bow | full_body | lace-up_boots | no_pupils | standing | upper_body | blush | machinery | adapted_turret | cannon | rigging | smokestack | signature | torpedo_launcher | navel | sports_bra | blue_bra | blue_panties | small_breasts | collarbone | underwear_only | open_shirt | bikini | one-piece_swimsuit | school_swimsuit | christmas | santa_hat | aqua_dress | fur_trim | red_headwear | santa_costume | alternate_costume 
| holding_sack | capelet | heart | open_mouth | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | purple_leotard | rabbit_tail | fishnet_pantyhose | adapted_costume | thighband_pantyhose | dated | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:----------------|:---------------|:-----------------|:--------------------|:-------|:--------------|:-----------------|:--------------------|:-------------------|:-----------------|:--------------|:--------|:--------------|:--------------|:--------------|:-----------------|:-------------------|:-----------|:------------|:----------------|:------------|:-----------|:-------------|:--------|:------------|:-----------------|:---------|:----------|:-------------|:------------|:-------------------|:--------|:-------------|:-----------|:---------------|:----------------|:-------------|:-----------------|:-------------|:---------|:---------------------|:------------------|:------------|:------------|:-------------|:-----------|:---------------|:----------------|:--------------------|:---------------|:----------|:--------|:-------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:-----------------|:--------------|:--------------------|:------------------|:----------------------|:--------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 16 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | X | X | X | X | X | | X | X | | | X | X | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | | X | X | X | X | | X | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 22 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | | X | X | X | | X | X | | X | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 18 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | | X | X | | | X | X | | X | | | | X | | | | | | | X | | | | | | | | | | X | | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 5 | 
![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | | | X | | | X | | | X | X | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | | X | X | | X | X | X | | X | | X | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/kishinami_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T05:22:40+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T21:59:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kishinami/ๅฒธๆณข (Kantai Collection) =========================================== This is the dataset of kishinami/ๅฒธๆณข (Kantai Collection), containing 410 images and their tags. The core tags of this character are 'bangs, brown\_hair, short\_hair, ahoge, blunt\_bangs, wavy\_hair, brown\_eyes, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
fbd8158a41a5b294c0ab94091ceeb77182d2e6ec
# Dataset of maestrale (Kantai Collection) This is the dataset of maestrale (Kantai Collection), containing 198 images and their tags. The core tags of this character are `long_hair, one_side_up, bangs, green_eyes, blunt_bangs, ribbon, hair_ornament, hair_ribbon, grey_hair, anchor_hair_ornament, white_ribbon, white_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 198 | 179.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maestrale_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 198 | 121.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maestrale_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 448 | 255.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maestrale_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 198 | 167.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maestrale_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 448 | 332.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maestrale_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/maestrale_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, anchor_necklace, sailor_collar, sailor_dress, simple_background, sleeveless_dress, solo, striped, tan, white_background, white_dress, full_body, looking_at_viewer, smile, hat, neckerchief | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, neckerchief, sailor_dress, simple_background, sleeveless_dress, solo, striped, white_background, white_dress, anchor_necklace, looking_at_viewer, tan, one-hour_drawing_challenge, twitter_username, open_mouth, white_sailor_collar, smile | | 2 | 35 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, solo, blush, smile, one-piece_tan, open_mouth, simple_background, polka_dot_bikini, white_background, navel, pink_bikini, cowboy_shot, polka_dot_ribbon, frills, twitter_username | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | anchor_necklace | sailor_collar | sailor_dress | simple_background | sleeveless_dress | solo | striped | tan | white_background | white_dress | full_body | looking_at_viewer | smile | hat | neckerchief | 
one-hour_drawing_challenge | twitter_username | open_mouth | white_sailor_collar | blush | one-piece_tan | polka_dot_bikini | navel | pink_bikini | cowboy_shot | polka_dot_ribbon | frills | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:----------------|:---------------|:--------------------|:-------------------|:-------|:----------|:------|:-------------------|:--------------|:------------|:--------------------|:--------|:------|:--------------|:-----------------------------|:-------------------|:-------------|:----------------------|:--------|:----------------|:-------------------|:--------|:--------------|:--------------|:-------------------|:---------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | X | X | X | X | X | | X | X | | X | X | X | X | X | | | | | | | | | | 2 | 35 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | X | | X | | | X | | | X | X | | | | X | X | | X | X | X | X | X | X | X | X |
CyberHarem/maestrale_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T05:24:29+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T03:37:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of maestrale (Kantai Collection) ======================================== This is the dataset of maestrale (Kantai Collection), containing 198 images and their tags. The core tags of this character are 'long\_hair, one\_side\_up, bangs, green\_eyes, blunt\_bangs, ribbon, hair\_ornament, hair\_ribbon, grey\_hair, anchor\_hair\_ornament, white\_ribbon, white\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
b75eb9960a1fc28f08fdc9906d66568a65b59f1d
# Dataset Card for emotion This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("argilla/emotion") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("argilla/emotion") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure). 
There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**. The **fields** are the dataset records themselves; for the moment only text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | text | Text | TextField | True | False | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | label | Label | LabelQuestion | True | N/A | ['0', '1', '2', '3', '4', '5'] | **✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to those question names, containing the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above. Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section. 
### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "fields": { "text": "i didnt feel humiliated" }, "metadata": { "split": "train" }, "responses": [ { "status": "submitted", "values": { "label": { "value": "0" } } } ], "suggestions": [] } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "label": [ { "status": "submitted", "user_id": null, "value": "0" } ], "label-suggestion": null, "label-suggestion-metadata": { "agent": null, "score": null, "type": null }, "metadata": "{\"split\": \"train\"}", "text": "i didnt feel humiliated" } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves; for the moment only text fields are supported. These are the ones that will be used to provide responses to the questions. * **text** is of type `TextField`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **label** is of type `LabelQuestion` with the following allowed values ['0', '1', '2', '3', '4', '5']. * **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4', '5']. Additionally, we also have one more field which is optional and is the following: * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines Argilla port of [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion). #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
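The correspondence between the two record layouts shown in Data Instances above can be sketched with plain dictionaries. This is a sketch under the assumption of a single `label` question; the `to_hf_record` helper is illustrative, not part of the Argilla API.

```python
import json

def to_hf_record(argilla_record):
    # Copy the text fields verbatim (here just "text").
    hf = dict(argilla_record["fields"])
    # Metadata is serialized to a JSON string in the flat layout.
    hf["metadata"] = json.dumps(argilla_record.get("metadata", {}))
    hf["external_id"] = argilla_record.get("external_id")
    # One entry per response to the "label" question.
    hf["label"] = [
        {"status": r["status"],
         "user_id": r.get("user_id"),
         "value": r["values"]["label"]["value"]}
        for r in argilla_record.get("responses", [])
    ]
    # Suggestions are optional; an empty list flattens to null values.
    hf["label-suggestion"] = None
    hf["label-suggestion-metadata"] = {"agent": None, "score": None, "type": None}
    return hf

record = {
    "fields": {"text": "i didnt feel humiliated"},
    "metadata": {"split": "train"},
    "responses": [{"status": "submitted", "values": {"label": {"value": "0"}}}],
    "suggestions": [],
}
flat = to_hf_record(record)
```

In practice `FeedbackDataset.from_huggingface` performs this mapping (and its inverse) for you; the sketch only makes the column naming explicit.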
argilla/emotion
[ "size_categories:10K<n<100K", "rlfh", "argilla", "human-feedback", "region:us" ]
2023-08-23T05:33:42+00:00
{"size_categories": "10K<n<100K", "tags": ["rlfh", "argilla", "human-feedback"]}
2023-08-23T05:37:14+00:00
[]
[]
TAGS #size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us
Dataset Card for emotion ======================== This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'. Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla. * Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'. * The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code: ### Load with 'datasets' To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code: ### Supported Tasks and Leaderboards This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section. There are no leaderboards associated with this dataset. ### Languages Dataset Structure ----------------- ### Data in Argilla The dataset is created in Argilla with: fields, questions, suggestions, and guidelines. The fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions. 
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice. NEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above. Finally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section. ### Data Instances An example of a dataset instance in Argilla looks as follows: While the same record in HuggingFace 'datasets' looks as follows: ### Data Fields Among the dataset fields, we differentiate between the following: * Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions. + text is of type 'TextField'. * Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'. + label is of type 'LabelQuestion' with the following allowed values ['0', '1', '2', '3', '4', '5']. * NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. + (optional) label-suggestion is of type 'label\_selection' with the following allowed values ['0', '1', '2', '3', '4', '5']. 
Additionally, we also have one more field which is optional and is the following: * external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is 'train'. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation guidelines Argilla port of dair-ai/emotion. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions
[ "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, single choice, or multiple choice.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'TextField'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'LabelQuestion' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. 
Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n\n\nAdditionally, we also have one more field which is optional and is the following:\n\n\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nArgilla port of dair-ai/emotion.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us \n", "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, single choice, or multiple choice.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'TextField'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'LabelQuestion' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. 
Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n\n\nAdditionally, we also have one more field which is optional and is the following:\n\n\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nArgilla port of dair-ai/emotion.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 29, 162, 40, 53, 68, 11, 208, 40, 375, 27, 7, 4, 10, 10, 5, 16, 5, 9, 18, 7, 8, 14, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------", "passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, single choice, or multiple choice.\n\n\n\nNEW Additionally, we also have suggestions, which are linked to the existing questions, and so on, named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.\n\n\nFinally, the guidelines are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'TextField'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'LabelQuestion' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n* NEW Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. 
Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'label\\_selection' with the following allowed values ['0', '1', '2', '3', '4', '5'].\n\n\nAdditionally, we also have one more field which is optional and is the following:\n\n\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation guidelines\n\n\nArgilla port of dair-ai/emotion.#### Annotation process" ]
a3fc4ede9cdbf812f15e0374ad55fd420a65c4f4
# Dataset Card for "autotree_automl_electricity_gosdt_l512_d3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_automl_electricity_gosdt_l512_d3
[ "region:us" ]
2023-08-23T05:44:34+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 5538400000, "num_examples": 100000}, {"name": "validation", "num_bytes": 553840000, "num_examples": 10000}], "download_size": 1564957370, "dataset_size": 6092240000}}
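A quick consistency check on the split sizes in the dataset_info above (pure arithmetic, no libraries): both splits average the same number of bytes per example, as expected for a fixed-width feature schema.

```python
# Split sizes taken from the dataset_info above.
train_bytes, train_examples = 5_538_400_000, 100_000
val_bytes, val_examples = 553_840_000, 10_000

bytes_per_train_example = train_bytes // train_examples  # 55_384
bytes_per_val_example = val_bytes // val_examples        # 55_384
```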
2023-08-26T01:46:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotree_automl_electricity_gosdt_l512_d3" More Information needed
[ "# Dataset Card for \"autotree_automl_electricity_gosdt_l512_d3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotree_automl_electricity_gosdt_l512_d3\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_electricity_gosdt_l512_d3\"\n\nMore Information needed" ]
df3b278c6e73bf0eec20636cadb6fe7e234f0ef3
# Dataset of amagiri (Kantai Collection) This is the dataset of amagiri (Kantai Collection), containing 149 images and their tags. The core tags of this character are `long_hair, ponytail, grey_hair, glasses, hair_between_eyes, grey_eyes, very_long_hair, bangs, asymmetrical_bangs, grey-framed_eyewear`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 149 | 114.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amagiri_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 149 | 76.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amagiri_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 311 | 156.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amagiri_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 149 | 105.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amagiri_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 311 | 205.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/amagiri_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/amagiri_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bike_shorts, navel, solo, abs, looking_at_viewer, simple_background, white_background, cowboy_shot, midriff, black_shorts, smile, sports_bra, tsurime, undershirt, character_name, one-hour_drawing_challenge, small_breasts | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, pleated_skirt, serafuku, solo, white_background, grey_skirt, short_sleeves, simple_background, grey_sailor_collar, grin, tsurime | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, serafuku, short_sleeves, simple_background, solo, upper_body, grey_sailor_collar, looking_at_viewer, tsurime, smile, blue_background, white_background | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, solo, black_shirt, official_alternate_costume, casual, grin, simple_background, black_headwear, jeans, anchor, baseball_cap, black_footwear, blush, full_body, holding, shorts, umbrella, 
white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bike_shorts | navel | solo | abs | looking_at_viewer | simple_background | white_background | cowboy_shot | midriff | black_shorts | smile | sports_bra | tsurime | undershirt | character_name | one-hour_drawing_challenge | small_breasts | pleated_skirt | serafuku | grey_skirt | short_sleeves | grey_sailor_collar | grin | upper_body | blue_background | black_shirt | official_alternate_costume | casual | black_headwear | jeans | anchor | baseball_cap | black_footwear | blush | full_body | holding | shorts | umbrella | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:--------|:-------|:------|:--------------------|:--------------------|:-------------------|:--------------|:----------|:---------------|:--------|:-------------|:----------|:-------------|:-----------------|:-----------------------------|:----------------|:----------------|:-----------|:-------------|:----------------|:---------------------|:-------|:-------------|:------------------|:--------------|:-----------------------------|:---------|:-----------------|:--------|:---------|:---------------|:-----------------|:--------|:------------|:----------|:---------|:-----------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | | X | X | X | | | | | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 
2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | | X | X | X | | | | X | | X | | | | | | X | | X | X | | X | X | | | | | | | | | | | | | | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | X | X | X | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/amagiri_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T05:44:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:58:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of amagiri (Kantai Collection) ====================================== This is the dataset of amagiri (Kantai Collection), containing 149 images and their tags. The core tags of this character are 'long\_hair, ponytail, grey\_hair, glasses, hair\_between\_eyes, grey\_eyes, very\_long\_hair, bangs, asymmetrical\_bangs, grey-framed\_eyewear', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
33467b328caeb87df58ffc7fd437f136eeb7f566
# Dataset of shinyou (Kantai Collection) This is the dataset of shinyou (Kantai Collection), containing 25 images and their tags. The core tags of this character are `bangs, blonde_hair, blue_eyes, long_hair, side_ponytail, blunt_bangs, hair_ornament, maid_headdress, hair_ribbon, ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 25 | 19.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinyou_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 25 | 15.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinyou_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 56 | 28.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinyou_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 25 | 19.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinyou_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 56 | 34.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinyou_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/shinyou_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, white_apron, green_dress, enmaided, maid_apron, blush, long_sleeves, smile, cowboy_shot, holding, simple_background, frilled_apron, tray | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, dougi, smile, solo, blush, upper_body, hakama_short_skirt, red_hakama | ### Table Version | # | Samples | 
Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | white_apron | green_dress | enmaided | maid_apron | blush | long_sleeves | smile | cowboy_shot | holding | simple_background | frilled_apron | tray | dougi | upper_body | hakama_short_skirt | red_hakama | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------------|:--------------|:-----------|:-------------|:--------|:---------------|:--------|:--------------|:----------|:--------------------|:----------------|:-------|:--------|:-------------|:---------------------|:-------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | | | | X | | X | | | | | | X | X | X | X |
CyberHarem/shinyou_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T05:48:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:35:31+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of shinyou (Kantai Collection) ====================================== This is the dataset of shinyou (Kantai Collection), containing 25 images and their tags. The core tags of this character are 'bangs, blonde\_hair, blue\_eyes, long\_hair, side\_ponytail, blunt\_bangs, hair\_ornament, maid\_headdress, hair\_ribbon, ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
17f17a0d78316eb8ac02a6963a3589fffd45b285
# Dataset Card for "test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HuangHaoyang/test
[ "region:us" ]
2023-08-23T05:50:11+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "test1", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 441048.0, "num_examples": 2}], "download_size": 440373, "dataset_size": 441048.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-24T05:51:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test" More Information needed
[ "# Dataset Card for \"test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test\"\n\nMore Information needed" ]
[ 6, 11 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test\"\n\nMore Information needed" ]
fd80802010fd902c0fdf0fb6152bf654f137194b
# Translated into Korean with DeepL All texts are translated with DeepL (machine translated). - Issue: some data items are missing because of the DeepL plan and processing method. I use a very cheap plan, and all data was merged into a single file and then split back apart with a small amount of code and by hand. - This is a sample/test run of dataset creation with DeepL. - Original Dataset: totally-not-an-llm/EverythingLM-data-V2 # EverythingLM V2 Dataset **EverythingLM V2** is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions. ### Differences from V1: - All data in V2 is generated by GPT4 - Higher quality dataset generation pipeline: - More humanlike seed prompts - Fixed some bugs in the script - More diverse creative writing - More diverse seed prompts in general - Attempt not to overfit the model on complex instructions by occasionally skipping evol ### Cost: Reproducing this dataset would cost roughly $40. ### Instruction Categories: - Reasoning - Creative Writing - General Knowledge - Brainstorming - Search Query - Coding - Basic Instruct We also leverage various system prompts for evol-instruct and for responding to prompts. This dataset has also been filtered to remove OpenAI alignment. ### How it stands out: - Long, detailed outputs - Humanlike creativity - CoT reasoning - Complex & challenging tasks ### Plans: - Train Llama 7b & 13b models (13b model V1 trained) - Train Llama 70b QLoRA - Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✓ Included in this repo is the script to generate the dataset.
ziozzang/EverythingLM-data-V2-Ko
[ "language:ko", "license:mit", "region:us" ]
2023-08-23T05:53:09+00:00
{"language": ["ko"], "license": "mit"}
2023-08-23T06:03:47+00:00
[]
[ "ko" ]
TAGS #language-Korean #license-mit #region-us
# Translated into Korean with DeepL All texts are translated with DeepL (machine translated). - Issue: some data items are missing because of the DeepL plan and processing method. I use a very cheap plan, and all data was merged into a single file and then split back apart with a small amount of code and by hand. - This is a sample/test run of dataset creation with DeepL. - Original Dataset: totally-not-an-llm/EverythingLM-data-V2 # EverythingLM V2 Dataset EverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions. ### Differences from V1: - All data in V2 is generated by GPT4 - Higher quality dataset generation pipeline: - More humanlike seed prompts - Fixed some bugs in the script - More diverse creative writing - More diverse seed prompts in general - Attempt not to overfit the model on complex instructions by occasionally skipping evol ### Cost: Reproducing this dataset would cost roughly $40. ### Instruction Categories: - Reasoning - Creative Writing - General Knowledge - Brainstorming - Search Query - Coding - Basic Instruct We also leverage various system prompts for evol-instruct and for responding to prompts. This dataset has also been filtered to remove OpenAI alignment. ### How it stands out: - Long, detailed outputs - Humanlike creativity - CoT reasoning - Complex & challenging tasks ### Plans: - Train Llama 7b & 13b models (13b model V1 trained) - Train Llama 70b QLoRA - Generate V2 of the dataset, with more categories and GPT-4 (DONE) Included in this repo is the script to generate the dataset.
[ "# Translated into Korean with DeepL\nAll texts are translated with DeepL (machine translated).\n- Issue: some data items are missing because of the DeepL plan and processing method. I use a very cheap plan, and all data was merged into a single file and then split back apart with a small amount of code and by hand.\n - This is a sample/test run of dataset creation with DeepL.\n- Original Dataset: totally-not-an-llm/EverythingLM-data-V2", "# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.", "### Differences from V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humanlike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol", "### Cost:\nReproducing this dataset would cost roughly $40.", "### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.", "### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks", "### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset." ]
[ "TAGS\n#language-Korean #license-mit #region-us \n", "# Translated into Korean with DeepL\nAll texts are translated with DeepL (machine translated).\n- Issue: some data items are missing because of the DeepL plan and processing method. I use a very cheap plan, and all data was merged into a single file and then split back apart with a small amount of code and by hand.\n - This is a sample/test run of dataset creation with DeepL.\n- Original Dataset: totally-not-an-llm/EverythingLM-data-V2", "# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.", "### Differences from V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humanlike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol", "### Cost:\nReproducing this dataset would cost roughly $40.", "### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.", "### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks", "### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset." ]
[ 16, 109, 69, 83, 16, 69, 31, 69 ]
[ "passage: TAGS\n#language-Korean #license-mit #region-us \n# Translated into Korean with DeepL\nAll texts are translated with DeepL (machine translated).\n- Issue: some data items are missing because of the DeepL plan and processing method. I use a very cheap plan, and all data was merged into a single file and then split back apart with a small amount of code and by hand.\n - This is a sample/test run of dataset creation with DeepL.\n- Original Dataset: totally-not-an-llm/EverythingLM-data-V2# EverythingLM V2 Dataset\n\nEverythingLM V2 is a diverse instruct dataset consisting of 1k human-assistant conversations. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.### Differences from V1:\n\n- All data in V2 is generated by GPT4\n- Higher quality dataset generation pipeline:\n - More humanlike seed prompts\n - Fixed some bugs in the script\n - More diverse creative writing\n - More diverse seed prompts in general\n - Attempt not to overfit the model on complex instructions by occasionally skipping evol### Cost:\nReproducing this dataset would cost roughly $40.### Instruction Categories:\n\n- Reasoning\n- Creative Writing\n- General Knowledge\n- Brainstorming\n- Search Query\n- Coding\n- Basic Instruct\n\nWe also leverage various system prompts for evol-instruct and for responding to prompts.\nThis dataset has also been filtered to remove OpenAI alignment.### How it stands out:\n\n- Long, detailed outputs\n- Humanlike creativity\n- CoT reasoning\n- Complex & challenging tasks### Plans:\n\n- Train Llama 7b & 13b models (13b model V1 trained)\n- Train Llama 70b QLoRA\n- Generate V2 of the dataset, with more categories and GPT-4 (DONE) \n\nIncluded in this repo is the script to generate the dataset." ]
9f2115d42709817ceb2b56f2f291d837939c7a4c
# Dataset Card for "fw_squad_num_train_1000_eval_100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/fw_squad_num_train_1000_eval_100
[ "region:us" ]
2023-08-23T06:10:34+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "train_doc2id", "path": "data/train_doc2id-*"}, {"split": "train_id2doc", "path": "data/train_id2doc-*"}, {"split": "train_find_word", "path": "data/train_find_word-*"}, {"split": "eval_find_word", "path": "data/eval_find_word-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 299539, "num_examples": 2100}, {"name": "train_doc2id", "num_bytes": 187173, "num_examples": 1100}, {"name": "train_id2doc", "num_bytes": 190473, "num_examples": 1100}, {"name": "train_find_word", "num_bytes": 109066, "num_examples": 1000}, {"name": "eval_find_word", "num_bytes": 10589, "num_examples": 100}], "download_size": 399884, "dataset_size": 796840}}
2023-08-25T02:32:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fw_squad_num_train_1000_eval_100" More Information needed
[ "# Dataset Card for \"fw_squad_num_train_1000_eval_100\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fw_squad_num_train_1000_eval_100\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fw_squad_num_train_1000_eval_100\"\n\nMore Information needed" ]
c42734f0e34175a64d69630dc09fa0c59e478d18
# Dataset Card for "test-diploma-lucchi-cropped-new-mix-biggest" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
insanemyrr/test-diploma-lucchi-cropped-new-mix-biggest
[ "region:us" ]
2023-08-23T06:12:26+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "testing", "1": "training"}}}}], "splits": [{"name": "train", "num_bytes": 299779032.96, "num_examples": 3960}, {"name": "test", "num_bytes": 299751233.76, "num_examples": 3960}], "download_size": 599433953, "dataset_size": 599530266.72}}
2023-08-23T06:14:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test-diploma-lucchi-cropped-new-mix-biggest" More Information needed
[ "# Dataset Card for \"test-diploma-lucchi-cropped-new-mix-biggest\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test-diploma-lucchi-cropped-new-mix-biggest\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test-diploma-lucchi-cropped-new-mix-biggest\"\n\nMore Information needed" ]
0bcfdda5d9dd22c60a1cb54ce5b8091a84544e65
# Dataset Card for "fw_baseline_squad_train_1000_eval_100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/fw_baseline_squad_train_1000_eval_100
[ "region:us" ]
2023-08-23T06:13:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "eval_find_word", "path": "data/eval_find_word-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 355328, "num_examples": 1000}, {"name": "eval_find_word", "num_bytes": 35045, "num_examples": 100}, {"name": "validation", "num_bytes": 35045, "num_examples": 100}], "download_size": 259711, "dataset_size": 425418}}
2023-08-25T01:53:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fw_baseline_squad_train_1000_eval_100" More Information needed
[ "# Dataset Card for \"fw_baseline_squad_train_1000_eval_100\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fw_baseline_squad_train_1000_eval_100\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fw_baseline_squad_train_1000_eval_100\"\n\nMore Information needed" ]
a3af1f75089684b20345980669ff194d3258eaf7
# Dataset of sheffield/シェフィールド (Kantai Collection) This is the dataset of sheffield/シェフィールド (Kantai Collection), containing 161 images and their tags. The core tags of this character are `brown_hair, long_hair, blue_eyes, messy_hair, breasts, hair_between_eyes, ponytail`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 161 | 186.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sheffield_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 161 | 111.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sheffield_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 386 | 243.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sheffield_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 161 | 168.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sheffield_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| stage3-p480-1200 |      386 | 337.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sheffield_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/sheffield_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.
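The IMG+TXT packages pair every image with a same-named `.txt` sidecar holding its comma-separated tags, so they can be consumed without waifuc. A minimal sketch (the helper below is hypothetical, assuming one of the IMG+TXT archives has already been extracted to a local directory) for collecting image/tag pairs:

```python
import os

# common raster extensions; adjust if an archive contains other formats
IMAGE_EXTS = {'.png', '.jpg', '.jpeg', '.webp'}

def load_img_txt_pairs(dataset_dir):
    """Collect (image_path, tags) pairs from an extracted IMG+TXT package."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if not os.path.isfile(txt_path):
            continue  # images without a tag sidecar are skipped
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

Each returned entry keeps the tag list aligned with its image, which is mainly useful for feeding caption-based training pipelines.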
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, military_uniform, red_ascot, red_rose, solo, upper_body, white_gloves, simple_background, looking_at_viewer, white_background |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, brown_belt, cowboy_shot, military_uniform, pleated_skirt, red_ascot, red_rose, solo, white_gloves, white_skirt, white_background, simple_background |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, brown_belt, military_uniform, pleated_skirt, red_ascot, red_rose, solo, white_skirt, belt_buckle, cowboy_shot, white_gloves, blush, closed_mouth, long_sleeves |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_socks, brown_belt, full_body, kneehighs, military_uniform, red_ascot, red_rose, rudder_footwear, solo, white_gloves, white_skirt, belt_buckle, machinery, pleated_skirt, rigging, simple_background, white_background, turret, open_mouth, sitting, torn_clothes |
| 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, brown_belt, long_sleeves, red_capelet, red_rose, blush, closed_mouth, gift_box, solo, belt_buckle, black_dress, christmas, holding_gift, official_alternate_costume, black_thighhighs, cowboy_shot, medium_hair, military_uniform, red_ascot |
| 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, looking_at_viewer, solo, cowboy_shot, medium_breasts, covered_navel, one-hour_drawing_challenge, alternate_costume, bikini, blue_sky, cloud, collarbone, competition_swimsuit, dated, day, highleg_swimsuit, simple_background, small_breasts, white_background |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | military_uniform | red_ascot | red_rose | solo | upper_body | white_gloves | simple_background | looking_at_viewer | white_background | brown_belt | cowboy_shot | pleated_skirt | white_skirt | belt_buckle | blush | closed_mouth | long_sleeves | black_socks | full_body | kneehighs | rudder_footwear | machinery | rigging | turret | open_mouth | sitting | torn_clothes | red_capelet | gift_box | black_dress | christmas | holding_gift | official_alternate_costume | black_thighhighs | medium_hair | medium_breasts | covered_navel | one-hour_drawing_challenge | alternate_costume | bikini | blue_sky | cloud | collarbone | competition_swimsuit | dated | day | highleg_swimsuit | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:------------|:-----------|:-------|:-------------|:---------------|:--------------------|:--------------------|:-------------------|:-------------|:--------------|:----------------|:--------------|:--------------|:--------|:---------------|:---------------|:--------------|:------------|:------------|:------------------|:------------|:----------|:---------|:-------------|:----------|:---------------|:--------------|:-----------|:--------------|:------------|:---------------|:-----------------------------|:-------------------|:--------------|:-----------------|:----------------|:-----------------------------|:--------------------|:---------|:-----------|:--------|:-------------|:-----------------------|:--------|:------|:-------------------|:----------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | 
![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | | X | X | | X | X | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | | | | | | X | X | | | X | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | X | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
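Each cluster row above is effectively a conjunction of tags, so it can be matched back against per-image tag lists. A small sketch (hypothetical helper; the tag set is copied from cluster 0 of the raw-text table) of selecting the images that carry every tag of a cluster:

```python
# tags of cluster 0, copied from the "Raw Text Version" table above
CLUSTER_0 = {
    '1girl', 'military_uniform', 'red_ascot', 'red_rose', 'solo',
    'upper_body', 'white_gloves', 'simple_background',
    'looking_at_viewer', 'white_background',
}

def images_in_cluster(tagged_images, cluster_tags):
    """Return the names whose tag list contains every tag of the cluster.

    tagged_images maps filename -> list of tags.
    """
    return sorted(name for name, tags in tagged_images.items()
                  if cluster_tags <= set(tags))
```

This is only a membership check; the clusters themselves were presumably produced by a proper clustering pass over the tag vectors.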
CyberHarem/sheffield_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T06:14:31+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T00:07:03+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sheffield/シェフィールド (Kantai Collection) ================================================ This is the dataset of sheffield/シェフィールド (Kantai Collection), containing 161 images and their tags. The core tags of this character are 'brown\_hair, long\_hair, blue\_eyes, messy\_hair, breasts, hair\_between\_eyes, ponytail', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
b93217025b1e19cc148b5853b8b89952ba86172f
# Dataset Card for "my-pandas-dataset-Abstract_Link" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlvianKhairi/my-pandas-dataset-Abstract_Link
[ "region:us" ]
2023-08-23T06:20:36+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 455609414, "num_examples": 552066}], "download_size": 173420444, "dataset_size": 455609414}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T06:20:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "my-pandas-dataset-Abstract_Link" More Information needed
[ "# Dataset Card for \"my-pandas-dataset-Abstract_Link\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"my-pandas-dataset-Abstract_Link\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"my-pandas-dataset-Abstract_Link\"\n\nMore Information needed" ]
0f3969629bcc45c860279283dd2b9a614bce6d46
# Dataset of aircraft_carrier_oni/空母棲鬼 (Kantai Collection)

This is the dataset of aircraft_carrier_oni/空母棲鬼 (Kantai Collection), containing 90 images and their tags.

The core tags of this character are `long_hair, white_hair, one_side_up, very_long_hair, breasts, colored_skin, white_skin, large_breasts, red_eyes, pale_skin, orange_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size       | Download                                                                                                                               | Type       | Description                                                          |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              |       90 | 103.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_oni_kantaicollection/resolve/main/dataset-raw.zip)               | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              |       90 | 70.63 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_oni_kantaicollection/resolve/main/dataset-800.zip)               | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  |      179 | 121.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_oni_kantaicollection/resolve/main/dataset-stage3-p480-800.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             |       90 | 94.67 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_oni_kantaicollection/resolve/main/dataset-1200.zip)              | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 |      179 | 158.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aircraft_carrier_oni_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/aircraft_carrier_oni_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.
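Clusters such as the ones listed below are mined from the per-image tag lists, and the basic ingredient for that is tag frequency. A minimal sketch (hypothetical helper, assuming tags come as comma-separated strings like the `.txt` sidecars in the IMG+TXT packages) of counting them:

```python
from collections import Counter

def tag_frequencies(tag_strings):
    """Count tag occurrences over an iterable of comma-separated tag strings."""
    counts = Counter()
    for line in tag_strings:
        counts.update(tag.strip() for tag in line.split(',') if tag.strip())
    return counts
```

`tag_frequencies(...).most_common(20)` then gives a quick view of which tags dominate a package before any clustering.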
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 39 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | abyssal_ship, thighhighs, 1girl, black_dress, solo, armored_boots, gauntlets, sailor_dress, thigh_boots, short_dress, zettai_ryouiki, looking_at_viewer, sitting |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | abyssal_ship | thighhighs | 1girl | black_dress | solo | armored_boots | gauntlets | sailor_dress | thigh_boots | short_dress | zettai_ryouiki | looking_at_viewer | sitting |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 39 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/aircraft_carrier_oni_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T06:31:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T23:29:59+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of aircraft\_carrier\_oni/空母棲鬼 (Kantai Collection) ========================================================== This is the dataset of aircraft\_carrier\_oni/空母棲鬼 (Kantai Collection), containing 90 images and their tags. The core tags of this character are 'long\_hair, white\_hair, one\_side\_up, very\_long\_hair, breasts, colored\_skin, white\_skin, large\_breasts, red\_eyes, pale\_skin, orange\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
158d315c9cacbab9fabddef842c2ba94c24a5244
✔**Product Name** — [Rangii Toenail Fungus](https://rangii-toenail-fungus.jimdosite.com/)

✔**Category** — Collagen Synthesis

✔**Side Effect** — No Side Effects

✔**Availability** — [Online](https://www.healthsupplement24x7.com/get-rangii)

✔**Results** — In 1-2 Months

✔**Official Website** — [https://www.healthsupplement24x7.com/get-rangii](https://www.healthsupplement24x7.com/get-rangii)

[Rangii Toenail Fungus](https://pdfhost.io/v/A5DB1UWHb_Rangii_Toenail_Fungus_Helps_To_Rejuvenates_And_Revitalize_Toe_Skin_And_Nail_Health) is a powerful formula designed to help kill hard-to-treat fungi. The formula has been manufactured using 100% natural ingredients proven effective in eliminating toe fungus. The liquid will also help restore the health of your cuticles, giving you good-looking toenails.

[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGPvMj4dzR_SU3JPcnzPkMVzjYjyDoYJZJusKZrScFZqScuRwerBUubqhV8p3mkUQ255wK02NQFa2olvgw0flCmCl981YsZcCbbIRVxj4xoaUHTGy1ESYAPR59YksdiTCrKi6C34Bg2Jho6Jhf-ywtRbamMg6Vgd7HXJJIDUX27El_GDeGtK7o6CJEL0h6/w640-h280/Screenshot%20(3093).png)](https://www.healthsupplement24x7.com/get-rangii)

### **[Click Here to Buy Rangii Toenail Fungus](https://www.healthsupplement24x7.com/get-rangii)**

**What exactly is Rangii Toenail Fungus?**
------------------------------------------

[Rangii Toenail Fungus](https://rangii.clubeo.com) is a specially formulated liquid designed to combat fungal nail infections. This advanced treatment is meticulously crafted using cutting-edge technology and powerful natural ingredients to effectively eliminate the fungus and restore the health of your nails. The unique blend of active components and oils in the [Rangii Toenail Fungus](https://colab.research.google.com/drive/1kaCabmKiHAPlkTruYQdXKUF_e5DPLylO) works synergistically to penetrate deep into the affected nails, targeting the root cause of the infection and inhibiting fungal growth.
With regular use, this remarkable solution not only treats existing infections but also acts as a preventive measure against future outbreaks. With [Rangii Toenail Fungus](https://colab.research.google.com/drive/1BnmMAHLudCNoT3ibOCjEtsdGcCms5zsU), you can finally experience the relief and confidence of having fungus-free nails.

**How Does Rangii Work?**
-------------------------

[Rangii](https://healthsupplements24x7.blogspot.com/2023/08/rangii.html) is an easy liquid solution developed by experts and recommended by doctors. This serum works instantly on your fungus and skin. Unlike other nail fungus remover products, Rangii offers rapid results so that you can flaunt flawless skin without waiting for the treatment to kick in.

This liquid serum blends the power of two natural ingredients that have been used in traditional herbal remedies for centuries - Hyaluronic Acid and Vitamin E Extract. These two natural ingredients contribute to the effectiveness of the Rangii serum.

**Rangii Real Benefits:**
-------------------------

* **Stronger nails:** The ingredients of Rangii work from within, protecting and strengthening nails. No more fragile nails!
* **Nail growth:** The serum promotes healthy nail growth for long and glamorous nails.
* **Better nails:** The serum's moisturizing and nutrient-rich properties help revitalize nails.
* **Nail repair:** Nail fungus, discoloration and damage are gone. The serum's powerful ingredients address these issues, helping to keep your nails healthy.
* **Hydrate nails and cuticles:** The serum also hydrates the epidermis. It prevents dry skin and cuticle problems by moisturizing and softening.
* **Simple to use:** Applying the serum is easy. Apply the liquid to the nail and gently massage the area.
* **Moisturize nail cuticles:** [Rangii Toenail Fungus](https://www.sympla.com.br/evento/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health/2131910) complex holds that a dehydrated cuticle leaves nails dry and brittle. The liquid keeps them moist and strong, preventing infection.
* **Helper cells:** [Rangii](https://soundcloud.com/rangii/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health) removes toxins and creates new nail cells, which help strengthen nails and prevent infections.
* **Maintain blood flow:** Infected fingernails and feet can interfere with blood circulation. Rangii increases blood flow, delivering vitamins and oxygen that nourish the nail and speed up repair.
* **Improves collagen production:** Vitamins C and E in Rangii stimulate collagen production, which helps strengthen nails.

[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBjEna6XgxJiV3t-voAnVv4Mz5DJrHxKUb7cgRGkSScLzgFTUPeIzc9AzVv447O40qp7ooW3aPxtNZf0T68y7v9KGmz4F1sbzaUhzUPntZ3w8QAiOUqV6lz4eDOKZar0a2LljdcrCsanZsM6EaFJsTHz66oz-DUT_99e2Yj96EAnc1kzQug6EjaxeD_Rec/w640-h292/Screenshot%20(3094).png)](https://www.healthsupplement24x7.com/get-rangii)

### **[\[Big Savings Today\] Buy Rangii Toenail Fungus Before Stock Runs Out](https://www.healthsupplement24x7.com/get-rangii)**

**What are the ingredients in Rangii?**
---------------------------------------

[Rangii](https://www.townscript.com/e/rangii-toenail-fungus-120030) contains a lot of natural oils, minerals, vitamins, and herbal extracts that are guaranteed and proven to treat toenail fungus permanently.
According to the official website of [Rangii](https://events.humanitix.com/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health), the following are the ingredients in it:

* **Barbadensis:** It is said to reduce the skin irritation and nail breakage commonly caused by toenail fungus infection. It can also relieve pain and inflammation.
* **Pelargonium Graveolens Oil:** It is used to fight the body's chronic inflammatory response to fungal infections. It can soothe this inflammation and reduce the irritation of the nails and skin.
* **Horsetail:** It is used for its antifungal properties that can find fungi and kill them. It is an excellent remedy for toenail fungus.
* **Lemon Extract:** It is said to boost circulation towards your toes and improve the nourishment of the nails and the surrounding skin to remove fungus completely.
* **Vitamin E Extract:** It has antifungal, antibacterial, anti-inflammatory, antiviral, and antimicrobial properties that make it the perfect ingredient to treat toenail fungus.
* **Pine (Pinus Sylvestris) Bud Extract:** It is known for its soothing and healing properties, as it contains antioxidants that can restore healthy cellular functions and health.
* **Hyaluronic Acid:** It can smooth out the rough edges and add a protective layer on your toenails so they don't become rough and irritated.
* **Potassium Sorbate:** It contains antioxidants that can remove bacteria, toxins and impurities from your feet. It is also used as a cleaner.
[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg41JuDAt1VcgfVd4pSieQzDzBKypcCdBpgyzFajFdITPbGahIS1vHcWCfUFCM7RrRTZ6tHAevqecrk1yga9sTp8F8J-bdsBljdvs9AoosIjmaQOqC4nScB5bxVRlvfurobkP7ERmF8aoV4WhSo3bWETEI0Ca2liOurxF9Wajgz41KsKu1OxUZt4hJtDmih/w640-h300/Screenshot%20(3095).png)](https://www.healthsupplement24x7.com/get-rangii)

[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzKY0a5qX547xvh8Cu4cAB2gjsj_VupLDnthc6TYvePBI-qRcRSk4RPt7ulZ9n3yyoeOK5XZJpEeBOg_2u1FlrAMaX6bbSa6lQg4L-hONYYBoiYJJIu9hNoJVBdCGvtr2htqozN8nyyNEi16Wlk1-HcaXYgec-9Q9_uI2k1tS4oXGTJt94IRJUxk15B2jd/w640-h454/Screenshot%20(3096).png)](https://www.healthsupplement24x7.com/get-rangii)

### **[Buy Now From Rangii Toenail Fungus Official Website - Best Price, And Discount!](https://www.healthsupplement24x7.com/get-rangii)**

**How to Use Rangii Toenail Fungus?**
--------------------------------------

Using [Rangii](https://colab.research.google.com/drive/1Z0_xQUZ6ci80HFkpq7CnOkzfRnXvfx9u) is quite simple. Take one dropper of serum and apply it to the affected area. Gently massage the nails and skin once a day. Consistency yields better results, and you can also apply it under moisturizers. If you are prone to skin conditions, consult a dermatologist before using this serum.

**Pricing and Availability**
-----------------------------

[Rangii](https://www.ivoox.com/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-audios-mp3_rf_114760270_1.html) is only available on the official website. Customers making multiple orders get discounts plus bonuses. Below is how the pricing works:

* One Rangii bottle – $69
* Three Rangii bottles – $49/bottle
* Six Rangii bottles – $39/bottle + 2 digital books + free US shipping

After purchase, the [Rangii](https://colab.research.google.com/drive/10ptXse-pV7DfOZUmh6wvztRFW0nJfjRI) maker ships the formulation and bonuses within five business days.
However, customers receive the digital guides through their email immediately after purchase.

[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB-LvIU43Dkw0oPbiuP68-Z3tUxozPJcAmzp0H9rCLTU_xLUlLFwPU4jZyC8uwof6wjP5KEVOrx_MZK_diCptip8_ReUer2I4cbD5GqCE9j-6AsAMVyZQwyg5lICrDQ3zX9_A0Bt6oWJDQtv5kHTIwHbniG1OgntmAJ1svwXqEEUWg1maOBe1YUOOzTwTs/w640-h474/Screenshot%20(3098).png)](https://www.healthsupplement24x7.com/get-rangii)

### **[Click to buy Rangii Toenail Fungus today from the company's official website!](https://www.healthsupplement24x7.com/get-rangii)**

**Rangii – Bonuses**
--------------------

* BONUS #1: 7 Dangers of Ignoring Fungus
* BONUS #2: Toenail Fungus Code

**Rangii – Refund Policy**
--------------------------

The main goal of the creators of [Rangii](https://rangii.clubeo.com/page/rangii-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health.html) is to satisfy the customer. That's why every box of this product comes with a 60-day money-back guarantee. For 60 days you can use Rangii risk-free; if you are not satisfied, just ask for a refund. This way you can be sure that even if [Rangii](https://www.scoop.it/topic/rangii-toenail-fungus) does not give the desired effect, your hard-earned money will not be wasted.

**Where Can I Buy Rangii Toenail Fungus in USA?**
-------------------------------------------------

Get the [Rangii Toenail Fungus](https://rangii.clubeo.com/page/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health.html) serum in Canada & USA from the official website. Check its availability in Australia, New Zealand, Canada & USA. Visit the [Rangii Toenail Fungus](https://rangii.hashnode.dev/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health) official website & select your country before placing an order.
[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5QdDQhp6pcBLTb7CuobAM0hh4P-q9j_qbiF9Bl-k9GSQ5emHundabfyMPEFu9iANiUGtp2GYZtd-eq6zmGpPFXR4tN_i8D_U7eRD_Rm_E_0U2opkvDcZ4rt4GTIZh6FnPTpvE2QypsCmpyI4NlHeQZc_fofTRNzYLaYgxufZEodAtKjxYW5xIN7ErI9yq/w640-h418/Screenshot%20(3097).png)](https://www.healthsupplement24x7.com/get-rangii)

**Conclusion**
--------------

If you want to get rid of embarrassing nail fungus, [Rangii Fungus Remover](https://rangii.clubeo.com/calendar/2023/08/23/rangii-toenail-fungus-helps-to-rejuvenates-and-revitalize-toe-skin-and-nail-health) is a great option. It has a special formula that eliminates fungi quickly and naturally without causing any problems or side effects. This serum can be used for 30 days, and you should see results within two weeks, so you don't have to wait long before using it again. The product has received favorable reviews from customers who have expressed their satisfaction with it. If you're ready for more information,

### **[Buy Your "Rangii Fungus Remover Serum" Before Stock Runs Out](https://www.healthsupplement24x7.com/get-rangii)**
rangii-reviews/rangii
[ "region:us" ]
2023-08-23T06:33:09+00:00
{}
2023-08-23T06:33:29+00:00
[]
[]
TAGS #region-us
Product Name — Rangii Toenail Fungus Category — Collagen Synthesis Side Effect — No Side Effects Availability — Online Results — In 1-2 Months Official Website — URL Rangii Toenail Fungus is a powerful formula designed to help kill hard-to-treat fungi. The formula has been manufactured using 100% natural ingredients proven effective in eliminating toe fungus. The liquid will also help restore the health of your cuticles, giving you good-looking toenails. ![.png)](URL ### Click Here to Buy Rangii Toenail Fungus What exactly is Rangii Toenail Fungus? ------------------------------------------ Rangii Toenail Fungus is a specially formulated liquid designed to combat fungal nail infections. This advanced treatment is meticulously crafted using cutting-edge technology and powerful natural ingredients to effectively eliminate the fungus and restore the health of your nails. The unique blend of active components and oils in the Rangii Toenail Fungus works synergistically to penetrate deep into the affected nails, targeting the root cause of the infection and inhibiting fungal growth. With regular use, this remarkable solution not only treats existing infections but also acts as a preventive measure against future outbreaks. With Rangii Toenail Fungus, you can finally experience the relief and confidence of having fungus-free nails. How Does Rangii Work? ------------------------- Rangii is an easy liquid solution developed by experts and recommended by doctors. This serum works instantly on your fungus and skin. Unlike other nail fungus remover products, Rangii offers rapid results so that you can flaunt flawless skin without waiting for the treatment to kick in. This liquid serum blends the power of two natural ingredients that have been used in traditional herbal remedies for centuries - Hyaluronic Acid and Vitamin E Extract. These two natural ingredients contribute to the effectiveness of the Rangii serum. 
Rangii Real Benefits: ------------------------- * Stronger nails: The ingredients of Rangii work within. Protects and strengthens nails. No more fragile nails! * Nail growth: The serum promotes healthy nail growth for long and glamorous nails. Grow your nails. * Better nails: The serum's moisturizing and nutrient-rich properties help revitalize nails. * Nail repair: Nail fungus, discoloration and damage are gone. The serum's powerful ingredients address these issues, helping to keep your nails healthy. * Hydrate nails and cuticles: The serum also hydrates the epidermis. It prevents dry skin and cuticle problems by moisturizing and softening. * Simple to use: Applying the serum is easy. Apply the liquid to the nail and gently massage the area. * Moisturize nail cuticles: Rangii Toenail Fungus complex believes that a dehydrated cuticle leaves nails dry and brittle. The liquid keeps them moist and strong, preventing infection. * Helper Cells: Rangii removes toxins and creates new nail cells. They help strengthen nails and prevent infections. * Maintain blood flow: Infected fingernails and feet can interfere with blood circulation. Increased blood flow delivers vitamins and oxygen with Rangii. Blood circulation nourishes the nail and speeds up repair. * Improves collagen production: Vitamins C and E in Rangii improve collagen. The serum stimulates collagen production, which helps strengthen nails. ![.png)](URL ### [\[big Savings Today\] Buy Rangii Toenail Fungus Before Stock Runs Out](URL What are the ingredients in Rangii? --------------------------------------- Rangii contains a lot of natural oils, minerals, vitamins, and herbal extracts that are guaranteed and proven to treat toenail fungus permanently. According to the official website of Rangii, the following are the ingredients in it: * Barbadensis: It is said to reduce skin irritation and nail breakage that are commonly caused in people with toenail fungus infection. It can also relieve pain and inflammation. 
* Pelargonium Graveolens Oil: It is used to fight chronic inflammation of the body in response to fungal infections. It can soothe this inflammation and reduce the irritation of the nails and skin. * Horsetail: It is used for its antifungal uses and properties that can find fungi and kill them. It is an excellent remedy for toenail fungus. * Lemon Extract: It is said to boost circulation towards your toes and improve the nourishment of the nails and the surrounding skin to remove fungus completely. * Vitamin E Extract: It has antifungal, antibacterial, anti-inflammatory, antiviral, and antimicrobial properties that make it the perfect ingredient to treat toenail fungus. * Pine (Pinus Sylvestris) Bud Extract: It is known for its soothing and healing properties as it contains some antioxidants that can restore healthy cellular functions and health. * Hyaluronic Acid: It can smoothen out the rough edges and add a protective layer on your toenails so they don’t become rough and irritated. * Potassium Sorbate: It contains antioxidants that can remove bacteria, toxins and impurities from your feet. It is also used as a cleaner. ![.png)](URL ![.png)](URL ### Buy Now From Rangii Toenail Fungus Official Website - Best Price, And Discount! How to Use Rangii Toenail Fungus? -------------------------------------- Using this Rangii is quite simple. Take one dropper of serum and apply it to the affected area. Gently massage nails and skin in a day. Consistency paves better results, and you can also apply it under moisturizers. If you are prone to skin conditions, then consult a dermatologist before using this serum. Pricing and Availability ----------------------------- Rangii is only available on the official website. Customers making multiple orders get discounts plus bonuses. 
Below is how the pricing works: * One Rangii bottle – $69 * Three Rangii Bottles – $49/bottle * Six Rangii bottles – $39/bottle + 2 Digital books + free US shipping After purchase, Rangii maker ships the formulation and bonuses within five business days. However, customers receive the digital guides through their email immediately after purchase. ![.png)](URL ### Click to buy Rangii Toenail Fungus today from the company’s official website! Rangii – Bonuses -------------------- BONUS #1 7 Dangers of Ignoring Fungus BONUS #2 Toenail Fungus Code Rangii – Refund Policy -------------------------- The main goal of the creators of Rangii is to satisfy the customer. That's why every box of this product comes with a 60-day money-back guarantee. For 60 days you can use Rangii risk-free; if you are not satisfied, just ask for a refund. This way you can be sure that even if Rangii does not give the desired effect, your hard-earned money will not be wasted. Where Can I Buy Rangii Toenail Fungus in USA? ------------------------------------------------- Get Rangii Toenail Fungus mole corrector and tag removal serum in Canada & USA from the official website. Check its availability in Australia, New Zealand, Canada & USA. Visit Rangii Toenail Fungus Official Website & select your country before placing order. ![.png)](URL Conclusion -------------- If you want to get rid of your embarrassing skin, Rangii Fungus Remover is a great option. It has a special formula that eliminates them quickly and naturally without causing any problems or side effects. This serum can be used for 30 days, and in two weeks, you should see results; thus you don’t have to sit tight for a really long time prior to utilizing it once more. The product has received favorable reviews from customers who have expressed their satisfaction with it. 
If you’re ready for more information, ### Buy Your “Rangii Fungus Remover Serum” Before Stock Runs Out URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL
[ "### Click Here to Buy Rangii Toenail Fungus\n\nWhat exactly is Rangii Toenail Fungus?\n------------------------------------------\n\nRangii Toenail Fungus is a specially formulated liquid designed to combat fungal nail infections. This advanced treatment is meticulously crafted using cutting-edge technology and powerful natural ingredients to effectively eliminate the fungus and restore the health of your nails. The unique blend of active components and oils in the Rangii Toenail Fungus works synergistically to penetrate deep into the affected nails, targeting the root cause of the infection and inhibiting fungal growth. With regular use, this remarkable solution not only treats existing infections but also acts as a preventive measure against future outbreaks. With Rangii Toenail Fungus, you can finally experience the relief and confidence of having fungus-free nails.\n\nHow Does Rangii Work?\n-------------------------\n\nRangii is an easy liquid solution developed by experts and recommended by doctors. This serum works instantly on your fungus and skin. Unlike other nail fungus remover products , Rangii offers rapid results so that you can flaunt flawless skin without waiting for the treatment to kick in.\n\nThis liquid serum blends the power of two natural ingredients that have been used in traditional herbal remedies for centuries - Hyaluronic Acid and Vitamin E Extract. These two natural ingredients contribute to the effectiveness of the Rangii serum.\n\nRangii Real Benefits:\n-------------------------\n\n* Stronger nails: The ingredients of Rangii work within. Protects and strengthens nails. No more fragile nails! \n \n \n* Nail growth: The serum promotes healthy nail growth for long and glamorous nails. Grow your nails. \n \n \n* Better nails: The serum's moisturizing and nutrient-rich properties help revitalize nails. \n \n \n* Nail repair: Nail fungus, discoloration and damage are gone. 
The serum's powerful ingredients address these issues, helping to keep your nails healthy. \n \n \n* Hydrate nails and cuticles: The serum also hydrates the epidermis. It prevents dry skin and cuticle problems by moisturizing and softening. \n \n \n* Simple to use: Applying the serum is easy. Apply the liquid to the nail and gently massage the area. \n \n \n* Moisturize nail cuticles:ย Rangii Toenail Fungus complex believes that a dehydrated cuticle leaves nails dry and brittle. The liquid keeps them moist and strong, preventing infection. \n \n \n* Helper Cells:ย Rangii removes toxins and creates new nail cells. They help strengthen nails and prevent infections. \n \n \n* Maintain blood flow: Infected fingernails and feet can interfere with blood circulation. Increased blood flow delivers vitamins and oxygen with Rangii. Blood circulation nourishes the nail and speeds up repair. \n \n \n* Improves collagen production: Vitamins C and E in Rangii improve collagen. The serum stimulates collagen production, which helps strengthen nails.\n\n![.png)](URL", "### [\\[big Savings Today\\] Buy Rangii Toenail Fungus Before Stock Runs Out](URL\n\nWhat are the ingredients in Rangii?\n---------------------------------------\n\nRangii contains a lot of natural oils, minerals, vitamins, and herbal extracts that are guaranteed and proven to treat toenail fungus permanently. According to the official website of Rangii, the following are the ingredients in it:\n\n* Barbadensis: It is said to reduce skin irritation and nail breakage that are commonly caused in people with toenail fungus infection. It can also relieve pain and inflammation. \n \n \n* Pelargonium Graveolens Oil: It is used to fight chronic inflammation of the body in response to fungal infections. It can soothe this inflammation and reduce the irritation of the nails and skin. \n \n \n* Horsetail: It is used for its antifungal uses and properties that can find fungi and kill them. 
It is an excellent remedy for toenail fungus. \n \n \n* Lemon Extract: It is said to boost circulation towards your toes and improve the nourishment of the nails and the surrounding skin to remove fungus completely. \n \n \n* Vitamin E Extract: It has antifungal, antibacterial, anti-inflammatory, antiviral, and antimicrobial properties that make it the perfect ingredient to treat toenail fungus. \n \n \n* Pine (Pinus Sylvestris) Bud Extract: It is known for its soothing and healing properties as it contains some antioxidants that can restore healthy cellular functions and health. \n \n \n* Hyaluronic Acid: It can smoothen out the rough edges and add a protective layer on your toenails so they donโ€™t become rough and irritated. \n \n \n* Potassium Sorbate: It contains antioxidants that can remove bacteria, toxins and impurities from your feet. It is also used as a cleaner.\n\nย ![.png)](URL\n\n![.png)](URL", "### Buy Now From Rangii Toenail Fungus Official Website - Best Price, And Discount!\n\nHow to Use Rangii Toenail Fungus?ย \n--------------------------------------\n\nUsing this Rangii is quite simple. Take one dropper of serum and apply it to the affected area. Gently massage nails and skin in a day. Consistency paves better results, and you can also apply it under moisturizers. If you are prone to skin conditions, then consult a dermatologist before using this serum.\n\nPricing and Availabilityย \n-----------------------------\n\nRangii is only available on the official website . Customers making multiple orders get discounts plus bonuses. Below is how the pricing works:ย \n\n* One Rangii bottle โ€“ $69\n* Three Rangii Bottles โ€“ $49/bottle\n* Six Rangii bottles โ€“ $39/bottle + 2 Digital books + free US shippingย \n\nAfter purchase, Rangii maker ships the formulation and bonuses within five business days. 
However, customers receive the digital guides through their email immediately after purchase.ย \n\n![.png)](URL", "### Click to buy Rangii Toenail Fungus today from the companyโ€™s official website!\n\nRangii โ€“ Bonuses\n--------------------\n\nBONUS #1 7 Dangers of Ignoring Fungus\n\nBONUS #2 Toenail Fungus Code\n\nRangii โ€“ Refund Policy\n--------------------------\n\nThe main goal of the creators of Rangii is to satisfy the customer. That's why every box of this product comes with a 60-day money-back guarantee. For 60 days you can use Rangii risk-free; If you are not satisfied, just ask for a refund. This way you can be sure that even if Rangii does not give the desired effect, your hard-earned money will not be wasted.ย \n\nWhere Can I Buy Rangii Toenail Fungus in USA?\n-------------------------------------------------\n\nGet Rangii Toenail Fungus mole corrector and tag removal serum in Canada & USA from the official website. Check Its availability in Australia, New Zealand, Canada & USA. Visit Rangii Toenail Fungus Official Website & select your country before placing order.\n\n![.png)](URL\n\nConclusion\n--------------\n\nIf you want to get rid of your embarrassing skin, Rangii Fungus Remover is a great option. It has a special formula that eliminates them quickly and naturally without causing any problems or side effects. This serum can be used for 30 days, and in two weeks, you should see results; thus you donโ€™t have to sit tight for a really long time prior to utilizing it once more.\n\nThe product has received favorable reviews from customers who have expressed their satisfaction with it. If youโ€™re ready for more information,", "### Buy Your โ€œRangii Fungus Remover Serumโ€ Before Stock Runs Out\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL" ]
[ "TAGS\n#region-us \n", "### Click Here to Buy Rangii Toenail Fungus\n\nWhat exactly is Rangii Toenail Fungus?\n------------------------------------------\n\nRangii Toenail Fungus is a specially formulated liquid designed to combat fungal nail infections. This advanced treatment is meticulously crafted using cutting-edge technology and powerful natural ingredients to effectively eliminate the fungus and restore the health of your nails. The unique blend of active components and oils in the Rangii Toenail Fungus works synergistically to penetrate deep into the affected nails, targeting the root cause of the infection and inhibiting fungal growth. With regular use, this remarkable solution not only treats existing infections but also acts as a preventive measure against future outbreaks. With Rangii Toenail Fungus, you can finally experience the relief and confidence of having fungus-free nails.\n\nHow Does Rangii Work?\n-------------------------\n\nRangii is an easy liquid solution developed by experts and recommended by doctors. This serum works instantly on your fungus and skin. Unlike other nail fungus remover products , Rangii offers rapid results so that you can flaunt flawless skin without waiting for the treatment to kick in.\n\nThis liquid serum blends the power of two natural ingredients that have been used in traditional herbal remedies for centuries - Hyaluronic Acid and Vitamin E Extract. These two natural ingredients contribute to the effectiveness of the Rangii serum.\n\nRangii Real Benefits:\n-------------------------\n\n* Stronger nails: The ingredients of Rangii work within. Protects and strengthens nails. No more fragile nails! \n \n \n* Nail growth: The serum promotes healthy nail growth for long and glamorous nails. Grow your nails. \n \n \n* Better nails: The serum's moisturizing and nutrient-rich properties help revitalize nails. \n \n \n* Nail repair: Nail fungus, discoloration and damage are gone. 
The serum's powerful ingredients address these issues, helping to keep your nails healthy. \n \n \n* Hydrate nails and cuticles: The serum also hydrates the epidermis. It prevents dry skin and cuticle problems by moisturizing and softening. \n \n \n* Simple to use: Applying the serum is easy. Apply the liquid to the nail and gently massage the area. \n \n \n* Moisturize nail cuticles:ย Rangii Toenail Fungus complex believes that a dehydrated cuticle leaves nails dry and brittle. The liquid keeps them moist and strong, preventing infection. \n \n \n* Helper Cells:ย Rangii removes toxins and creates new nail cells. They help strengthen nails and prevent infections. \n \n \n* Maintain blood flow: Infected fingernails and feet can interfere with blood circulation. Increased blood flow delivers vitamins and oxygen with Rangii. Blood circulation nourishes the nail and speeds up repair. \n \n \n* Improves collagen production: Vitamins C and E in Rangii improve collagen. The serum stimulates collagen production, which helps strengthen nails.\n\n![.png)](URL", "### [\\[big Savings Today\\] Buy Rangii Toenail Fungus Before Stock Runs Out](URL\n\nWhat are the ingredients in Rangii?\n---------------------------------------\n\nRangii contains a lot of natural oils, minerals, vitamins, and herbal extracts that are guaranteed and proven to treat toenail fungus permanently. According to the official website of Rangii, the following are the ingredients in it:\n\n* Barbadensis: It is said to reduce skin irritation and nail breakage that are commonly caused in people with toenail fungus infection. It can also relieve pain and inflammation. \n \n \n* Pelargonium Graveolens Oil: It is used to fight chronic inflammation of the body in response to fungal infections. It can soothe this inflammation and reduce the irritation of the nails and skin. \n \n \n* Horsetail: It is used for its antifungal uses and properties that can find fungi and kill them. 
It is an excellent remedy for toenail fungus. \n \n \n* Lemon Extract: It is said to boost circulation towards your toes and improve the nourishment of the nails and the surrounding skin to remove fungus completely. \n \n \n* Vitamin E Extract: It has antifungal, antibacterial, anti-inflammatory, antiviral, and antimicrobial properties that make it the perfect ingredient to treat toenail fungus. \n \n \n* Pine (Pinus Sylvestris) Bud Extract: It is known for its soothing and healing properties as it contains some antioxidants that can restore healthy cellular functions and health. \n \n \n* Hyaluronic Acid: It can smoothen out the rough edges and add a protective layer on your toenails so they donโ€™t become rough and irritated. \n \n \n* Potassium Sorbate: It contains antioxidants that can remove bacteria, toxins and impurities from your feet. It is also used as a cleaner.\n\nย ![.png)](URL\n\n![.png)](URL", "### Buy Now From Rangii Toenail Fungus Official Website - Best Price, And Discount!\n\nHow to Use Rangii Toenail Fungus?ย \n--------------------------------------\n\nUsing this Rangii is quite simple. Take one dropper of serum and apply it to the affected area. Gently massage nails and skin in a day. Consistency paves better results, and you can also apply it under moisturizers. If you are prone to skin conditions, then consult a dermatologist before using this serum.\n\nPricing and Availabilityย \n-----------------------------\n\nRangii is only available on the official website . Customers making multiple orders get discounts plus bonuses. Below is how the pricing works:ย \n\n* One Rangii bottle โ€“ $69\n* Three Rangii Bottles โ€“ $49/bottle\n* Six Rangii bottles โ€“ $39/bottle + 2 Digital books + free US shippingย \n\nAfter purchase, Rangii maker ships the formulation and bonuses within five business days. 
However, customers receive the digital guides through their email immediately after purchase.ย \n\n![.png)](URL", "### Click to buy Rangii Toenail Fungus today from the companyโ€™s official website!\n\nRangii โ€“ Bonuses\n--------------------\n\nBONUS #1 7 Dangers of Ignoring Fungus\n\nBONUS #2 Toenail Fungus Code\n\nRangii โ€“ Refund Policy\n--------------------------\n\nThe main goal of the creators of Rangii is to satisfy the customer. That's why every box of this product comes with a 60-day money-back guarantee. For 60 days you can use Rangii risk-free; If you are not satisfied, just ask for a refund. This way you can be sure that even if Rangii does not give the desired effect, your hard-earned money will not be wasted.ย \n\nWhere Can I Buy Rangii Toenail Fungus in USA?\n-------------------------------------------------\n\nGet Rangii Toenail Fungus mole corrector and tag removal serum in Canada & USA from the official website. Check Its availability in Australia, New Zealand, Canada & USA. Visit Rangii Toenail Fungus Official Website & select your country before placing order.\n\n![.png)](URL\n\nConclusion\n--------------\n\nIf you want to get rid of your embarrassing skin, Rangii Fungus Remover is a great option. It has a special formula that eliminates them quickly and naturally without causing any problems or side effects. This serum can be used for 30 days, and in two weeks, you should see results; thus you donโ€™t have to sit tight for a really long time prior to utilizing it once more.\n\nThe product has received favorable reviews from customers who have expressed their satisfaction with it. If youโ€™re ready for more information,", "### Buy Your โ€œRangii Fungus Remover Serumโ€ Before Stock Runs Out\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL" ]
[ 6, 685, 429, 237, 352, 42 ]
[ "passage: TAGS\n#region-us \n", "passage: ### Click Here to Buy Rangii Toenail Fungus\n\nWhat exactly is Rangii Toenail Fungus?\n------------------------------------------\n\nRangii Toenail Fungus is a specially formulated liquid designed to combat fungal nail infections. This advanced treatment is meticulously crafted using cutting-edge technology and powerful natural ingredients to effectively eliminate the fungus and restore the health of your nails. The unique blend of active components and oils in the Rangii Toenail Fungus works synergistically to penetrate deep into the affected nails, targeting the root cause of the infection and inhibiting fungal growth. With regular use, this remarkable solution not only treats existing infections but also acts as a preventive measure against future outbreaks. With Rangii Toenail Fungus, you can finally experience the relief and confidence of having fungus-free nails.\n\nHow Does Rangii Work?\n-------------------------\n\nRangii is an easy liquid solution developed by experts and recommended by doctors. This serum works instantly on your fungus and skin. Unlike other nail fungus remover products , Rangii offers rapid results so that you can flaunt flawless skin without waiting for the treatment to kick in.\n\nThis liquid serum blends the power of two natural ingredients that have been used in traditional herbal remedies for centuries - Hyaluronic Acid and Vitamin E Extract. These two natural ingredients contribute to the effectiveness of the Rangii serum.\n\nRangii Real Benefits:\n-------------------------\n\n* Stronger nails: The ingredients of Rangii work within. Protects and strengthens nails. No more fragile nails! \n \n \n* Nail growth: The serum promotes healthy nail growth for long and glamorous nails. Grow your nails. \n \n \n* Better nails: The serum's moisturizing and nutrient-rich properties help revitalize nails. \n \n \n* Nail repair: Nail fungus, discoloration and damage are gone. 
The serum's powerful ingredients address these issues, helping to keep your nails healthy. \n \n \n* Hydrate nails and cuticles: The serum also hydrates the epidermis. It prevents dry skin and cuticle problems by moisturizing and softening. \n \n \n* Simple to use: Applying the serum is easy. Apply the liquid to the nail and gently massage the area. \n \n \n* Moisturize nail cuticles:ย Rangii Toenail Fungus complex believes that a dehydrated cuticle leaves nails dry and brittle. The liquid keeps them moist and strong, preventing infection. \n \n \n* Helper Cells:ย Rangii removes toxins and creates new nail cells. They help strengthen nails and prevent infections. \n \n \n* Maintain blood flow: Infected fingernails and feet can interfere with blood circulation. Increased blood flow delivers vitamins and oxygen with Rangii. Blood circulation nourishes the nail and speeds up repair. \n \n \n* Improves collagen production: Vitamins C and E in Rangii improve collagen. The serum stimulates collagen production, which helps strengthen nails.\n\n![.png)](URL### [\\[big Savings Today\\] Buy Rangii Toenail Fungus Before Stock Runs Out](URL\n\nWhat are the ingredients in Rangii?\n---------------------------------------\n\nRangii contains a lot of natural oils, minerals, vitamins, and herbal extracts that are guaranteed and proven to treat toenail fungus permanently. According to the official website of Rangii, the following are the ingredients in it:\n\n* Barbadensis: It is said to reduce skin irritation and nail breakage that are commonly caused in people with toenail fungus infection. It can also relieve pain and inflammation. \n \n \n* Pelargonium Graveolens Oil: It is used to fight chronic inflammation of the body in response to fungal infections. It can soothe this inflammation and reduce the irritation of the nails and skin. \n \n \n* Horsetail: It is used for its antifungal uses and properties that can find fungi and kill them. 
It is an excellent remedy for toenail fungus. \n \n \n* Lemon Extract: It is said to boost circulation towards your toes and improve the nourishment of the nails and the surrounding skin to remove fungus completely. \n \n \n* Vitamin E Extract: It has antifungal, antibacterial, anti-inflammatory, antiviral, and antimicrobial properties that make it the perfect ingredient to treat toenail fungus. \n \n \n* Pine (Pinus Sylvestris) Bud Extract: It is known for its soothing and healing properties as it contains some antioxidants that can restore healthy cellular functions and health. \n \n \n* Hyaluronic Acid: It can smoothen out the rough edges and add a protective layer on your toenails so they donโ€™t become rough and irritated. \n \n \n* Potassium Sorbate: It contains antioxidants that can remove bacteria, toxins and impurities from your feet. It is also used as a cleaner.\n\nย ![.png)](URL\n\n![.png)](URL" ]
f2ed33cb3d883e382bb6767b9a808a8be02054fa
# Dataset Card for "combined_embedded_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
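The repo metadata for this dataset records that each row pairs a `text` string (plus conversation/dataset identifiers) with a float32 `embedding` sequence. As a minimal sketch of how such an embedding column can be queried by cosine similarity (the three-row table below is a tiny synthetic stand-in, not the real ~2.87M-row data, and `nearest` is a hypothetical helper, not part of any library):

```python
import numpy as np

# Synthetic stand-in rows mirroring the schema in the metadata:
# a "text" field plus a float32 "embedding" vector per row.
rows = [
    {"text": "hello world", "embedding": np.array([1.0, 0.0, 0.0], dtype=np.float32)},
    {"text": "goodbye world", "embedding": np.array([0.0, 1.0, 0.0], dtype=np.float32)},
    {"text": "hello there", "embedding": np.array([0.9, 0.1, 0.0], dtype=np.float32)},
]

def nearest(query: np.ndarray, rows) -> str:
    """Return the text whose embedding has the highest cosine similarity to query."""
    mat = np.stack([r["embedding"] for r in rows])  # shape (n, d)
    sims = mat @ query / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query))
    return rows[int(np.argmax(sims))]["text"]

print(nearest(np.array([0.8, 0.2, 0.0], dtype=np.float32), rows))  # -> hello there
```

With the real data, the same lookup would apply after `datasets.load_dataset("HydraLM/embedded_datasets_0822", split="train")`, likely with `streaming=True` given the roughly 14 GB download size listed in the metadata.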
HydraLM/embedded_datasets_0822
[ "region:us" ]
2023-08-23T06:43:14+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "conversation_id", "dtype": "int64"}, {"name": "dataset_id", "dtype": "string"}, {"name": "unique_conversation_id", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 14376321549, "num_examples": 2865791}], "download_size": 14664637194, "dataset_size": 14376321549}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T06:49:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "combined_embedded_v2" More Information needed
[ "# Dataset Card for \"combined_embedded_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"combined_embedded_v2\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"combined_embedded_v2\"\n\nMore Information needed" ]
e4edbc9e9d6dee7797f6c801d48a8015d8125107
# Dataset of kazagumo/風雲 (Kantai Collection) This is the dataset of kazagumo/風雲 (Kantai Collection), containing 418 images and their tags. The core tags of this character are `brown_hair, long_hair, ponytail, ribbon, hair_ribbon, blue_eyes, grey_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 418 | 393.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazagumo_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 418 | 248.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazagumo_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 930 | 514.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazagumo_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 418 | 358.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazagumo_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 930 | 696.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazagumo_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/kazagumo_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, cowboy_shot, looking_at_viewer, simple_background, navel, collarbone, sarong, white_background, one-hour_drawing_challenge, open_mouth, striped_bikini, twitter_username, blush, polka_dot, small_breasts, smile, white_bikini | | 1 | 5 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, aqua_necktie, blue_necktie, looking_at_viewer, solo, upper_body, white_shirt, school_uniform, dated, one-hour_drawing_challenge, smile, dress_shirt, simple_background, twitter_username | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, school_uniform, solo, white_shirt, simple_background, white_background, grey_pantyhose, looking_at_viewer, aqua_necktie, blue_necktie, purple_dress, sleeveless_dress, long_sleeves, cowboy_shot, smile | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, grey_thighhighs, school_uniform, solo, aqua_necktie, blazer, cowboy_shot, purple_dress, looking_at_viewer, long_sleeves, white_shirt, blue_necktie, smile | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, school_uniform, white_background, full_body, lace-up_boots, simple_background, white_shirt, standing, blue_necktie, grey_pantyhose, purple_dress, looking_at_viewer, open_mouth | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 2girls, school_uniform, simple_background, white_shirt, purple_dress, aqua_necktie, halterneck, white_background, looking_at_viewer, smile | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | detached_collar, fake_animal_ears, playboy_bunny, 
rabbit_ears, strapless_leotard, wrist_cuffs, solo, 1girl, grey_pantyhose, purple_leotard, blue_necktie, looking_at_viewer, rabbit_tail, thighband_pantyhose, fishnet_pantyhose, adapted_costume, bow, cowboy_shot, full_body, open_mouth, simple_background, small_breasts, standing | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1boy, 1girl, hetero, solo_focus, blush, navel, penis, cowgirl_position, girl_on_top, nipples, nude, open_mouth, sex, small_breasts, vaginal, cum_in_pussy, heart, mosaic_censoring | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cowboy_shot | looking_at_viewer | simple_background | navel | collarbone | sarong | white_background | one-hour_drawing_challenge | open_mouth | striped_bikini | twitter_username | blush | polka_dot | small_breasts | smile | white_bikini | aqua_necktie | blue_necktie | upper_body | white_shirt | school_uniform | dated | dress_shirt | grey_pantyhose | purple_dress | sleeveless_dress | long_sleeves | grey_thighhighs | blazer | full_body | lace-up_boots | standing | 2girls | halterneck | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | purple_leotard | rabbit_tail | thighband_pantyhose | fishnet_pantyhose | adapted_costume | bow | 1boy | hetero | solo_focus | penis | cowgirl_position | girl_on_top | nipples | nude | sex | vaginal | cum_in_pussy | heart | mosaic_censoring | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:--------------------|:--------------------|:--------|:-------------|:---------|:-------------------|:-----------------------------|:-------------|:-----------------|:-------------------|:--------|:------------|:----------------|:--------|:---------------|:---------------|:---------------|:-------------|:--------------|:-----------------|:--------|:--------------|:-----------------|:---------------|:-------------------|:---------------|:------------------|:---------|:------------|:----------------|:-----------|:---------|:-------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:-----------------|:--------------|:----------------------|:--------------------|:------------------|:------|:-------|:---------|:-------------|:--------|:-------------------|:--------------|:----------|:-------|:------|:----------|:---------------|:--------|:-------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | | | | X | | | X | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | 
![](samples/2/clu2-sample4.png) | X | X | X | X | X | | | | X | | | | | | | | X | | X | X | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | | | | | | | | | | | | | X | | X | X | | X | X | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | X | | | | X | | X | | | | | | | | | X | | X | X | | | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | | X | X | | | | X | | | | | | | | X | | X | | | X | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | X | X | | | | | | X | | | | | X | | | | X | | | | | | X | | | | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | | X | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/kazagumo_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T06:44:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T19:09:07+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kazagumo/้ขจ้›ฒ (Kantai Collection) ========================================== This is the dataset of kazagumo/้ขจ้›ฒ (Kantai Collection), containing 418 images and their tags. The core tags of this character are 'brown\_hair, long\_hair, ponytail, ribbon, hair\_ribbon, blue\_eyes, grey\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
20dfe5de572623c98a513187f3e9a2d154045041
# GitHub Code Dataset ## Dataset Description The GitHub Code dataset consists of 115M code files from GitHub in 30 programming languages with 60 extensions totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery. ### How to use it The GitHub Code dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code: ```python from datasets import load_dataset ds = load_dataset("codeparrot/github-code", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n", 'repo_name': 'MirekSz/webpack-es6-ts', 'path': 'app/mods/mod190.js', 'language': 'JavaScript', 'license': 'isc', 'size': 73 } ``` You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below). Just pass the languages you want as a list. E.g. if your dream is to build a Codex model for Dockerfiles, use the following configuration: ```python ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"]) print(next(iter(ds))["code"]) #OUTPUT: """\ FROM rockyluke/ubuntu:precise ENV DEBIAN_FRONTEND="noninteractive" \ TZ="Europe/Amsterdam" ... """ ``` We also have access to the license of the origin repo of a file, so we can filter for licenses in the same way we filtered for languages: ```python from collections import Counter ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"]) licenses = [] for element in ds.take(10_000): licenses.append(element["license"]) print(Counter(licenses)) #OUTPUT: Counter({'mit': 9896, 'isc': 104}) ``` Naturally, you can also download the full dataset. 
Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage: ```python ds = load_dataset("codeparrot/github-code", split="train") ``` ## Data Structure ### Data Instances ```python { 'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n", 'repo_name': 'MirekSz/webpack-es6-ts', 'path': 'app/mods/mod190.js', 'language': 'JavaScript', 'license': 'isc', 'size': 73 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |code|string|content of source file| |repo_name|string|name of the GitHub repository| |path|string|path of file in GitHub repository| |language|string|programming language as inferred by extension| |license|string|license of GitHub repository| |size|int|size of source file in bytes| ### Data Splits The dataset only contains a train split. ## Languages The dataset contains 30 programming languages with over 60 extensions: ```python { "Assembly": [".asm"], "Batchfile": [".bat", ".cmd"], "C": [".c", ".h"], "C#": [".cs"], "C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"], "CMake": [".cmake"], "CSS": [".css"], "Dockerfile": [".dockerfile", "Dockerfile"], "FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'], "GO": [".go"], "Haskell": [".hs"], "HTML":[".html"], "Java": [".java"], "JavaScript": [".js"], "Julia": [".jl"], "Lua": [".lua"], "Makefile": ["Makefile"], "Markdown": [".md", ".markdown"], "PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"], "Perl": [".pl", ".pm", ".pod", ".perl"], "PowerShell": ['.ps1', '.psd1', '.psm1'], "Python": [".py"], "Ruby": [".rb"], "Rust": [".rs"], "SQL": [".sql"], "Scala": [".scala"], "Shell": [".sh", ".bash", ".command", ".zsh"], "TypeScript": [".ts", ".tsx"], "TeX": [".tex"], "Visual Basic": [".vb"] } ``` ## Licenses Each example is also annotated with the license of the associated repository. 
There are in total 15 licenses: ```python [ 'mit', 'apache-2.0', 'gpl-3.0', 'gpl-2.0', 'bsd-3-clause', 'agpl-3.0', 'lgpl-3.0', 'lgpl-2.1', 'bsd-2-clause', 'cc0-1.0', 'epl-1.0', 'mpl-2.0', 'unlicense', 'isc', 'artistic-2.0' ] ``` ## Dataset Statistics The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below: ![dataset-statistics](https://huggingface.co/datasets/codeparrot/github-code/resolve/main/github-code-stats-alpha.png) | | Language |File Count| Size (GB)| |---:|:-------------|---------:|-------:| | 0 | Java | 19548190 | 107.70 | | 1 | C | 14143113 | 183.83 | | 2 | JavaScript | 11839883 | 87.82 | | 3 | HTML | 11178557 | 118.12 | | 4 | PHP | 11177610 | 61.41 | | 5 | Markdown | 8464626 | 23.09 | | 6 | C++ | 7380520 | 87.73 | | 7 | Python | 7226626 | 52.03 | | 8 | C# | 6811652 | 36.83 | | 9 | Ruby | 4473331 | 10.95 | | 10 | GO | 2265436 | 19.28 | | 11 | TypeScript | 1940406 | 24.59 | | 12 | CSS | 1734406 | 22.67 | | 13 | Shell | 1385648 | 3.01 | | 14 | Scala | 835755 | 3.87 | | 15 | Makefile | 679430 | 2.92 | | 16 | SQL | 656671 | 5.67 | | 17 | Lua | 578554 | 2.81 | | 18 | Perl | 497949 | 4.70 | | 19 | Dockerfile | 366505 | 0.71 | | 20 | Haskell | 340623 | 1.85 | | 21 | Rust | 322431 | 2.68 | | 22 | TeX | 251015 | 2.15 | | 23 | Batchfile | 236945 | 0.70 | | 24 | CMake | 175282 | 0.54 | | 25 | Visual Basic | 155652 | 1.91 | | 26 | FORTRAN | 142038 | 1.62 | | 27 | PowerShell | 136846 | 0.69 | | 28 | Assembly | 82905 | 0.78 | | 29 | Julia | 58317 | 0.29 | ## Dataset Creation The dataset was created in two steps: 1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_. 2. 
Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)). ## Considerations for Using the Data The dataset consists of source code from a wide range of repositories. As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames. ## Releases You can load any older version of the dataset with the `revision` argument: ```Python ds = load_dataset("codeparrot/github-code", revision="v1.0") ``` ### v1.0 - Initial release of dataset - The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_ ### v1.1 - Fix missing Scala/TypeScript - Fix deduplication issue with inconsistent Python `hash` - The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
Aditya78b/codeparrot-java-all
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:unknown", "language:code", "license:other", "region:us" ]
2023-08-23T06:51:10+00:00
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "github-code"}
2023-08-23T06:56:43+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us
GitHub Code Dataset =================== Dataset Description ------------------- The GitHub Code dataset consists of 115M code files from GitHub in 32 programming languages with 60 extensions totaling in 1TB of data. The dataset was created from the public GitHub dataset on Google BiqQuery. ### How to use it The GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code: You can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration: We also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages: Naturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage: Data Structure -------------- ### Data Instances ### Data Fields Field: code, Type: string, Description: content of source file Field: repo\_name, Type: string, Description: name of the GitHub repository Field: path, Type: string, Description: path of file in GitHub repository Field: language, Type: string, Description: programming language as inferred by extension Field: license, Type: string, Description: license of GitHub repository Field: size, Type: int, Description: size of source file in bytes ### Data Splits The dataset only contains a train split. Languages --------- The dataset contains 30 programming languages with over 60 extensions: Licenses -------- Each example is also annotated with the license of the associated repository. 
There are in total 15 licenses: Dataset Statistics ------------------ The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below: !dataset-statistics Dataset Creation ---------------- The dataset was created in two steps: 1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*. 2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script here). Considerations for Using the Data --------------------------------- The dataset consists of source code from a wide range of repositories. As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames. Releases -------- You can load any older version of the dataset with the 'revision' argument: ### v1.0 * Initial release of dataset * The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1* ### v1.1 * Fix missing Scala/TypeScript * Fix deduplication issue with inconsistent Python 'hash' * The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*
[ "### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nLanguages\n---------\n\n\nThe dataset contains 30 programming languages with over 60 extensions:\n\n\nLicenses\n--------\n\n\nEach example is also annotated with the license of the associated repository. 
There are in total 15 licenses:\n\n\nDataset Statistics\n------------------\n\n\nThe dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:\n\n\n!dataset-statistics\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created in two steps:\n\n\n1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*.\n2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script here).\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of source code from a wide range of repositories. As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames.\n\n\nReleases\n--------\n\n\nYou can load any older version of the dataset with the 'revision' argument:", "### v1.0\n\n\n* Initial release of dataset\n* The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1*", "### v1.1\n\n\n* Fix missing Scala/TypeScript\n* Fix deduplication issue with inconsistent Python 'hash'\n* The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us \n", "### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. 
Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nLanguages\n---------\n\n\nThe dataset contains 30 programming languages with over 60 extensions:\n\n\nLicenses\n--------\n\n\nEach example is also annotated with the license of the associated repository. There are in total 15 licenses:\n\n\nDataset Statistics\n------------------\n\n\nThe dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:\n\n\n!dataset-statistics\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created in two steps:\n\n\n1. Files of with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query here). The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*.\n2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script here).\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of source code from a wide range of repositories. 
As such they can potentially include harmful or biased code as well as sensitive information like passwords or usernames.\n\n\nReleases\n--------\n\n\nYou can load any older version of the dataset with the 'revision' argument:", "### v1.0\n\n\n* Initial release of dataset\n* The query was executed on *Feb 14, 2022, 12:03:16 PM UTC+1*", "### v1.1\n\n\n* Fix missing Scala/TypeScript\n* Fix deduplication issue with inconsistent Python 'hash'\n* The query was executed on *Mar 16, 2022, 6:23:39 PM UTC+1*" ]
[ 75, 238, 6, 115, 312, 34, 48 ]
[ "passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-other #region-us \n### How to use it\n\n\nThe GitHub Code dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following two lines of code:\n\n\nYou can see that besides the code, repo name, and path also the programming language, license, and the size of the file are part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) in the dataset. Just pass the list of languages as a list. E.g. if your dream is to build a Codex model for Dockerfiles use the following configuration:\n\n\nWe also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:\n\n\nNaturally, you can also download the full dataset. Note that this will download ~300GB compressed text data and the uncompressed dataset will take up ~1TB of storage:\n\n\nData Structure\n--------------### Data Instances### Data Fields\n\n\nField: code, Type: string, Description: content of source file\nField: repo\\_name, Type: string, Description: name of the GitHub repository\nField: path, Type: string, Description: path of file in GitHub repository\nField: language, Type: string, Description: programming language as inferred by extension\nField: license, Type: string, Description: license of GitHub repository\nField: size, Type: int, Description: size of source file in bytes" ]
fb7fde184c169af3bdf7cd363ebdfec3ec4ea646
# Dataset of i_400 (Kantai Collection) This is the dataset of i_400 (Kantai Collection), containing 130 images and their tags. The core tags of this character are `long_hair, black_hair, headgear, bangs, black_eyes, purple_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 130 | 102.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_400_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 130 | 66.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_400_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 279 | 137.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_400_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 130 | 94.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_400_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 279 | 181.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_400_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/i_400_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_one-piece_swimsuit, orange_sailor_collar, sailor_shirt, school_swimsuit, sleeveless_shirt, solo, swimsuit_under_clothes, white_shirt, bare_arms, looking_at_viewer, open_mouth, smile, white_background, black_one-piece_swimsuit, simple_background, cowboy_shot, standing, tanlines, teeth | | 1 | 6 | 
![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, wrist_cuffs, black_leotard, looking_at_viewer, one-piece_tan, open_mouth, alternate_costume, blush, cowboy_shot, rabbit_tail, simple_background, smile, white_background, bare_legs, blue_leotard, bowtie, dated, small_breasts | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_one-piece_swimsuit | orange_sailor_collar | sailor_shirt | school_swimsuit | sleeveless_shirt | solo | swimsuit_under_clothes | white_shirt | bare_arms | looking_at_viewer | open_mouth | smile | white_background | black_one-piece_swimsuit | simple_background | cowboy_shot | standing | tanlines | teeth | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | black_leotard | one-piece_tan | alternate_costume | blush | rabbit_tail | bare_legs | blue_leotard | bowtie | dated | small_breasts | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------------|:-----------------------|:---------------|:------------------|:-------------------|:-------|:-------------------------|:--------------|:------------|:--------------------|:-------------|:--------|:-------------------|:---------------------------|:--------------------|:--------------|:-----------|:-----------|:--------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:----------------|:----------------|:--------------------|:--------|:--------------|:------------|:---------------|:---------|:--------|:----------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | 
![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | | | X | | | | X | X | X | X | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/i_400_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T06:53:46+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:53:19+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of i\_400 (Kantai Collection) ===================================== This is the dataset of i\_400 (Kantai Collection), containing 130 images and their tags. The core tags of this character are 'long\_hair, black\_hair, headgear, bangs, black\_eyes, purple\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
1a32483a26de85b590d97f0920e2de9a8a0d2bad
Fine-tune GPT-3.5 to act as an Observer: instead of answering questions, it analyzes user inputs, provides instructions, and assigns tasks to Answer GPT. This dataset consists of question-and-answer data from user queries on Quora (in English) and Zhihu (in Chinese), used to fine-tune the GPT-3.5 model.
JosephusCheung/observer
[ "language:en", "language:zh", "license:gpl-3.0", "region:us" ]
2023-08-23T07:08:46+00:00
{"language": ["en", "zh"], "license": "gpl-3.0"}
2023-08-23T07:13:03+00:00
[]
[ "en", "zh" ]
TAGS #language-English #language-Chinese #license-gpl-3.0 #region-us
Fine-tune GPT-3.5 to act as an Observer: instead of answering questions, it analyzes user inputs, provides instructions, and assigns tasks to Answer GPT. This dataset consists of question-and-answer data from user queries on Quora (in English) and Zhihu (in Chinese), used to fine-tune the GPT-3.5 model.
[]
[ "TAGS\n#language-English #language-Chinese #license-gpl-3.0 #region-us \n" ]
[ 23 ]
[ "passage: TAGS\n#language-English #language-Chinese #license-gpl-3.0 #region-us \n" ]
642922f71c4cf88f8fbc89ec14050fafbff4f6da
# Dataset Card for "pubmed_subset_new" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zxvix/pubmed_subset_new
[ "region:us" ]
2023-08-23T07:08:51+00:00
{"dataset_info": {"features": [{"name": "MedlineCitation", "struct": [{"name": "PMID", "dtype": "int32"}, {"name": "DateCompleted", "struct": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}, {"name": "NumberOfReferences", "dtype": "int32"}, {"name": "DateRevised", "struct": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}, {"name": "Article", "struct": [{"name": "Abstract", "struct": [{"name": "AbstractText", "dtype": "string"}]}, {"name": "ArticleTitle", "dtype": "string"}, {"name": "AuthorList", "struct": [{"name": "Author", "sequence": [{"name": "LastName", "dtype": "string"}, {"name": "ForeName", "dtype": "string"}, {"name": "Initials", "dtype": "string"}, {"name": "CollectiveName", "dtype": "string"}]}]}, {"name": "Language", "dtype": "string"}, {"name": "GrantList", "struct": [{"name": "Grant", "sequence": [{"name": "GrantID", "dtype": "string"}, {"name": "Agency", "dtype": "string"}, {"name": "Country", "dtype": "string"}]}]}, {"name": "PublicationTypeList", "struct": [{"name": "PublicationType", "sequence": "string"}]}]}, {"name": "MedlineJournalInfo", "struct": [{"name": "Country", "dtype": "string"}]}, {"name": "ChemicalList", "struct": [{"name": "Chemical", "sequence": [{"name": "RegistryNumber", "dtype": "string"}, {"name": "NameOfSubstance", "dtype": "string"}]}]}, {"name": "CitationSubset", "dtype": "string"}, {"name": "MeshHeadingList", "struct": [{"name": "MeshHeading", "sequence": [{"name": "DescriptorName", "dtype": "string"}, {"name": "QualifierName", "dtype": "string"}]}]}]}, {"name": "PubmedData", "struct": [{"name": "ArticleIdList", "sequence": [{"name": "ArticleId", "sequence": "string"}]}, {"name": "PublicationStatus", "dtype": "string"}, {"name": "History", "struct": [{"name": "PubMedPubDate", "sequence": [{"name": "Year", "dtype": "int32"}, {"name": "Month", "dtype": "int32"}, {"name": "Day", "dtype": "int32"}]}]}, 
{"name": "ReferenceList", "sequence": [{"name": "Citation", "dtype": "string"}, {"name": "CitationId", "dtype": "int32"}]}]}, {"name": "text", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3033204166.457245, "num_examples": 1000000}, {"name": "test", "num_bytes": 3033204.166457245, "num_examples": 1000}], "download_size": 1638343655, "dataset_size": 3036237370.623702}}
2023-08-23T08:04:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "pubmed_subset_new" More Information needed
[ "# Dataset Card for \"pubmed_subset_new\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"pubmed_subset_new\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_new\"\n\nMore Information needed" ]
3588c90df6a53c5116cc8aa235c5110147495768
# Dataset Card for "CtoD_ForFineTune" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vincenttttt/CtoD_ForFineTune
[ "region:us" ]
2023-08-23T07:17:56+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1621745, "num_examples": 3673}], "download_size": 476099, "dataset_size": 1621745}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-23T07:17:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CtoD_ForFineTune" More Information needed
[ "# Dataset Card for \"CtoD_ForFineTune\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CtoD_ForFineTune\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"CtoD_ForFineTune\"\n\nMore Information needed" ]
7b8ffefdff4370570aa9de112f70a86c224131d7
dddd
jojobroo/dfd
[ "region:us" ]
2023-08-23T07:29:30+00:00
{}
2023-08-23T07:29:44+00:00
[]
[]
TAGS #region-us
dddd
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
5191cedc6ff64567524da7a13a3962ac53bfdfff
# Dataset Card for "4e46e0e8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/4e46e0e8
[ "region:us" ]
2023-08-23T07:31:16+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1338, "dataset_size": 186}}
2023-08-23T07:31:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "4e46e0e8" More Information needed
[ "# Dataset Card for \"4e46e0e8\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"4e46e0e8\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"4e46e0e8\"\n\nMore Information needed" ]
c06b9871a21ba5d57602b92b2ff163bfd9fde866
# AutoTrain Dataset for project: flan-t5-tuning ## Dataset Description This dataset has been automatically processed by AutoTrain for project flan-t5-tuning. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "target": "G(!( oGupGnpHlFihSN ))", "source": "it never happens that oGupGnpHlFihSN", "feat_Unnamed: 2": null }, { "target": "G(!( uJwMVmQcOjk & NFbgbwYf & uwbnvOQXgDVD ))", "source": "at no time uJwMVmQcOjk and, at the same time, NFbgbwYf and uwbnvOQXgDVD", "feat_Unnamed: 2": null } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "target": "Value(dtype='string', id=None)", "source": "Value(dtype='string', id=None)", "feat_Unnamed: 2": "Value(dtype='float64', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 2399 | | valid | 600 |
XYLF/autotrain-data-flan-t5-tuning
[ "task_categories:translation", "region:us" ]
2023-08-23T07:39:30+00:00
{"task_categories": ["translation"]}
2023-08-23T07:45:58+00:00
[]
[]
TAGS #task_categories-translation #region-us
AutoTrain Dataset for project: flan-t5-tuning ============================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project flan-t5-tuning. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
[ "TAGS\n#task_categories-translation #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 15, 27, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-translation #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:" ]
646f628127c34739b8441af56d9fff58e9f5a397
# Dataset of mikura (Kantai Collection) This is the dataset of mikura (Kantai Collection), containing 216 images and their tags. The core tags of this character are `long_hair, twintails, low_twintails, grey_hair, green_eyes, hat, sailor_hat, white_headwear`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 216 | 174.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikura_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 216 | 126.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikura_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 428 | 239.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikura_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 216 | 164.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikura_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 428 | 297.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mikura_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mikura_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_sailor_collar, full_body, pleated_skirt, puffy_short_sleeves, red_skirt, sailor_shirt, white_gloves, white_shirt, white_socks, white_background, hip_vent, simple_background, solo, standing, looking_at_viewer, smile | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_sailor_collar, hip_vent, looking_at_viewer, pleated_skirt, puffy_short_sleeves, red_skirt, sailor_shirt, solo, white_gloves, white_panties, white_shirt, dolphin, cowboy_shot, simple_background, white_background, smile, bangs, serafuku | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, adapted_turret, blue_sailor_collar, cannon, hip_vent, machinery, pleated_skirt, puffy_short_sleeves, red_skirt, sailor_shirt, simple_background, white_gloves, white_panties, white_shirt, white_socks, full_body, rigging, serafuku, solo, undershirt, smile, standing, white_background, grey_background, looking_at_viewer | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blue_sailor_collar, puffy_short_sleeves, simple_background, solo, upper_body, white_background, white_shirt, bangs, sailor_shirt, looking_at_viewer, white_gloves | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | blush, long_sleeves, 1girl, frills, pantyhose, dress, red_footwear, red_hairband, alternate_costume, bangs, looking_at_viewer, simple_background, smile, solo, bag, full_body, holding, mary_janes, open_mouth, shirt, skirt, light_brown_hair, socks, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_sailor_collar | full_body | pleated_skirt | puffy_short_sleeves | red_skirt | sailor_shirt | white_gloves | white_shirt | white_socks | white_background | hip_vent | simple_background | solo | standing | looking_at_viewer | smile | white_panties | dolphin | 
cowboy_shot | bangs | serafuku | adapted_turret | cannon | machinery | rigging | undershirt | grey_background | upper_body | blush | long_sleeves | frills | pantyhose | dress | red_footwear | red_hairband | alternate_costume | bag | holding | mary_janes | open_mouth | shirt | skirt | light_brown_hair | socks | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:------------|:----------------|:----------------------|:------------|:---------------|:---------------|:--------------|:--------------|:-------------------|:-----------|:--------------------|:-------|:-----------|:--------------------|:--------|:----------------|:----------|:--------------|:--------|:-----------|:-----------------|:---------|:------------|:----------|:-------------|:------------------|:-------------|:--------|:---------------|:---------|:------------|:--------|:---------------|:---------------|:--------------------|:------|:----------|:-------------|:-------------|:--------|:--------|:-------------------|:--------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | X | X | X | | X | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | 
X | X | X | X | X | X | X | X | X | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | X | | X | X | X | | X | | X | X | | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | | | | | | | | X | | X | X | | X | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/mikura_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T07:45:30+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T10:04:47+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mikura (Kantai Collection) ===================================== This is the dataset of mikura (Kantai Collection), containing 216 images and their tags. The core tags of this character are 'long\_hair, twintails, low\_twintails, grey\_hair, green\_eyes, hat, sailor\_hat, white\_headwear', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e20662be2afdd5ec5ca98cb880d0e0637951eab8
# Dataset of asagumo/ๆœ้›ฒ (Kantai Collection) This is the dataset of asagumo/ๆœ้›ฒ (Kantai Collection), containing 404 images and their tags. The core tags of this character are `brown_hair, long_hair, twintails, hair_ribbon, ribbon, grey_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 404 | 314.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asagumo_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 404 | 215.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asagumo_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 856 | 433.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asagumo_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 404 | 293.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asagumo_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 856 | 553.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asagumo_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/asagumo_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, white_shirt, grey_jacket, white_headband, buttons, grey_skirt, long_sleeves, pleated_skirt, simple_background, blue_necktie, closed_mouth, ponytail, white_background, collared_shirt, cowboy_shot, looking_at_viewer, official_alternate_costume, school_uniform | | 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, grey_skirt, pleated_skirt, solo, white_shirt, blue_ascot, short_sleeves, suspender_skirt, arm_warmers, looking_at_viewer, simple_background, white_background, cowboy_shot, twitter_username, collared_shirt, black_thighhighs, school_uniform, smile | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, arm_warmers, ascot, black_thighhighs, looking_at_viewer, pleated_skirt, school_uniform, solo, suspenders, blush, grey_skirt, white_shirt, sitting, smile | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, solo, cowboy_shot, blue_bikini, small_breasts, white_background, flat_chest, simple_background, blue_sky, blush, cloud, day, outdoors, side-tie_bikini_bottom, standing, twitter_username | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | white_shirt | grey_jacket | white_headband | buttons | grey_skirt | long_sleeves | pleated_skirt | simple_background | blue_necktie | closed_mouth | ponytail | white_background | collared_shirt | cowboy_shot | looking_at_viewer | official_alternate_costume | school_uniform | blue_ascot | short_sleeves | suspender_skirt | arm_warmers | twitter_username | black_thighhighs | smile | ascot | suspenders | blush | sitting | blue_bikini | small_breasts | flat_chest | blue_sky | cloud | day | outdoors | side-tie_bikini_bottom | standing | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:--------------|:-----------------|:----------|:-------------|:---------------|:----------------|:--------------------|:---------------|:---------------|:-----------|:-------------------|:-----------------|:--------------|:--------------------|:-----------------------------|:-----------------|:-------------|:----------------|:------------------|:--------------|:-------------------|:-------------------|:--------|:--------|:-------------|:--------|:----------|:--------------|:----------------|:-------------|:-----------|:--------|:------|:-----------|:-------------------------|:-----------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | | | X | | X | X | | | | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | X | | X | | | | | | | | X | | X | | | | X | | X | X | X | X | X | X | | | | | | | | | | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | | X | | | | X | | X | X | | | | | | | X | | | | | X | | X | X | X | X | X | X | X | X | X |
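Besides waifuc's `LocalSource`, the IMG+TXT packages (e.g. `dataset-800.zip`) can be read with plain Python: each image ships with a same-stem `.txt` file holding its comma-separated tags. A minimal stdlib sketch — the helper name `load_img_txt_pairs` is ours, and it assumes the archive was extracted into a flat directory of `.png`/`.jpg` images:

```python
from pathlib import Path

def load_img_txt_pairs(dataset_dir):
    """Return (image_path, tag_list) pairs for the IMG+TXT layout:
    every image file is paired with a same-stem .txt file of
    comma-separated tags."""
    pairs = []
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() not in {'.png', '.jpg', '.jpeg'}:
            continue
        txt = img.with_suffix('.txt')
        if txt.exists():  # skip images whose tag file is missing
            tags = [t.strip() for t in txt.read_text(encoding='utf-8').split(',')]
            pairs.append((img, tags))
    return pairs
```

This mirrors what `LocalSource` exposes via `item.meta['tags']`, without the waifuc dependency.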
CyberHarem/asagumo_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T07:51:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T21:00:29+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of asagumo/ๆœ้›ฒ (Kantai Collection) ========================================= This is the dataset of asagumo/ๆœ้›ฒ (Kantai Collection), containing 404 images and their tags. The core tags of this character are 'brown\_hair, long\_hair, twintails, hair\_ribbon, ribbon, grey\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d23ced37ab41a2a19bbb38cb7284e8db42ede5ac
# Dataset Card for "gradio-backticks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ArmelR/gradio-backticks
[ "region:us" ]
2023-08-23T07:54:26+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49489556.86150187, "num_examples": 18601}, {"name": "test", "num_bytes": 5764466.444776864, "num_examples": 1590}], "download_size": 27504053, "dataset_size": 55254023.306278735}}
2023-08-23T07:55:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gradio-backticks" More Information needed
[ "# Dataset Card for \"gradio-backticks\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gradio-backticks\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gradio-backticks\"\n\nMore Information needed" ]
4bd1935c6ecdbe48500e63d5512dcaa8ee62b09a
# Dataset Card for emotion-custom This dataset has been created with [Argilla](https://docs.argilla.io). As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets). ## Dataset Description - **Homepage:** https://argilla.io - **Repository:** https://github.com/argilla-io/argilla - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla. * Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`. * The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code: ```python import argilla as rg ds = rg.FeedbackDataset.from_huggingface("davidberenstein1957/emotion-custom") ``` ### Load with `datasets` To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code: ```python from datasets import load_dataset ds = load_dataset("davidberenstein1957/emotion-custom") ``` ### Supported Tasks and Leaderboards This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. 
The dataset structure is described in the [Dataset Structure section](#dataset-structure). There are no leaderboards associated with this dataset. ### Languages [More Information Needed] ## Dataset Structure ### Data in Argilla The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**. The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. | Field Name | Title | Type | Required | Markdown | | ---------- | ----- | ---- | -------- | -------- | | text | Text | text | True | False | The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking. | Question Name | Title | Type | Required | Description | Values/Labels | | ------------- | ----- | ---- | -------- | ----------- | ------------- | | sentiment | Sentiment | label_selection | True | N/A | ['positive', 'neutral', 'negative'] | | mixed-emotion | Mixed-emotion | multi_label_selection | True | N/A | ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'] | The **suggestions** are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata". The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. 
For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`. | Metadata Name | Title | Type | Values | Visible for Annotators | | ------------- | ----- | ---- | ------ | ---------------------- | The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section. ### Data Instances An example of a dataset instance in Argilla looks as follows: ```json { "external_id": null, "fields": { "text": "i didnt feel humiliated" }, "metadata": {}, "responses": [ { "status": "submitted", "user_id": "f2c5232d-10c8-4468-8044-6b489e9db9b6", "values": { "mixed-emotion": { "value": [ "fear" ] }, "sentiment": { "value": "positive" } } } ], "suggestions": [], "vectors": {} } ``` While the same record in HuggingFace `datasets` looks as follows: ```json { "external_id": null, "metadata": "{}", "mixed-emotion": [ { "status": "submitted", "user_id": "f2c5232d-10c8-4468-8044-6b489e9db9b6", "value": [ "fear" ] } ], "mixed-emotion-suggestion": null, "mixed-emotion-suggestion-metadata": { "agent": null, "score": null, "type": null }, "sentiment": [ { "status": "submitted", "user_id": "f2c5232d-10c8-4468-8044-6b489e9db9b6", "value": "positive" } ], "sentiment-suggestion": null, "sentiment-suggestion-metadata": { "agent": null, "score": null, "type": null }, "text": "i didnt feel humiliated" } ``` ### Data Fields Among the dataset fields, we differentiate between the following: * **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. 
* **text** is of type `text`. * **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`. * **sentiment** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative']. * **mixed-emotion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love']. * **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. * (optional) **sentiment-suggestion** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative']. * (optional) **mixed-emotion-suggestion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love']. Additionally, we also have two more fields that are optional and are the following: * **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`. * **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file. ### Data Splits The dataset contains a single split, which is `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation guidelines Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
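The two record layouts shown in Data Instances are mechanically related: each question's responses become a list column, and empty `-suggestion` / `-suggestion-metadata` columns are added alongside. A stdlib sketch of that flattening — the helper `flatten_record` and the hard-coded question names are ours for illustration, not part of Argilla's API:

```python
import json

def flatten_record(record):
    """Flatten an Argilla-style record dict into the HuggingFace
    `datasets` row layout shown above: one list column of responses per
    question, plus null suggestion columns."""
    row = {
        "text": record["fields"]["text"],
        "metadata": json.dumps(record["metadata"]),
        "external_id": record["external_id"],
    }
    for q in ("sentiment", "mixed-emotion"):  # this dataset's questions
        row[q] = [
            {"status": r["status"], "user_id": r["user_id"],
             "value": r["values"][q]["value"]}
            for r in record["responses"] if q in r["values"]
        ]
        row[f"{q}-suggestion"] = None
        row[f"{q}-suggestion-metadata"] = {"agent": None, "score": None,
                                           "type": None}
    return row
```

Applied to the Argilla instance above, this reproduces the flattened `datasets` record shown next to it.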
davidberenstein1957/emotion-custom
[ "size_categories:n<1K", "rlfh", "argilla", "human-feedback", "region:us" ]
2023-08-23T08:28:26+00:00
{"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]}
2023-12-16T11:26:44+00:00
[]
[]
TAGS #size_categories-n<1K #rlfh #argilla #human-feedback #region-us
Dataset Card for emotion-custom =============================== This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'. Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset contains: * A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla. * Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'. * The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla. ### Load with Argilla To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code: ### Load with 'datasets' To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code: ### Supported Tasks and Leaderboards This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section. There are no leaderboards associated with this dataset. ### Languages Dataset Structure ----------------- ### Data in Argilla The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines. The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. 
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking. The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata". The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section. ### Data Instances An example of a dataset instance in Argilla looks as follows: While the same record in HuggingFace 'datasets' looks as follows: ### Data Fields Among the dataset fields, we differentiate between the following: * Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions. + text is of type 'text'. * Questions: These are the questions that will be asked to the annotators. 
They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'. + sentiment is of type 'label\_selection' with the following allowed values ['positive', 'neutral', 'negative']. + mixed-emotion is of type 'multi\_label\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love']. * Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable. + (optional) sentiment-suggestion is of type 'label\_selection' with the following allowed values ['positive', 'neutral', 'negative']. + (optional) mixed-emotion-suggestion is of type 'multi\_label\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love']. Additionally, we also have two more fields that are optional and are the following: * metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'. * external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file. 
### Data Splits The dataset contains a single split, which is 'train'. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation guidelines Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions
[ "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. 
They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ sentiment is of type 'label\\_selection' with the following allowed values ['positive', 'neutral', 'negative'].\n\t+ mixed-emotion is of type 'multi\\_label\\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) sentiment-suggestion is of type 'label\\_selection' with the following allowed values ['positive', 'neutral', 'negative'].\n\t+ (optional) mixed-emotion-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nEmotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n", "### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.", "### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:", "### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:", "### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.", "### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:", "### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. 
They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ sentiment is of type 'label\\_selection' with the following allowed values ['positive', 'neutral', 'negative'].\n\t+ mixed-emotion is of type 'multi\\_label\\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) sentiment-suggestion is of type 'label\\_selection' with the following allowed values ['positive', 'neutral', 'negative'].\n\t+ (optional) mixed-emotion-suggestion is of type 'multi\\_label\\_selection' with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. 
This can be useful if you want to link the dataset record to an external resource, such as a database or a file.", "### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation guidelines\n\n\nEmotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
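The record layout described above — a `text` field, a `sentiment` label\_selection question, a `mixed-emotion` multi\_label\_selection question, and optional "-suggestion" columns carrying the same allowed values — can be sketched offline as a plain dictionary plus a small validator. This is a minimal sketch based on the card's schema description only; the function and constant names are illustrative, not an Argilla API:

```python
# Allowed values per question, as listed in the dataset card.
ALLOWED = {
    "sentiment": ["positive", "neutral", "negative"],
    "mixed-emotion": ["joy", "anger", "sadness", "fear", "surprise", "love"],
}

def validate_record(record):
    """Check that suggestion columns only carry values allowed by their question."""
    errors = []
    for question, allowed in ALLOWED.items():
        value = record.get(question + "-suggestion")
        if value is None:
            continue  # suggestions are always optional
        # multi_label_selection suggestions are lists; label_selection are scalars
        values = value if isinstance(value, list) else [value]
        for v in values:
            if v not in allowed:
                errors.append(f"{question}: {v!r} not in allowed values")
    return errors
```

A record such as `{"text": "great news!", "sentiment-suggestion": "positive"}` passes cleanly, while an out-of-vocabulary suggestion value is reported.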
[ 27, 162, 40, 53, 68, 11, 404, 40, 597, 27, 7, 4, 10, 10, 5, 36, 5, 9, 18, 7, 8, 14, 6, 6, 5 ]
[ "passage: TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.### Languages\n\n\nDataset Structure\n-----------------", "passage: ### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. 
They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:" ]
d90f13c3d08cfc804b8a495b891f59b72ea20e4f
# Dataset of daitou (Kantai Collection) This is the dataset of daitou (Kantai Collection), containing 77 images and their tags. The core tags of this character are `black_hair, short_hair, hat, sailor_hat, white_headwear, low_ponytail, brown_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 77 | 32.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/daitou_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 77 | 30.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/daitou_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 121 | 47.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/daitou_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 77 | 32.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/daitou_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 121 | 52.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/daitou_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/daitou_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | blue_sailor_collar, sailor_dress, short_sleeves, white_background, white_dress, simple_background, uwabaki, 1girl, full_body, solo_focus, standing, white_socks, multiple_girls | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_sailor_collar, sailor_dress, short_sleeves, solo, white_dress, simple_background, 
white_background, open_mouth, upper_body | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | blue_sailor_collar, sailor_dress, short_sleeves, solo_focus, white_dress, 2girls, cowboy_shot, looking_at_viewer, smile, blush | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blue_sailor_collar | sailor_dress | short_sleeves | white_background | white_dress | simple_background | uwabaki | 1girl | full_body | solo_focus | standing | white_socks | multiple_girls | solo | open_mouth | upper_body | 2girls | cowboy_shot | looking_at_viewer | smile | blush | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------|:---------------|:----------------|:-------------------|:--------------|:--------------------|:----------|:--------|:------------|:-------------|:-----------|:--------------|:-----------------|:-------|:-------------|:-------------|:---------|:--------------|:--------------------|:--------|:--------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | | | | | | X | X | X | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | | | | X | | | | | | | X | X | X | X | X |
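Besides the waifuc loader above, the IMG+TXT packages can be walked directly once extracted. The sketch below assumes the common convention for such packs — each image sits next to a same-named `.txt` file holding its comma-separated tags — which the card implies by the "IMG+TXT" package type but does not spell out:

```python
import os

# Image extensions to pair with tag files; adjust if the pack uses others.
IMAGE_EXTS = ('.png', '.jpg', '.jpeg', '.webp')

def load_tagged_images(dataset_dir):
    """Pair each image file with the tag list from its same-named .txt file."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        tag_file = os.path.join(dataset_dir, stem + '.txt')
        tags = []
        if os.path.exists(tag_file):
            with open(tag_file, 'r', encoding='utf-8') as f:
                tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

Pointing this at the directory extracted from `dataset-800.zip` yields `(image_path, tags)` pairs ready for training pipelines that expect caption files.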
CyberHarem/daitou_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T08:33:57+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T10:11:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of daitou (Kantai Collection) ===================================== This is the dataset of daitou (Kantai Collection), containing 77 images and their tags. The core tags of this character are 'black\_hair, short\_hair, hat, sailor\_hat, white\_headwear, low\_ponytail, brown\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
09acda381ef3cc9434207d6a2ef535f9c154111a
# Dataset of ancient_destroyer_oni (Kantai Collection) This is the dataset of ancient_destroyer_oni (Kantai Collection), containing 17 images and their tags. The core tags of this character are `black_hair, drill_hair, side_ponytail, long_hair, blue_eyes, mole, mole_under_eye, glowing_eyes, hair_ornament, white_skin, anchor_hair_ornament, breasts, colored_skin, side_drill, small_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 17 | 11.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_oni_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 17 | 8.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_oni_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 31 | 15.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_oni_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 17 | 10.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_oni_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 31 | 18.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ancient_destroyer_oni_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ancient_destroyer_oni_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | abyssal_ship, 1girl, kimono, black_gloves, glowing, hakama_skirt, solo, meiji_schoolgirl_uniform, simple_background, thighhighs, blush, looking_at_viewer, wide_sleeves | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | abyssal_ship | 1girl | kimono | black_gloves | glowing | hakama_skirt | solo | meiji_schoolgirl_uniform | simple_background | thighhighs | blush | looking_at_viewer | wide_sleeves | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------|:--------|:---------|:---------------|:----------|:---------------|:-------|:---------------------------|:--------------------|:-------------|:--------|:--------------------|:---------------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/ancient_destroyer_oni_kantaicollection
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-23T08:36:41+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T09:55:17+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ancient\_destroyer\_oni (Kantai Collection) ====================================================== This is the dataset of ancient\_destroyer\_oni (Kantai Collection), containing 17 images and their tags. The core tags of this character are 'black\_hair, drill\_hair, side\_ponytail, long\_hair, blue\_eyes, mole, mole\_under\_eye, glowing\_eyes, hair\_ornament, white\_skin, anchor\_hair\_ornament, breasts, colored\_skin, side\_drill, small\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]