| Column          | Kind          |  Min |   Max |
|:----------------|:--------------|-----:|------:|
| sha             | stringlengths |   40 |    40 |
| text            | stringlengths |    1 | 13.4M |
| id              | stringlengths |    2 |   117 |
| tags            | listlengths   |    1 | 7.91k |
| created_at      | stringlengths |   25 |    25 |
| metadata        | stringlengths |    2 |  875k |
| last_modified   | stringlengths |   25 |    25 |
| arxiv           | listlengths   |    0 |    25 |
| languages       | listlengths   |    0 | 7.91k |
| tags_str        | stringlengths |   17 |  159k |
| text_str        | stringlengths |    1 |  447k |
| text_lists      | listlengths   |    0 |   352 |
| processed_texts | listlengths   |    1 |   353 |
| tokens_length   | listlengths   |    1 |   353 |
| input_texts     | listlengths   |    1 |    40 |
bb18788d2963cf9bcec3c130c314ace4f3632d51
# Dataset Card for "Zeroshot_Gold_Test-1K_other" This dataset is a test dataset for the Zeroshot models. It has 1000 data in a prompt format exclusively for testing with class 'other' in Brazilian Portuguese. Prompt: ``` "Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'other' \\n\\nTweet: frase \\n\\nLabel: ``` ## How to load and use this dataset: ``` from datasets import load_dataset dataset = load_dataset("Weni/Zeroshot_Gold_Test-1K_other") dataset ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Weni/Zeroshot_Test-Gold-1K_other
[ "region:us" ]
2023-09-25T17:49:33+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "texto", "dtype": "string"}, {"name": "true_class", "dtype": "string"}, {"name": "BERT", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 186270, "num_examples": 1000}], "download_size": 55029, "dataset_size": 186270}}
2023-09-25T17:56:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Zeroshot_Gold_Test-1K_other" This dataset is a test dataset for the Zeroshot models. It has 1000 data in a prompt format exclusively for testing with class 'other' in Brazilian Portuguese. Prompt: ## How to load and use this dataset: More Information needed
[ "# Dataset Card for \"Zeroshot_Gold_Test-1K_other\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'other' in Brazilian Portuguese.\n\nPrompt:", "## How to load and use this dataset:\n\n\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Zeroshot_Gold_Test-1K_other\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'other' in Brazilian Portuguese.\n\nPrompt:", "## How to load and use this dataset:\n\n\n\nMore Information needed" ]
[ 6, 60, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Zeroshot_Gold_Test-1K_other\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'other' in Brazilian Portuguese.\n\nPrompt:## How to load and use this dataset:\n\n\n\nMore Information needed" ]
41e1014abb44d6752162f3b548e73f6cf5d2002f
# Dataset of uehara_ayumu/上原歩夢/우에하라아유무 (Love Live! School Idol Festival ALL STARS)

This is the dataset of uehara_ayumu/上原歩夢/우에하라아유무 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.

The core tags of this character are `bangs, medium_hair, yellow_eyes, hair_bun, hair_ornament, single_side_bun, braid, blunt_bangs, pink_hair, orange_hair, bow, swept_bangs, hairclip, braided_bun, half_updo`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download | Type       | Description |
|:-----------------|-------:|:-----------|:---------|:-----------|:------------|
| raw              |    500 | 864.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              |    500 | 400.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  |   1286 | 926.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             |    500 | 715.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 |   1286 | 1.45 GiB   | [Download](https://huggingface.co/datasets/CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blazer, hairpin, nijigasaki_academy_school_uniform, solo, upper_body, blush, looking_at_viewer, shirt, black_jacket, long_sleeves, ribbon, smile | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_jacket, blush, long_sleeves, looking_at_viewer, nijigasaki_academy_school_uniform, solo, neck_ribbon, blazer, red_ribbon, winter_uniform, collared_shirt, plaid_skirt, pleated_skirt, white_shirt, open_mouth, white_skirt, breasts, :d, green_eyes, red_hair, single_hair_bun, hairpin, white_background | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, hairpin, nijigasaki_academy_school_uniform, short_sleeves, solo, summer_uniform, collared_shirt, looking_at_viewer, neck_ribbon, smile, sweater_vest, upper_body, simple_background, single_hair_bun, white_background, blush, white_shirt | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, collared_shirt, nijigasaki_academy_school_uniform, short_sleeves, solo, summer_uniform, sweater_vest, white_background, blush, looking_at_viewer, neck_ribbon, upper_body, blue_shirt, :d, green_eyes, open_mouth, red_hair, simple_background, breasts, white_shirt | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blue_skirt, blush, collared_shirt, looking_at_viewer, neck_ribbon, nijigasaki_academy_school_uniform, pink_ribbon, plaid_skirt, pleated_skirt, short_sleeves, solo, summer_uniform, sweater_vest, white_shirt, dress_shirt, medium_breasts, open_mouth, green_eyes, :d, blue_shirt, pink_bowtie, red_hair, single_hair_bun | | 5 | 17 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | hairpin, 1girl, short_sleeves, solo, looking_at_viewer, white_shirt, blue_skirt, smile, green_eyes, blush, earrings, see-through_sleeves, simple_background, blue_bow, white_background, hand_on_own_chest, star_(symbol) | | 6 | 10 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, pink_dress, solo, earrings, hair_flower, pearl_necklace, white_gloves, hairpin, pink_flower | | 7 | 22 | 
![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, solo, pearl_necklace, earrings, hair_flower, pink_dress, white_gloves, looking_at_viewer, white_dress, pink_bow, collarbone, open_mouth, blush, pink_flower, :d, puffy_short_sleeves, waist_bow, white_background | | 8 | 15 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, pearl_necklace, solo, hair_bow, pink_dress, earrings, hairpin, looking_at_viewer, smile, bead_necklace, pink_bow, blush, wrist_cuffs, petals, open_mouth | | 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, hair_flower, smile, solo, earrings, white_gloves, detached_sleeves, hair_bow, looking_at_viewer, pink_rose, upper_body, blush, collarbone, pink_bow, pink_dress, ribbon, bare_shoulders, white_background, breasts, green_eyes, open_mouth, heart | | 10 | 7 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, navel, solo, fingerless_gloves, hair_bow, wrist_cuffs, earrings, epaulettes, looking_at_viewer, ponytail, red_bowtie, sleeveless, smile, white_skirt, red_cape, red_gloves, white_shirt, blush, frills, green_eyes, hairband, midriff | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, cleavage, detached_collar, fake_animal_ears, looking_at_viewer, medium_breasts, playboy_bunny, rabbit_ears, solo, strapless_leotard, bowtie, smile, bare_shoulders, black_leotard, blush, fishnet_pantyhose, green_eyes, hairpin, rabbit_tail, red_hair, white_background, wrist_cuffs, collarbone, fake_tail, simple_background, single_hair_bun | | 12 | 5 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | 1girl, blush, looking_at_viewer, smile, solo, long_sleeves, outdoors, red_hair, shirt, collarbone, night_sky, open_mouth, own_hands_together, green_eyes, skirt, starry_sky | | 13 | 9 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | 1girl, fake_animal_ears, rabbit_ears, solo, blush, looking_at_viewer, striped, rabbit_tail, red_bow, shirt, arm_warmers, bowtie, hair_bow, red_shorts, suspenders, white_background, hairband, short_shorts, sleeveless, green_eyes, simple_background | | 14 | 6 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | 1girl, hat, solo, white_gloves, collarbone, hairpin, strawberry, earrings, frills, short_sleeves, thighhighs, cake, dress, looking_at_viewer, smile | | 15 | 9 | ![](samples/15/clu15-sample0.png) | ![](samples/15/clu15-sample1.png) | ![](samples/15/clu15-sample2.png) | ![](samples/15/clu15-sample3.png) | ![](samples/15/clu15-sample4.png) | 1girl, cleavage, cloud, looking_at_viewer, medium_breasts, ocean, outdoors, solo, 
beach, collarbone, day, blush, blue_sky, navel, smile, white_bikini, bikini_skirt, green_eyes, red_hair | | 16 | 10 | ![](samples/16/clu16-sample0.png) | ![](samples/16/clu16-sample1.png) | ![](samples/16/clu16-sample2.png) | ![](samples/16/clu16-sample3.png) | ![](samples/16/clu16-sample4.png) | 1girl, china_dress, double_bun, solo, bun_cover, holding_fan, thigh_strap, looking_at_viewer, medium_breasts, twin_braids, wrist_cuffs, folded_fan, cleavage_cutout, earrings, sidelocks, smile, blush, tongue_out | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blazer | hairpin | nijigasaki_academy_school_uniform | solo | upper_body | blush | looking_at_viewer | shirt | black_jacket | long_sleeves | ribbon | smile | neck_ribbon | red_ribbon | winter_uniform | collared_shirt | plaid_skirt | pleated_skirt | white_shirt | open_mouth | white_skirt | breasts | :d | green_eyes | red_hair | single_hair_bun | white_background | short_sleeves | summer_uniform | sweater_vest | simple_background | blue_shirt | blue_skirt | pink_ribbon | dress_shirt | medium_breasts | pink_bowtie | earrings | see-through_sleeves | blue_bow | hand_on_own_chest | star_(symbol) | pink_dress | hair_flower | pearl_necklace | white_gloves | pink_flower | white_dress | pink_bow | collarbone | puffy_short_sleeves | waist_bow | hair_bow | bead_necklace | wrist_cuffs | petals | detached_sleeves | pink_rose | bare_shoulders | heart | navel | fingerless_gloves | epaulettes | ponytail | red_bowtie | sleeveless | red_cape | red_gloves | frills | hairband | midriff | cleavage | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | bowtie | black_leotard | fishnet_pantyhose | rabbit_tail | fake_tail | outdoors | night_sky | own_hands_together | skirt | starry_sky | striped | red_bow | arm_warmers | red_shorts | suspenders | short_shorts | hat | strawberry | thighhighs | cake | dress | cloud | ocean | beach | day | blue_sky | white_bikini | bikini_skirt | china_dress | double_bun | bun_cover | holding_fan | thigh_strap | twin_braids | folded_fan | cleavage_cutout | sidelocks | tongue_out | 
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------|:----------|:------------------------------------|:-------|:-------------|:--------|:--------------------|:--------|:---------------|:---------------|:---------|:--------|:--------------|:-------------|:-----------------|:-----------------|:--------------|:----------------|:--------------|:-------------|:--------------|:----------|:-----|:-------------|:-----------|:------------------|:-------------------|:----------------|:-----------------|:---------------|:--------------------|:-------------|:-------------|:--------------|:--------------|:-----------------|:--------------|:-----------|:----------------------|:-----------|:--------------------|:----------------|:-------------|:--------------|:-----------------|:---------------|:--------------|:--------------|:-----------|:-------------|:----------------------|:------------|:-----------|:----------------|:--------------|:---------|:-------------------|:------------|:-----------------|:--------|:--------|:--------------------|:-------------|:-----------|:-------------|:-------------|:-----------|:-------------|:---------|:-----------|:----------|:-----------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:---------|:----------------|:--------------------|:--------------|:------------|:-----------|:------------|:---------------------|:--------|:-------------|:----------|:----------|:--------------|:-------------|:-------------|:---------------|:------|:-------------|:-------------|:-------|:--------|:--------|:--------|:--------|:------|:-----------|:---------------|:---------------|:--------------|:-------------|:------------|:--------------|:--------------|:--------------|:-------------|:------------------|:------------|:-------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | X | X | X | | | | | X | X | | | X | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | X | X | X | | | | | | X | | 
| X | | | X | X | | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | X | | X | X | | | | | | X | | | X | X | X | X | X | | | X | X | X | X | | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 17 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | X | | X | X | | | | | X | | | | | | | X | | | | | X | | | X | X | | | X | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 10 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 22 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | X | | X | X | | | | | | | | | | | | | X | | | X | | | | X | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 15 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | X | | X | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | X | | X | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | | | X | X | X | X | | | | X | X | | | | | | | | X | | X | | X | | | X | | | | | | | | | | | X | | | | | X | X | | X | | | X | X | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 10 | 7 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | | | | X | | X | X | | | | | X | | | | | | | X | | X | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | 
![](samples/11/clu11-sample4.png) | X | | X | | X | | X | X | | | | | X | | | | | | | | | | | | X | X | X | X | | | | X | | | | | X | | | | | | | | | | | | | | X | | | | | X | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 12 | 5 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | X | | | | X | | X | X | X | | X | | X | | | | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 13 | 9 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | X | | | | X | | X | X | X | | | | | | | | | | | | | | | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | X | | | | X | | X | | X | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 14 | 6 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | X | | X | | X | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 15 | 9 | ![](samples/15/clu15-sample0.png) | ![](samples/15/clu15-sample1.png) | ![](samples/15/clu15-sample2.png) | ![](samples/15/clu15-sample3.png) | ![](samples/15/clu15-sample4.png) | X | | | | X | | X | X | | | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | 16 | 10 | ![](samples/16/clu16-sample0.png) | ![](samples/16/clu16-sample1.png) | ![](samples/16/clu16-sample2.png) | ![](samples/16/clu16-sample3.png) | ![](samples/16/clu16-sample4.png) | X | | | | X | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
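For the IMG+TXT packages listed above, a minimal consumption sketch might look like this; it assumes (as the package type suggests) that each image ships with a same-stem `.txt` tag file.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-stem .txt tag file (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, 'r', encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```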
CyberHarem/uehara_ayumu_loveliveschoolidolfestivalallstars
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T17:57:34+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-17T02:41:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of uehara\_ayumu/上原歩夢/우에하라아유무 (Love Live! School Idol Festival ALL STARS) ================================================================================= This is the dataset of uehara\_ayumu/上原歩夢/우에하라아유무 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags. The core tags of this character are 'bangs, medium\_hair, yellow\_eyes, hair\_bun, hair\_ornament, single\_side\_bun, braid, blunt\_bangs, pink\_hair, orange\_hair, bow, swept\_bangs, hairclip, braided\_bun, half\_updo', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d620cfd310e96788ecb92d804bdb5a18af9b2208
# Dataset Card for "giga_speech_all_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fmagot01/giga_speech_all_preprocessed
[ "region:us" ]
2023-09-25T18:13:12+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "People and Blogs", "1": "Business", "2": "Nonprofits and Activism", "3": "Crime", "4": "History", "5": "Pets and Animals", "6": "News and Politics", "7": "Travel and Events", "8": "Kids and Family", "9": "Leisure", "10": "N/A", "11": "Comedy", "12": "News and Politics", "13": "Sports", "14": "Arts", "15": "Science and Technology", "16": "Autos and Vehicles", "17": "Science and Technology", "18": "People and Blogs", "19": "Music", "20": "Society and Culture", "21": "Education", "22": "Howto and Style", "23": "Film and Animation", "24": "Gaming", "25": "Entertainment", "26": "Travel and Events", "27": "Health and Fitness", "28": "audiobook"}}}}, {"name": "input_values", "sequence": "float32"}, {"name": "attention_mask", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 4132688160, "num_examples": 8450}, {"name": "test", "num_bytes": 477405104, "num_examples": 939}], "download_size": 2122088027, "dataset_size": 4610093264}}
2023-09-25T19:20:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "giga_speech_all_preprocessed" More Information needed
[ "# Dataset Card for \"giga_speech_all_preprocessed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"giga_speech_all_preprocessed\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"giga_speech_all_preprocessed\"\n\nMore Information needed" ]
e46e6fd45de884f47a755e9e2cc8d825e7773eb5
# Dataset Card for "logits-mt-it-en-128" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
amitness/logits-mt-it-en-128
[ "region:us" ]
2023-09-25T18:22:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 184467361976, "num_examples": 40721350}, {"name": "test", "num_bytes": 32556394204, "num_examples": 7186121}], "download_size": 0, "dataset_size": 217023756180}}
2023-09-27T09:27:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "logits-mt-it-en-128" More Information needed
[ "# Dataset Card for \"logits-mt-it-en-128\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"logits-mt-it-en-128\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-it-en-128\"\n\nMore Information needed" ]
5a186bdbd489d477706a2522289420ad3049e0ca
This dataset is curated from UniProt. The test set was created by selecting entire protein families at random and holding them out. The train/test split is approximately 80/20. All binding-site and active-site annotations were merged. All sequences longer than 1000 amino acids were split into non-overlapping chunks of at most 1000 residues.
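A minimal sketch of the chunking step described above (non-overlapping windows of at most 1000 residues):

```python
def chunk_sequence(seq: str, max_len: int = 1000) -> list[str]:
    """Split a sequence into non-overlapping chunks of at most max_len residues."""
    return [seq[i:i + max_len] for i in range(0, len(seq), max_len)]

# a 2350-residue sequence becomes chunks of 1000, 1000 and 350 residues
print([len(c) for c in chunk_sequence("M" * 2350)])  # [1000, 1000, 350]
```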
AmelieSchreiber/1111K_binding_sites
[ "license:mit", "region:us" ]
2023-09-25T18:51:08+00:00
{"license": "mit"}
2023-10-01T00:23:16+00:00
[]
[]
TAGS #license-mit #region-us
This dataset is curated from UniProt. The test set was created by selecting entire families of proteins to separate out at random. The train/test split is approximately 80/20. All binding site and active site annotations were merged. All sequences longer than 1000 amino acids were split into non-overlapping chunks of 1000 residues or less.
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
81306fbdbe26ec9eeed0f98e7467de6aa346edf2
# Dataset of zhong_lanzhu/鐘嵐珠/중란주 (Love Live! Nijigasaki Gakuen School Idol Doukoukai)

This is the dataset of zhong_lanzhu/鐘嵐珠/중란주 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags.

The core tags of this character are `long_hair, pink_hair, blue_eyes, ahoge, breasts, mole, bangs, mole_under_eye, sidelocks, hair_bun, double_bun`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download | Type       | Description |
|:-----------------|-------:|:-----------|:---------|:-----------|:------------|
| raw              |    500 | 825.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              |    500 | 396.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  |   1262 | 879.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             |    500 | 696.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 |   1262 | 1.37 GiB   | [Download](https://huggingface.co/datasets/CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 38 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, black_gloves, chinese_clothes, smile, cleavage_cutout, bun_cover, dress, upper_body, blush | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, belt, earrings, hair_ornament, solo, cleavage_cutout, epaulettes, looking_at_viewer, red_dress, birthday, black_gloves, chinese_clothes, jacket, smile, upper_body | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, cleavage, collarbone, looking_at_viewer, solo, bare_shoulders, blush, large_breasts, two_side_up, white_background, closed_mouth, simple_background, smile, upper_body, black_camisole, medium_breasts | | 3 | 18 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, jacket, nijigasaki_academy_school_uniform, solo, looking_at_viewer, smile, two_side_up, skirt, hand_on_hip, white_background, blush | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, large_breasts, looking_at_viewer, nijigasaki_academy_school_uniform, solo, upper_body, white_background, blush, red_jacket, smile | | 5 | 15 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, solo, large_breasts, looking_at_viewer, blush, smile, navel, collarbone, cleavage, side-tie_bikini_bottom, blue_sky, red_bikini, cloud, simple_background, two_side_up, criss-cross_halter, day, ocean, outdoors, white_background | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, large_breasts, nipples, 1boy, completely_nude, hetero, penis, solo_focus, sweat, collarbone, mosaic_censoring, looking_at_viewer, open_mouth, paizuri, smile, two_side_up, breasts_squeezed_together, hair_rings, motion_lines, pov_crotch, swept_bangs, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | black_gloves | chinese_clothes | smile | cleavage_cutout | bun_cover | dress | upper_body | blush | belt | earrings | hair_ornament | epaulettes | red_dress | birthday | jacket | cleavage | collarbone | bare_shoulders | large_breasts | two_side_up | white_background | closed_mouth | simple_background | black_camisole | medium_breasts 
| nijigasaki_academy_school_uniform | skirt | hand_on_hip | red_jacket | navel | side-tie_bikini_bottom | blue_sky | red_bikini | cloud | criss-cross_halter | day | ocean | outdoors | nipples | 1boy | completely_nude | hetero | penis | solo_focus | sweat | mosaic_censoring | open_mouth | paizuri | breasts_squeezed_together | hair_rings | motion_lines | pov_crotch | swept_bangs | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:---------------|:------------------|:--------|:------------------|:------------|:--------|:-------------|:--------|:-------|:-----------|:----------------|:-------------|:------------|:-----------|:---------|:-----------|:-------------|:-----------------|:----------------|:--------------|:-------------------|:---------------|:--------------------|:-----------------|:-----------------|:------------------------------------|:--------|:--------------|:-------------|:--------|:-------------------------|:-----------|:-------------|:--------|:---------------------|:------|:--------|:-----------|:----------|:-------|:------------------|:---------|:--------|:-------------|:--------|:-------------------|:-------------|:----------|:----------------------------|:-------------|:---------------|:-------------|:--------------| | 0 | 38 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | X | | | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 18 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | | X | | | | | X | | | | | | | X | | | | | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | | X | | | | X | X | | | | | | | | | | | X | | X | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 15 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | | | X | | | | | X | | | | | | | | X | X | | X | X | X | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | | X 
| | | | X | X | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
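Building on the waifuc snippet above, a small sketch for pulling out one of the clustered outfits (e.g. `chinese_clothes` from cluster 0); whether `item.meta['tags']` is a list of tag strings or a tag-to-score dict, the membership test below works either way.

```python
from waifuc.source import LocalSource

# assumes dataset-raw.zip has been extracted to 'dataset_dir' as shown above
source = LocalSource('dataset_dir')

# keep only images carrying the outfit tag from cluster 0
wanted_tag = 'chinese_clothes'
for item in source:
    # 'in' checks list membership or dict keys, covering both tag layouts
    if wanted_tag in item.meta['tags']:
        print(item.meta['filename'])
```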
CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T18:54:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-17T02:48:27+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of zhong\_lanzhu/鐘嵐珠/중란주 (Love Live! Nijigasaki Gakuen School Idol Doukoukai) ===================================================================================== This is the dataset of zhong\_lanzhu/鐘嵐珠/중란주 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags. The core tags of this character are 'long\_hair, pink\_hair, blue\_eyes, ahoge, breasts, mole, bangs, mole\_under\_eye, sidelocks, hair\_bun, double\_bun', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
a2671a338a65476ba0f45d1c104a759e8364c87f
# Dataset Card for "heroicons" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yirenlu/heroicons
[ "region:us" ]
2023-09-25T18:55:57+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4277197.0, "num_examples": 292}], "download_size": 4220955, "dataset_size": 4277197.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T22:11:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "heroicons" More Information needed
[ "# Dataset Card for \"heroicons\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"heroicons\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"heroicons\"\n\nMore Information needed" ]
9af827f6c87d33efab3971d73d89bc6f97e849ef
Example Dataset for https://colab.research.google.com/drive/11DwtftycpDSgp3Z1vnV-Cy68zvkGZL4K?usp=sharing
clam004/law_instruct_4k
[ "license:apache-2.0", "region:us" ]
2023-09-25T19:16:45+00:00
{"license": "apache-2.0"}
2023-09-25T19:24:33+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Example Dataset for URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
884a280b7d4d852d359ce4fe61971c2bcc9b589b
# Dataset Card for "amazon_polarity_embeddings_random1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atmallen/amazon_polarity_embeddings_random1
[ "region:us" ]
2023-09-25T19:28:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3902806188, "dataset_size": 7168305144}}
2023-09-26T04:23:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_polarity_embeddings_random1" More Information needed
[ "# Dataset Card for \"amazon_polarity_embeddings_random1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_polarity_embeddings_random1\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings_random1\"\n\nMore Information needed" ]
730f20f475c46309eb4a8d38f1303f17221ac68f
# Dataset of emma_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS)

This is the dataset of emma_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.

The core tags of this character are `bangs, freckles, brown_hair, breasts, long_hair, braid, twin_braids, blue_eyes, large_breasts, twintails, red_hair, hair_ornament`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download | Type       | Description |
|:-----------------|-------:|:-----------|:---------|:-----------|:------------|
| raw              |    500 | 763.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              |    500 | 372.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  |   1277 | 855.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             |    500 | 650.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 |   1277 | 1.29 GiB   | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/emma_verde_loveliveschoolidolfestivalallstars',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_skirt, looking_at_viewer, solo, white_shirt, long_sleeves, blush, open_mouth, collared_shirt, :d, long_skirt, white_background, ribbon | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, plaid_skirt, short_sleeves, simple_background, solo, summer_uniform, white_background, collared_shirt, smile, white_shirt, blush, neck_ribbon, pleated_skirt, hair_between_eyes, blue_shirt, open_mouth, shirt_tucked_in | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, collared_shirt, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, short_sleeves, smile, solo, summer_uniform, white_shirt, green_background, looking_at_viewer, neck_ribbon, blush, hair_between_eyes, low_twintails, shirt_tucked_in, closed_mouth, open_mouth | | 3 | 31 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, nijigasaki_academy_school_uniform, solo, looking_at_viewer, black_jacket, white_shirt, long_sleeves, smile, winter_uniform, blush, neck_ribbon, blazer, collared_shirt, plaid_skirt, white_skirt, open_mouth, pleated_skirt, green_ribbon, white_background, simple_background | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, looking_at_viewer, smile, solo, dirndl, hair_flower, collarbone, hairband, dress, open_mouth, blush, outdoors, sky | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, dated, english_text, hair_flower, happy_birthday, looking_at_viewer, solo, blush, smile, twin_drills, character_name, green_dress, hat, low_twintails, sky, upper_body | | 6 | 17 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, hair_flower, solo, smile, looking_at_viewer, bow, open_mouth, short_sleeves, green_dress, twin_drills, white_dress, blush, hat, low_twintails | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blue_sky, day, open_mouth, smile, solo, cleavage, cloud, looking_at_viewer, 
ocean, outdoors, blush, collarbone, green_bikini, navel, upper_body, beach, frilled_bikini, hair_between_eyes, hair_flower, jewelry | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, blush, open_mouth, solo, white_apron, black_dress, enmaided, frills, looking_at_viewer, maid_apron, maid_headdress, :d, aqua_eyes, simple_background, white_background, low_twintails, puffy_short_sleeves, thighhighs, upper_teeth_only | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, demon_horns, heart, looking_at_viewer, solo, earrings, sleeveless_dress, black_dress, black_gloves, blush, short_hair, smile, bare_shoulders, cleavage, frills, purple_dress, sitting, aqua_eyes, birthday, demon_tail, demon_wings, elbow_gloves, fake_horns, green_eyes, hairband, petals, see-through, tattoo | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_skirt | looking_at_viewer | solo | white_shirt | long_sleeves | blush | open_mouth | collared_shirt | :d | long_skirt | white_background | ribbon | nijigasaki_academy_school_uniform | plaid_skirt | short_sleeves | simple_background | summer_uniform | smile | neck_ribbon | pleated_skirt | hair_between_eyes | blue_shirt | shirt_tucked_in | green_background | low_twintails | closed_mouth | black_jacket | winter_uniform | blazer | white_skirt | green_ribbon | dirndl | hair_flower | collarbone | hairband | dress | outdoors | sky | dated | english_text | happy_birthday | twin_drills | character_name | green_dress | hat | upper_body | bow | white_dress | blue_sky | day | cleavage | cloud | ocean | green_bikini | navel | beach | frilled_bikini | jewelry | white_apron | black_dress | enmaided | frills | maid_apron | maid_headdress | aqua_eyes | puffy_short_sleeves | thighhighs | upper_teeth_only | demon_horns | heart | earrings | sleeveless_dress | black_gloves | short_hair | bare_shoulders | purple_dress | sitting | birthday | demon_tail | demon_wings | elbow_gloves | fake_horns | green_eyes | petals | see-through | tattoo | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------------------|:-------|:--------------|:---------------|:--------|:-------------|:-----------------|:-----|:-------------|:-------------------|:---------|:------------------------------------|:--------------|:----------------|:--------------------|:-----------------|:--------|:--------------|:----------------|:--------------------|:-------------|:------------------|:-------------------|:----------------|:---------------|:---------------|:-----------------|:---------|:--------------|:---------------|:---------|:--------------|:-------------|:-----------|:--------|:-----------|:------|:--------|:---------------|:-----------------|:--------------|:-----------------|:--------------|:------|:-------------|:------|:--------------|:-----------|:------|:-----------|:--------|:--------|:---------------|:--------|:--------|:-----------------|:----------|:--------------|:--------------|:-----------|:---------|:-------------|:-----------------|:------------|:----------------------|:-------------|:-------------------|:--------------|:--------|:-----------|:-------------------|:---------------|:-------------|:-----------------|:---------------|:----------|:-----------|:-------------|:--------------|:---------------|:-------------|:-------------|:---------|:--------------|:---------| | 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | X | | X | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | | X | X | X | | | | | X | X | X | | X | X | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 31 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | X | X | X | X | X | | | X | | X | X | | X | | X | X | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | 
![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | | | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 17 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | X | | | X | X | | | | | | | | X | | | X | | | | | | | X | | | | | | | | X | | | | | | | | | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | X | | | X | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | X | X | | | X | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | X | | | X | X | | X | | X | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | X | X | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | X | | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/emma_verde_loveliveschoolidolfestivalallstars
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T19:30:33+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-17T02:28:45+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of emma\_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS) =========================================================================== This is the dataset of emma\_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags. The core tags of this character are 'bangs, freckles, brown\_hair, breasts, long\_hair, braid, twin\_braids, blue\_eyes, large\_breasts, twintails, red\_hair, hair\_ornament', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
f6cf4289eaafa71d9bd26942cf7c9dca834c03a6
# Dataset Card for "SD-CLIP-alignment-1300" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Doub7e/SD-CLIP-alignment-1300
[ "region:us" ]
2023-09-25T19:39:58+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "clip_pred", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 260505634.0, "num_examples": 550}], "download_size": 260509726, "dataset_size": 260505634.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-25T21:53:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SD-CLIP-alignment-1300" More Information needed
[ "# Dataset Card for \"SD-CLIP-alignment-1300\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SD-CLIP-alignment-1300\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SD-CLIP-alignment-1300\"\n\nMore Information needed" ]
e88e26dc19171b816b7bd11ab02400fa9d472b02
# Dataset of mitake_ran/美竹蘭 (BanG Dream!) This is the dataset of mitake_ran/美竹蘭 (BanG Dream!), containing 470 images and their tags. The core tags of this character are `black_hair, multicolored_hair, red_hair, streaked_hair, short_hair, bangs, purple_eyes, bob_cut, red_eyes, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 470 | 515.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mitake_ran_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 470 | 302.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mitake_ran_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 992 | 612.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mitake_ran_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 470 | 455.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mitake_ran_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 992 | 870.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mitake_ran_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mitake_ran_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_jacket, solo, upper_body, black_choker, hood_down, hooded_jacket, looking_at_viewer, chain_necklace, simple_background, smile, white_background, blush, hoop_earrings, long_sleeves, off_shoulder, open_mouth, pendant, red_shirt | | 1 | 20 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_choker, black_jacket, solo, looking_at_viewer, long_sleeves, v-shaped_eyebrows, electric_guitar, hood_down, hooded_jacket, open_jacket, open_mouth, off_shoulder, black_shorts, blush, chain_necklace, cleavage, plectrum, thigh_strap, collarbone, holding_instrument, short_shorts, microphone, earrings, red_shirt, smile, upper_body, black_bra, medium_breasts, music | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, black_shirt, collarbone, short_sleeves, v-shaped_eyebrows, black_choker, looking_at_viewer, open_mouth, padlock, plectrum, music, short_shorts, bracelet, denim_shorts, gibson_les_paul, holding_instrument, holding_microphone, microphone_stand, off-shoulder_shirt, upper_body, white_background | | 3 | 22 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, school_uniform, solo, collared_shirt, looking_at_viewer, striped_necktie, upper_body, white_shirt, blush, blazer, grey_jacket, long_sleeves, green_necktie | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | black_collar, black_shirt, looking_at_viewer, off-shoulder_shirt, padlock, short_sleeves, simple_background, 1girl, print_shirt, solo, studded_collar, upper_body, white_background, collarbone, frown, studded_bracelet, v-shaped_eyebrows, black_choker, blush, closed_mouth, lightning_bolt_symbol, studded_belt | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | long_sleeves, plaid_skirt, pleated_skirt, school_uniform, blazer, collared_shirt, striped_necktie, white_shirt, 2girls, blush, grey_jacket, solo_focus, 3girls, grey_hair, kneehighs, looking_at_viewer, miniskirt, sitting | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 
black_footwear, black_choker, black_gloves, blush, gibson_les_paul, holding_instrument, jacket, knee_boots, short_sleeves, shorts, thigh_strap, v-shaped_eyebrows, buttons, earrings, frown, lace-up_boots, long_hair, looking_at_viewer, plaid_shirt, plectrum, thighhighs, 2girls, 3girls, chain_necklace, half_gloves, open_mouth, red_shirt, skirt, white_background | | 7 | 14 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | serafuku, black_neckerchief, long_sleeves, red_shirt, black_headwear, hat, star_(symbol), black_gloves, looking_at_viewer, miniskirt, 1girl, plaid_skirt, solo, pleated_skirt, black_choker, black_sailor_collar, double-breasted, half_gloves, kneehighs, sitting, badge, black_skirt, blush, closed_mouth, collarbone, earrings, holding, red_socks, simple_background, white_background | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, collarbone, looking_at_viewer, navel, nipples, open_mouth, solo, blush, completely_nude, pussy, uncensored, large_breasts, simple_background, white_background, anus, barefoot, cleft_of_venus, medium_breasts, spread_legs, two-tone_hair | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | black_jacket, leather_jacket, ribbed_sweater, 1girl, long_sleeves, open_jacket, white_background, looking_at_viewer, pantyhose, black_belt, grey_sweater, short_shorts, simple_background, solo, turtleneck_sweater, blush, denim_shorts, smile, white_sweater | | 10 | 9 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, hetero, nipples, solo_focus, blush, 1boy, completely_nude, navel, penis, pussy, sex, looking_at_viewer, medium_breasts, open_mouth, censored, vaginal, collarbone, on_back, saliva, spread_legs, sweat | | 11 | 5 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, looking_at_viewer, solo, blush, simple_background, white_background, bare_shoulders, choker, earrings, pink_rose, white_gloves, bracelet, cleavage, collarbone, corset, elbow_gloves, frills, hair_flower, hat, purple_bow, ribbon, smile, star_hair_ornament, strapless_dress, veil, wedding_dress, white_dress | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_jacket | solo | upper_body | black_choker | hood_down | hooded_jacket | looking_at_viewer | chain_necklace | simple_background | smile | white_background | blush | hoop_earrings | long_sleeves | off_shoulder | open_mouth | pendant | red_shirt | v-shaped_eyebrows | electric_guitar | open_jacket | black_shorts | cleavage | plectrum | thigh_strap | collarbone | holding_instrument | short_shorts | microphone | earrings | black_bra | medium_breasts | music | black_shirt | short_sleeves | padlock | bracelet | denim_shorts | gibson_les_paul | holding_microphone | microphone_stand | off-shoulder_shirt | school_uniform | collared_shirt | striped_necktie | white_shirt | blazer | grey_jacket | green_necktie | black_collar | print_shirt | studded_collar | frown | 
studded_bracelet | closed_mouth | lightning_bolt_symbol | studded_belt | plaid_skirt | pleated_skirt | 2girls | solo_focus | 3girls | grey_hair | kneehighs | miniskirt | sitting | black_footwear | black_gloves | jacket | knee_boots | shorts | buttons | lace-up_boots | long_hair | plaid_shirt | thighhighs | half_gloves | skirt | serafuku | black_neckerchief | black_headwear | hat | star_(symbol) | black_sailor_collar | double-breasted | badge | black_skirt | holding | red_socks | navel | nipples | completely_nude | pussy | uncensored | large_breasts | anus | barefoot | cleft_of_venus | spread_legs | two-tone_hair | leather_jacket | ribbed_sweater | pantyhose | black_belt | grey_sweater | turtleneck_sweater | white_sweater | hetero | 1boy | penis | sex | censored | vaginal | on_back | saliva | sweat | bare_shoulders | choker | pink_rose | white_gloves | corset | elbow_gloves | frills | hair_flower | purple_bow | ribbon | star_hair_ornament | strapless_dress | veil | wedding_dress | white_dress | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:-------|:-------------|:---------------|:------------|:----------------|:--------------------|:-----------------|:--------------------|:--------|:-------------------|:--------|:----------------|:---------------|:---------------|:-------------|:----------|:------------|:--------------------|:------------------|:--------------|:---------------|:-----------|:-----------|:--------------|:-------------|:---------------------|:---------------|:-------------|:-----------|:------------|:-----------------|:--------|:--------------|:----------------|:----------|:-----------|:---------------|:------------------|:---------------------|:-------------------|:---------------------|:-----------------|:-----------------|:------------------|:--------------|:---------|:--------------|:----------------|:---------------|:--------------|:-----------------|:--------|:-------------------|:---------------|:------------------------|:---------------|:--------------|:----------------|:---------|:-------------|:---------|:------------|:------------|:------------|:----------|:-----------------|:---------------|:---------|:-------------|:---------|:----------|:----------------|:------------|:--------------|:-------------|:--------------|:--------|:-----------|:--------------------|:-----------------|:------|:----------------|:----------------------|:------------------|:--------|:--------------|:----------|:------------|:--------|:----------|:------------------|:--------|:-------------|:----------------|:-------|:-----------|:-----------------|:--------------|:----------------|:-----------------|:-----------------|:------------|:-------------|:---------------|:---------------------|:----------------|:---------|:-------|:--------|:------|:-----------|:----------|:----------|:---------|:--------|:-----------------|:---------|:------------|:---------------|:---------|:---------------|:---------|:--------------|:-------------|:---------|:---------------------|:------------------|:-------|:----------------|:--------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 20 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | | X | | X | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 9 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | | | X | | | | X | | | | | X | | | X | | | | | X | | X | X | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 22 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | | | X | | X | | X | X | | | | | | | X | | | | | | | X | | | | | | | | X | X | X | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | | | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | | | X | | | X | X | | | X | X | | | | X | | X | X | | | | | X | X | | X | | | X | | | | | X | | | | X | | | | | | | | | | | | | | X | | | | | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 14 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | | X | | | X | | X | | X | X | | X | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | | | X | X | X | | X | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | | | | X | | X | | X | X | | | | X | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | X | | | | | X | | X | X | X | X | | X | | | | | | | X | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 10 | 9 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | | | | | | | X | | | | | X | | | | X | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 11 | 5 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | X | | X | | | | | X | | X | X | X | X | | | | | | | | | | | X | | | X | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/mitake_ran_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T20:02:32+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:04:25+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mitake\_ran/美竹蘭 (BanG Dream!) ======================================== This is the dataset of mitake\_ran/美竹蘭 (BanG Dream!), containing 470 images and their tags. The core tags of this character are 'black\_hair, multicolored\_hair, red\_hair, streaked\_hair, short\_hair, bangs, purple\_eyes, bob\_cut, red\_eyes, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
ac979181487f225ae8e51acba506aad1671e003b
# Dataset Card for "olavo-article-17k-llama2-chat-dataset-text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hadnet/olavo-article-17k-llama2-chat-dataset-text
[ "region:us" ]
2023-09-25T20:35:14+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9693532, "num_examples": 17361}], "download_size": 5505395, "dataset_size": 9693532}}
2023-09-25T20:35:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "olavo-article-17k-llama2-chat-dataset-text" More Information needed
[ "# Dataset Card for \"olavo-article-17k-llama2-chat-dataset-text\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"olavo-article-17k-llama2-chat-dataset-text\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"olavo-article-17k-llama2-chat-dataset-text\"\n\nMore Information needed" ]
4888a6cc4cae51cfd48251fa21414941d7f90fde
# Dataset Card for "three_styles_prompted_all_512x512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kewu93/three_styles_prompted_all_512x512
[ "region:us" ]
2023-09-25T20:38:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "style_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278199128.832, "num_examples": 11312}, {"name": "val", "num_bytes": 68747668.084, "num_examples": 2828}], "download_size": 341189285, "dataset_size": 346946796.916}}
2023-09-25T20:39:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "three_styles_prompted_all_512x512" More Information needed
[ "# Dataset Card for \"three_styles_prompted_all_512x512\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"three_styles_prompted_all_512x512\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_all_512x512\"\n\nMore Information needed" ]
341a967bab10607b1f728df72760c8f70fc9cd25
# Dataset of asaka_karin/朝香果林/아사카카린 (Love Live! Nijigasaki Gakuen School Idol Doukoukai) This is the dataset of asaka_karin/朝香果林/아사카카린 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags. The core tags of this character are `blue_eyes, bangs, breasts, medium_hair, blue_hair, black_hair, large_breasts, mole, multiple_moles, blunt_bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 847.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 396.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1276 | 899.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 708.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1276 | 1.39 GiB | [Download](https://huggingface.co/datasets/CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, short_sleeves, solo, summer_uniform, collared_shirt, smile, plaid_skirt, simple_background, sweater_vest, pleated_skirt, white_background, blush, white_shirt, closed_mouth, neck_ribbon, hand_up | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, collared_shirt, looking_at_viewer, nijigasaki_academy_school_uniform, short_sleeves, solo, upper_body, neck_ribbon, open_mouth, simple_background, summer_uniform, white_shirt, blue_background, green_ribbon, hand_up, sweater_vest, :d, aqua_ribbon, black_vest, blue_vest, medium_breasts | | 2 | 17 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, solo, white_background, white_skirt, blush, simple_background, collared_shirt, smile, plaid_skirt, black_pantyhose, long_sleeves, pleated_skirt, white_shirt, black_jacket, ribbon, closed_mouth, open_mouth | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, solo, looking_at_viewer, open_mouth, smile, blush, cleavage, collarbone, mole_on_body, off_shoulder, sweater, medium_breasts, shirt, upper_body, bare_shoulders, brown_shorts, simple_background, white_background | | 4 | 11 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, looking_at_viewer, solo, blush, cleavage, outdoors, smile, blue_bikini, cloud, navel, ocean, blue_sky, collarbone, day, beach, medium_breasts, sitting | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, black_gloves, cape, looking_at_viewer, smile, solo, star_hair_ornament, wrist_cuffs, hat_feather, upper_body, blush, white_background, black_necktie, simple_background | | 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, hat, looking_at_viewer, solo, black_gloves, garter_straps, hair_ornament, handcuffs, necktie, skirt, star_(symbol), blush, cleavage, smile, blue_cape, short_hair, wrist_cuffs, fishnet_thighhighs, fur_trim | | 7 
| 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blush, fox_ears, fox_tail, looking_at_viewer, smile, solo, fox_girl, fox_shadow_puppet, fake_animal_ears, simple_background, blue_skirt, fake_tail, fur_trim, hairclip, long_sleeves, medium_breasts, necktie, one_eye_closed, open_mouth, upper_body, white_background | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, looking_at_viewer, solo, hair_flower, blush, bracelet, dress, necklace, smile, cleavage, detached_sleeves, short_hair, upper_body, collarbone, sidelocks, braid, character_name, petals | | 9 | 22 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, looking_at_viewer, solo, necklace, smile, blush, mole_on_body, cleavage, lace-trimmed_gloves, bare_shoulders, collarbone, elbow_gloves, hair_ornament, hat_feather, black_gloves, miniskirt, blue_flower, blue_headwear, midriff, navel, blue_skirt, thighhighs, upper_body, dress, garter_straps, hat_flower | | 10 | 12 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, crop_top, hat_feather, looking_at_viewer, midriff, mole_on_body, solo, blue_bow, blue_shorts, blue_tank_top, bridal_gauntlets, mini_hat, navel, smile, top_hat, wrist_cuffs, collarbone, cowboy_shot, garter_straps, short_shorts, single_bare_shoulder, swept_bangs, tilted_headwear, blush, black_gloves, blue_headwear, cleavage, thighhighs, elbow_gloves, bare_shoulders, closed_mouth, belt, sidelocks | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, blush, looking_at_viewer, solo, panda, smile, stuffed_animal, off-shoulder_shirt, heart, object_hug, sitting | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | nijigasaki_academy_school_uniform | short_sleeves | solo | summer_uniform | collared_shirt | smile | plaid_skirt | simple_background | sweater_vest | pleated_skirt | white_background | blush | white_shirt | closed_mouth | neck_ribbon | hand_up | upper_body | open_mouth | blue_background | green_ribbon | :d | aqua_ribbon | black_vest | blue_vest | medium_breasts | white_skirt | black_pantyhose | long_sleeves | black_jacket | ribbon | cleavage | collarbone | mole_on_body | off_shoulder | sweater | shirt | bare_shoulders | brown_shorts | outdoors | blue_bikini | cloud | navel | ocean | blue_sky | day | beach | sitting | black_gloves | cape | star_hair_ornament | wrist_cuffs | hat_feather | black_necktie | hat | garter_straps | hair_ornament | handcuffs | necktie | skirt | star_(symbol) | blue_cape | short_hair | fishnet_thighhighs | fur_trim | fox_ears | fox_tail | fox_girl | fox_shadow_puppet | fake_animal_ears | blue_skirt | fake_tail | hairclip | one_eye_closed | hair_flower | bracelet | dress | necklace | detached_sleeves | sidelocks | braid | character_name | petals | lace-trimmed_gloves | elbow_gloves | miniskirt | blue_flower | blue_headwear | midriff | thighhighs | hat_flower | crop_top | blue_bow | 
blue_shorts | blue_tank_top | bridal_gauntlets | mini_hat | top_hat | cowboy_shot | short_shorts | single_bare_shoulder | swept_bangs | tilted_headwear | belt | panda | stuffed_animal | off-shoulder_shirt | heart | object_hug | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------------|:------------------------------------|:----------------|:-------|:-----------------|:-----------------|:--------|:--------------|:--------------------|:---------------|:----------------|:-------------------|:--------|:--------------|:---------------|:--------------|:----------|:-------------|:-------------|:------------------|:---------------|:-----|:--------------|:-------------|:------------|:-----------------|:--------------|:------------------|:---------------|:---------------|:---------|:-----------|:-------------|:---------------|:---------------|:----------|:--------|:-----------------|:---------------|:-----------|:--------------|:--------|:--------|:--------|:-----------|:------|:--------|:----------|:---------------|:-------|:---------------------|:--------------|:--------------|:----------------|:------|:----------------|:----------------|:------------|:----------|:--------|:----------------|:------------|:-------------|:---------------------|:-----------|:-----------|:-----------|:-----------|:--------------------|:-------------------|:-------------|:------------|:-----------|:-----------------|:--------------|:-----------|:--------|:-----------|:-------------------|:------------|:--------|:-----------------|:---------|:----------------------|:---------------|:------------|:--------------|:----------------|:----------|:-------------|:-------------|:-----------|:-----------|:--------------|:----------------|:-------------------|:-----------|:----------|:--------------|:---------------|:-----------------------|:--------------|:------------------|:-------|:--------|:-----------------|:---------------------|:--------|:-------------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | | X | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 17 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | X | X | X | X | | X | X | X | X | X | | | | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | 
![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | X | | | X | | X | | | X | X | | | | | X | X | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 11 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | X | | | | | | X | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | X | | | X | | X | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | | X | | | X | | X | | | X | X | | | | | X | X | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | | | X | | | X | | | | | | X | | | | | X | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 22 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | | | X | | | X | | | | | | X | | | | | X | | | | | | | | | | | | | | X | X | X | | | | X | | | | | X | | | | | | X | | | | X | | | X | X | | | | | | | | | | | | | | X | | | | | | X | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 10 | 12 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | X | | | X | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | X | X | X | | | | X | | | | | X | | | | | | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | 
![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | X | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X |
CyberHarem/asaka_karin_lovelivenijigasakihighschoolidolclub
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T20:48:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-17T02:34:06+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of asaka\_karin/朝香果林/아사카카린 (Love Live! Nijigasaki Gakuen School Idol Doukoukai) ======================================================================================= This is the dataset of asaka\_karin/朝香果林/아사카카린 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags. The core tags of this character are 'blue\_eyes, bangs, breasts, medium\_hair, blue\_hair, black\_hair, large\_breasts, mole, multiple\_moles, blunt\_bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e8d67d600d3be726def797a03fe70a89ded09339
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Tonic/image-classification-mistakes
[ "region:us" ]
2023-09-25T20:54:50+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2023-09-25T21:00:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 8, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
ad789c9d622bfbdeecdc90df2041b3ba458e0368
This is a dataset of short Chinese stories generated with GPT-3.5. It is inspired by the TinyStories dataset, but instead of millions of rows, I only generated a few thousand stories. The dataset was created as a learning exercise in using the GPT API to generate training data for a potential language model idea. I created these stories by first using ChatGPT to generate a list of male and female character names, a list of genres with one-sentence story themes, and a list of story starters (similar to "Once upon a time"). Later, I used the GPT-3.5 chat completion API to generate short stories given three constraints: genre, theme, and sentence starter. The stories were generated in batches of 3, so every 3 stories share the exact same parameters. --- license: cc-by-4.0 ---
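As a rough illustration, a generation loop along these lines might look like the sketch below. It assumes the legacy `openai` Python client (pre-1.0, current when this dataset was built); the `genres_and_themes` and `starters` lists are hypothetical stand-ins for the curated lists described above, not the actual ones used.

```python
import random

import openai  # legacy openai<1.0 client, current when this dataset was built

# Hypothetical stand-ins for the curated lists described above.
genres_and_themes = [("童话", "一只小猫学会了分享"), ("冒险", "两个孩子在森林里迷路")]
starters = ["从前,", "很久很久以前,"]

genre, theme = random.choice(genres_and_themes)
starter = random.choice(starters)

# n=3 requests three completions in a single call, so every 3 stories
# share the exact same genre/theme/starter parameters, as described above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    n=3,
    messages=[{
        "role": "user",
        "content": f'请写一个{genre}类型的短篇故事,主题是:{theme}。故事以"{starter}"开头。',
    }],
)
stories = [choice.message.content for choice in response.choices]
```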
joe-chiu/TinyChineseStories
[ "language:zh", "region:us" ]
2023-09-25T20:58:35+00:00
{"language": ["zh"]}
2023-09-25T22:19:08+00:00
[]
[ "zh" ]
TAGS #language-Chinese #region-us
This is a dataset of short Chinese stories generated with GPT-3.5. It is inspired by the TinyStories dataset, but instead of millions of rows, I only generated a few thousand stories. The dataset was created as a learning exercise in using the GPT API to generate training data for a potential language model idea. I created these stories by first using ChatGPT to generate a list of male and female character names, a list of genres with one-sentence story themes, and a list of story starters (similar to "Once upon a time"). Later, I used the GPT-3.5 chat completion API to generate short stories given three constraints: genre, theme, and sentence starter. The stories were generated in batches of 3, so every 3 stories share the exact same parameters. --- license: cc-by-4.0 ---
[]
[ "TAGS\n#language-Chinese #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#language-Chinese #region-us \n" ]
2877709d45fcc4a4fbb119d188acec97e9e1c9ec
# Dataset Card for "Grocery_chatbot_text_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nelson2424/Grocery_chatbot_text_v2
[ "region:us" ]
2023-09-25T21:11:20+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "items", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196348, "num_examples": 1070}], "download_size": 59003, "dataset_size": 196348}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-25T23:16:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Grocery_chatbot_text_v2" More Information needed
[ "# Dataset Card for \"Grocery_chatbot_text_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Grocery_chatbot_text_v2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Grocery_chatbot_text_v2\"\n\nMore Information needed" ]
589e9a025cc1e6d355f3588dd3626f6c717a164c
## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4. - Comprised of over 2,000 multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon) - Average turns per conversation is more than 10. (will measure this more accurately soon) - The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4): CamelAI/Physics CamelAI/Math CamelAI/Biology CamelAI/Chemistry A majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" - Most importantly, we narrowed down the ShareGPT dataset to strictly only GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 was a task that would've been much more arduous if it wasn't for the help of folks over at OpenChat, who annotated the necessary examples. During the curation process, there can be some relatively arduous steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there are folks over at NousResearch who helped expedite this process with little to no sacrifice in quality; a big thank you to J-Supha specifically for making these types of significant contributions. Along with J-Supha, some other people are worth mentioning; these are the folks who helped on long late-night calls to debug and/or get Puffin training on Llama-2 ASAP, all within 12 hours of Llama-2 being announced. - Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute! ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. If you have at least a bachelor's in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!
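A minimal sketch of the phrase-based cleaning step described above might look like this; the conversation structure and exact phrase list are assumptions for illustration, not the actual Puffin curation code.

```python
# Minimal sketch of the phrase-based cleaning step described above; the
# conversation structure and exact phrase list are assumptions, not the
# actual Puffin curation code.
BLOCKED_PHRASES = ["As an AI language model", "September 2021"]

def is_clean(conversation):
    """Return False if any turn contains an overt AI-moralizing marker."""
    return not any(
        phrase in turn["value"]
        for turn in conversation
        for phrase in BLOCKED_PHRASES
    )

conversations = [
    [{"from": "human", "value": "Hi"}, {"from": "gpt", "value": "Hello!"}],
    [{"from": "human", "value": "Hi"},
     {"from": "gpt", "value": "As an AI language model, I cannot say hello."}],
]
cleaned = [conv for conv in conversations if is_clean(conv)]  # keeps only the first
```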
clam004/puffin
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay", "region:us" ]
2023-09-25T21:24:47+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "question-answering", "text-generation"], "pretty_name": "Puffin", "tags": ["Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay"]}
2023-09-25T21:25:25+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us
## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4. - Comprised of over 2,000 multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon) - Average turns per conversation is more than 10. (will measure this more accurately soon) - The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4): CamelAI/Physics CamelAI/Math CamelAI/Biology CamelAI/Chemistry A majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" - Most importantly, we narrowed down the ShareGPT dataset to strictly only GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 was a task that would've been much more arduous if it wasn't for the help of folks over at OpenChat, who annotated the necessary examples. During the curation process, there can be some relatively arduous steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there are folks over at NousResearch who helped expedite this process with little to no sacrifice in quality; a big thank you to J-Supha specifically for making these types of significant contributions. Along with J-Supha, some other people are worth mentioning; these are the folks who helped on long late-night calls to debug and/or get Puffin training on Llama-2 ASAP, all within 12 hours of Llama-2 being announced. - Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute! ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. If you have at least a bachelor's in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!
[ "## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4.\n\n - Comprised of over 2,000 multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon)\n\n - Average turns per conversation is more than 10. (will measure this more accurately soon)\n\n - The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4):\n\nCamelAI/Physics\nCamelAI/Math\nCamelAI/Biology\nCamelAI/Chemistry\n\nA majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset.\n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"\n\n - Most importantly, we narrowed down the ShareGPT dataset to strictly only GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 was a task that would've been much more arduous if it wasn't for the help of folks over at OpenChat, whom annoteated the neccessary examples.\n\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions.\n\nAlong with J-Supha, some other people are worth mentioning, these are the folks that helped on long late night calls to help debug and/or get Puffin training on Llama-2 Asap, all within 12 hours of Llama-2 being announced.\n\n - Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute!", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. \n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n", "## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4.\n\n - Comprised of over 2,000 multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon)\n\n - Average turns per conversation is more than 10. (will measure this more accurately soon)\n\n - The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4):\n\nCamelAI/Physics\nCamelAI/Math\nCamelAI/Biology\nCamelAI/Chemistry\n\nA majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset.\n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"\n\n - Most importantly, we narrowed down the ShareGPT dataset to strictly only GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 was a task that would've been much more arduous if it wasn't for the help of folks over at OpenChat, whom annoteated the neccessary examples.\n\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions.\n\nAlong with J-Supha, some other people are worth mentioning, these are the folks that helped on long late night calls to help debug and/or get Puffin training on Llama-2 Asap, all within 12 hours of Llama-2 being announced.\n\n - Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute!", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. \n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!" ]
[ 87, 457, 118 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n" ]
0956ba9dc83c61d99c9544f012590753bf716679
# Dataset Card for "three_styles_prompted_all_512x512_excluded_training" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kewu93/three_styles_prompted_all_512x512_excluded_training
[ "region:us" ]
2023-09-25T21:29:57+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "style_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7284057.537128714, "num_examples": 300}, {"name": "val", "num_bytes": 4317601.0, "num_examples": 150}], "download_size": 12016133, "dataset_size": 11601658.537128713}}
2023-09-25T21:30:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "three_styles_prompted_all_512x512_excluded_training" More Information needed
[ "# Dataset Card for \"three_styles_prompted_all_512x512_excluded_training\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"three_styles_prompted_all_512x512_excluded_training\"\n\nMore Information needed" ]
[ 6, 34 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_all_512x512_excluded_training\"\n\nMore Information needed" ]
6c17fc48a3469ef78f995349f691ca23618a1ec8
# Dataset Card for "common_language_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AescF/common_language_preprocessed
[ "region:us" ]
2023-09-25T21:31:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Arabic", "1": "Basque", "2": "Breton", "3": "Catalan", "4": "Chinese_China", "5": "Chinese_Hongkong", "6": "Chinese_Taiwan", "7": "Chuvash", "8": "Czech", "9": "Dhivehi", "10": "Dutch", "11": "English", "12": "Esperanto", "13": "Estonian", "14": "French", "15": "Frisian", "16": "Georgian", "17": "German", "18": "Greek", "19": "Hakha_Chin", "20": "Indonesian", "21": "Interlingua", "22": "Italian", "23": "Japanese", "24": "Kabyle", "25": "Kinyarwanda", "26": "Kyrgyz", "27": "Latvian", "28": "Maltese", "29": "Mangolian", "30": "Persian", "31": "Polish", "32": "Portuguese", "33": "Romanian", "34": "Romansh_Sursilvan", "35": "Russian", "36": "Sakha", "37": "Slovenian", "38": "Spanish", "39": "Swedish", "40": "Tamil", "41": "Tatar", "42": "Turkish", "43": "Ukranian", "44": "Welsh"}}}}, {"name": "input_values", "sequence": "float32"}, {"name": "attention_mask", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 13848986619, "num_examples": 22194}, {"name": "validation", "num_bytes": 3461442109, "num_examples": 5888}, {"name": "test", "num_bytes": 3473659131, "num_examples": 5963}], "download_size": 0, "dataset_size": 20784087859}}
2023-09-25T22:40:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "common_language_preprocessed" More Information needed
[ "# Dataset Card for \"common_language_preprocessed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"common_language_preprocessed\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"common_language_preprocessed\"\n\nMore Information needed" ]
26701a2fb9f76bed4304b89cc081c861909145a7
# Dataset Card for "amazon_polarity_embeddings_random2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atmallen/amazon_polarity_embeddings_random2
[ "region:us" ]
2023-09-25T21:31:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3900873029, "dataset_size": 7168305144}}
2023-09-26T06:22:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_polarity_embeddings_random2" More Information needed
[ "# Dataset Card for \"amazon_polarity_embeddings_random2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_polarity_embeddings_random2\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings_random2\"\n\nMore Information needed" ]
dc8e4e3c767094640c065221a1f33c78663f9283
# LibriVox Catalog Archive Note: this archive does not include any audio files, but simply includes the catalog. ## What is LibriVox? LibriVox is a catalog of free and public domain audiobooks. [Learn more...](https://librivox.org/) Last updated: Sep 25, 2023
mrfakename/librivox-full-catalog-archive
[ "license:cc0-1.0", "region:us" ]
2023-09-25T21:36:16+00:00
{"license": "cc0-1.0"}
2023-09-25T21:43:02+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# LibriVox Catalog Archive Note: this archive does not include any audio files, but simply includes the catalog. ## What is LibriVox? LibriVox is a catalog of free and public domain audiobooks. Learn more... Last updated: Sep 25, 2023
[ "# LibriVox Catalog Archive\n\nNote: this archive does not include any audio files, but simply includes the catalog.", "## What is LibriVox?\n\nLibriVox is a catalog of free and public domain audiobooks. Learn more...\n\nLast updated: Sep 25, 2023" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# LibriVox Catalog Archive\n\nNote: this archive does not include any audio files, but simply includes the catalog.", "## What is LibriVox?\n\nLibriVox is a catalog of free and public domain audiobooks. Learn more...\n\nLast updated: Sep 25, 2023" ]
[ 14, 24, 31 ]
[ "passage: TAGS\n#license-cc0-1.0 #region-us \n# LibriVox Catalog Archive\n\nNote: this archive does not include any audio files, but simply includes the catalog.## What is LibriVox?\n\nLibriVox is a catalog of free and public domain audiobooks. Learn more...\n\nLast updated: Sep 25, 2023" ]
ca498c35e7061fb23bc6bb4a67dfec99aab5bb49
# Dataset of minato_yukina/湊友希那/미나토유키나 (BanG Dream!) This is the dataset of minato_yukina/湊友希那/미나토유키나 (BanG Dream!), containing 500 images and their tags. The core tags of this character are `long_hair, bangs, yellow_eyes, grey_hair, hair_ornament`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 773.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minato_yukina_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 427.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minato_yukina_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1220 | 909.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minato_yukina_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 679.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minato_yukina_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1220 | 1.30 GiB | [Download](https://huggingface.co/datasets/CyberHarem/minato_yukina_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/minato_yukina_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, cat_ears, looking_at_viewer, solo, black_thighhighs, blush, cat_tail, tail_bow, black_gloves, black_skirt, fake_animal_ears, fishnets, frilled_skirt, hairband, open_mouth, simple_background, sitting, bare_shoulders, breasts, neck_ribbon, purple_bow, purple_ribbon, shirt | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, collarbone, looking_at_viewer, solo, nipples, navel, open_mouth, sidelocks, simple_background, small_breasts, white_background, completely_nude, stomach, pussy, sweat, groin, wet | | 2 | 15 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, long_sleeves, solo, butterfly_hair_ornament, looking_at_viewer, white_shirt, black_skirt, white_bow, blush, smile, white_background, bowtie, center_frills, simple_background, closed_mouth, hair_between_eyes, purple_hair, upper_body | | 3 | 26 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, hair_flower, solo, detached_sleeves, blue_rose, dress, looking_at_viewer, black_rose, feather_hair_ornament, earrings, open_mouth, upper_body, black_feathers, lace_choker, lace_trim, long_sleeves, neck_ribbon, black_choker, black_sleeves, brooch, holding_microphone, sleeveless, black_bow, bowtie, brown_eyes | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | black_footwear, black_gloves, frills, brooch, knee_boots, mini_crown, pantyhose, electric_guitar, green_dress, high_heel_boots, holding_microphone, looking_at_viewer, open_mouth, puffy_short_sleeves, smile, 1girl, blue_rose, earrings, holding_instrument, lace-up_boots, multiple_girls, purple_rose, solo_focus | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, earrings, hair_flower, looking_at_viewer, solo, dress, white_gloves, white_rose, hand_on_own_chest, open_mouth, smile, necklace, upper_body | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | blue_rose, hair_flower, hairband, necklace, purple_rose, 1girl, corset, cross-laced_clothes, feather_hair_ornament, 
looking_at_viewer, solo, upper_body, black_choker, black_feathers, blue_feathers, blue_jacket, cleavage, earrings, long_sleeves, mini_crown, open_mouth, ribbon, :o, blush, collarbone, cropped_jacket, hair_between_eyes, hand_on_own_chest, white_background, white_hair | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, black_choker, black_dress, collarbone, hair_flower, looking_at_viewer, smile, solo, black_rose, blush, frilled_dress, pearl_necklace, tiara, black_gloves, breasts, bridal_veil, purple_rose, closed_mouth, holding_bouquet, petals, white_flower | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, looking_at_viewer, school_uniform, solo, white_shirt, blazer, collared_shirt, blush, long_sleeves, white_background, closed_mouth, grey_jacket, simple_background, hair_between_eyes, striped_necktie, upper_body, brown_eyes | | 9 | 11 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, looking_at_viewer, off_shoulder, solo, bare_shoulders, white_background, collarbone, earrings, nail_polish, necklace, simple_background, upper_body, yellow_nails, black_choker, black_headwear, blush, breasts, butterfly, closed_mouth, jacket, long_sleeves, beret, crop_top, hair_between_eyes, parted_lips | | 10 | 5 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, air_bubble, looking_at_viewer, navel, solo, underwater, midriff, earrings, frills, open_mouth, pearl_(gemstone), skirt, blush, bracelet, hair_between_eyes, hand_on_own_chest, head_fins, necklace, outstretched_arm, scrunchie, small_breasts, white_background, wrist_cuffs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cat_ears | looking_at_viewer | solo | black_thighhighs | blush | cat_tail | tail_bow | black_gloves | black_skirt | fake_animal_ears | fishnets | frilled_skirt | hairband | open_mouth | simple_background | sitting | bare_shoulders | breasts | neck_ribbon | purple_bow | purple_ribbon | shirt | collarbone | nipples | navel | sidelocks | small_breasts | white_background | completely_nude | stomach | pussy | sweat | groin | wet | long_sleeves | butterfly_hair_ornament | white_shirt | white_bow | smile | bowtie | center_frills | closed_mouth | hair_between_eyes | purple_hair | upper_body | hair_flower | detached_sleeves | blue_rose | dress | black_rose | feather_hair_ornament | earrings | black_feathers | lace_choker | lace_trim | black_choker | black_sleeves | brooch | holding_microphone | sleeveless | black_bow | brown_eyes | black_footwear | frills | knee_boots | mini_crown | pantyhose | electric_guitar | green_dress | high_heel_boots | puffy_short_sleeves | holding_instrument | lace-up_boots | multiple_girls | purple_rose | solo_focus | white_gloves | white_rose | hand_on_own_chest | necklace | corset | cross-laced_clothes | blue_feathers | blue_jacket | cleavage | ribbon | :o | cropped_jacket | white_hair | black_dress | frilled_dress | pearl_necklace | tiara | bridal_veil | holding_bouquet | petals | white_flower | school_uniform | blazer | collared_shirt | grey_jacket | 
striped_necktie | off_shoulder | nail_polish | yellow_nails | black_headwear | butterfly | jacket | beret | crop_top | parted_lips | air_bubble | underwater | midriff | pearl_(gemstone) | skirt | bracelet | head_fins | outstretched_arm | scrunchie | wrist_cuffs | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------|:--------------------|:-------|:-------------------|:--------|:-----------|:-----------|:---------------|:--------------|:-------------------|:-----------|:----------------|:-----------|:-------------|:--------------------|:----------|:-----------------|:----------|:--------------|:-------------|:----------------|:--------|:-------------|:----------|:--------|:------------|:----------------|:-------------------|:------------------|:----------|:--------|:--------|:--------|:------|:---------------|:--------------------------|:--------------|:------------|:--------|:---------|:----------------|:---------------|:--------------------|:--------------|:-------------|:--------------|:-------------------|:------------|:--------|:-------------|:------------------------|:-----------|:-----------------|:--------------|:------------|:---------------|:----------------|:---------|:---------------------|:-------------|:------------|:-------------|:-----------------|:---------|:-------------|:-------------|:------------|:------------------|:--------------|:------------------|:----------------------|:---------------------|:----------------|:-----------------|:--------------|:-------------|:---------------|:-------------|:--------------------|:-----------|:---------|:----------------------|:----------------|:--------------|:-----------|:---------|:-----|:-----------------|:-------------|:--------------|:----------------|:-----------------|:--------|:--------------|:------------------|:---------|:---------------|:-----------------|:---------|:-----------------|:--------------|:------------------|:---------------|:--------------|:---------------|:-----------------|:------------|:---------|:--------|:-----------|:--------------|:-------------|:-------------|:----------|:-------------------|:--------|:-----------|:------------|:-------------------|:------------|:--------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | X | | | | | | | | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 15 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | | X | | | | X | | | | | | X | | | | | | | | | | | | | 
X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 26 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | X | | | | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | X | | X | | | | | | | | X | X | | | | | | | | | X | | | | | X | | | | | | | X | | | | | | | | X | | X | X | | X | | | X | X | X | | | X | | | | | | | | | | X | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | X | | X | | | X | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | X | | | | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | X | | X | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | X | | X | | | | | X | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 9 | 11 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | X | X | | X | | | | | | | | | | X | | X | X | | | | | X | | | | | X | | | | | | | X | | | | | | | X | X | | X | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 10 | 5 | ![](samples/10/clu10-sample0.png) | 
![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | | X | X | | X | | | | | | | | | X | | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/minato_yukina_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T21:44:34+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:25:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of minato\_yukina/湊友希那/미나토유키나 (BanG Dream!) =================================================== This is the dataset of minato\_yukina/湊友希那/미나토유키나 (BanG Dream!), containing 500 images and their tags. The core tags of this character are 'long\_hair, bangs, yellow\_eyes, grey\_hair, hair\_ornament', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
0473cc504d68b562c3b6176b6500fc9e545c76f8
26,425,839 samples. Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extraction), from the original "Entire GDB-11" dataset found at https://gdb.unibe.ch/downloads/.
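For reference, RDKit canonicalization of a SMILES string (the normalization step this card describes) looks roughly like the sketch below.

```python
from rdkit import Chem

def canonical(smiles):
    """Return the RDKit canonical SMILES, or None if parsing fails."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

# Two equivalent notations collapse to one canonical string, which is
# what makes deduplication by simple string equality possible.
assert canonical("OCC") == canonical("CCO")
```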
Pixelatory/GDB-11
[ "size_categories:10M<n<100M", "chemistry", "biology", "region:us" ]
2023-09-25T21:50:13+00:00
{"size_categories": ["10M<n<100M"], "tags": ["chemistry", "biology"]}
2023-09-25T22:10:50+00:00
[]
[]
TAGS #size_categories-10M<n<100M #chemistry #biology #region-us
26,425,839 samples. Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extraction), from the original "Entire GDB-11" dataset found at URL
[]
[ "TAGS\n#size_categories-10M<n<100M #chemistry #biology #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #chemistry #biology #region-us \n" ]
b283a336b88778919f9fbb0c6ba7fac65809eb10
# Dataset Card for "llama2d-mind2web" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
llama2d/llama2d-mind2web
[ "region:us" ]
2023-09-25T21:51:49+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "float32"}, {"name": "coords", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "float32"}, {"name": "attention_mask", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 106211392, "num_examples": 2212}], "download_size": 12910313, "dataset_size": 106211392}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-08T05:44:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama2d-mind2web" More Information needed
[ "# Dataset Card for \"llama2d-mind2web\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama2d-mind2web\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama2d-mind2web\"\n\nMore Information needed" ]
c0f737db6d8cd332acd52477190913d5bb4891f1
975,820,226 samples. Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extraction), from the original "Entire GDB-13" dataset found at https://gdb.unibe.ch/downloads/.
Pixelatory/GDB-13
[ "size_categories:100M<n<1B", "chemistry", "biology", "region:us" ]
2023-09-25T21:54:00+00:00
{"size_categories": ["100M<n<1B"], "tags": ["chemistry", "biology"]}
2023-09-25T22:10:33+00:00
[]
[]
TAGS #size_categories-100M<n<1B #chemistry #biology #region-us
975,820,226 samples. Contains only the unique, RDKit-canonicalized SMILES molecules in CSV format (after extraction), from the original "Entire GDB-13" dataset found at URL
[]
[ "TAGS\n#size_categories-100M<n<1B #chemistry #biology #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-100M<n<1B #chemistry #biology #region-us \n" ]
597703fde5b1223e94545084e602b5acab9d38b6
# Dataset Card for "Hi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Manduzamzam/Hi
[ "region:us" ]
2023-09-25T21:58:14+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5886291.0, "num_examples": 112}, {"name": "validation", "num_bytes": 63094.0, "num_examples": 10}], "download_size": 0, "dataset_size": 5949385.0}}
2023-09-26T04:42:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Hi" More Information needed
[ "# Dataset Card for \"Hi\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Hi\"\n\nMore Information needed" ]
[ 6, 11 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Hi\"\n\nMore Information needed" ]
611c3fea59b139b8a66daf832fe14952d3fb611c
# Dataset of mia_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai) This is the dataset of mia_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 419 images and their tags. The core tags of this character are `short_hair, blonde_hair, hair_over_one_eye, purple_eyes, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 419 | 587.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 419 | 293.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 973 | 631.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 419 | 500.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 973 | 974.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, jacket, long_sleeves, looking_at_viewer, solo, white_background, nijigasaki_academy_school_uniform, plaid_skirt, simple_background, smile, white_shirt, collared_shirt, green_ribbon, breasts, open_mouth, sitting, white_skirt | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, jacket, nijigasaki_academy_school_uniform, solo, looking_at_viewer, smile, breasts, skirt, upper_body | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_gloves, hat, solo, looking_at_viewer, smile, black_headwear, open_mouth, black_dress, cape, earrings | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_gloves, necklace, solo, black_dress, earrings, collarbone, hair_ornament, hat, looking_at_viewer, smile, feathers, holding_microphone, upper_body, black_headwear, blue_eyes, elbow_gloves, microphone_stand | | 4 | 19 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, looking_at_viewer, hood_up, jacket, long_sleeves, necklace, white_shirt, blush, star_earrings, black_hoodie, simple_background, animal_hood, holding, smile, cat_tail, closed_mouth, pleated_skirt, upper_body, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | jacket | long_sleeves | looking_at_viewer | solo | white_background | nijigasaki_academy_school_uniform | plaid_skirt | simple_background | smile | white_shirt | collared_shirt | green_ribbon | breasts | open_mouth | sitting | white_skirt | skirt | upper_body | black_gloves | hat | black_headwear | black_dress | cape | earrings | necklace | collarbone | hair_ornament | feathers | holding_microphone | blue_eyes | elbow_gloves | microphone_stand | hood_up | blush | star_earrings | black_hoodie | animal_hood | holding | cat_tail | closed_mouth | pleated_skirt | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------------|:--------------------|:-------|:-------------------|:------------------------------------|:--------------|:--------------------|:--------|:--------------|:-----------------|:---------------|:----------|:-------------|:----------|:--------------|:--------|:-------------|:---------------|:------|:-----------------|:--------------|:-------|:-----------|:-----------|:-------------|:----------------|:-----------|:---------------------|:------------|:---------------|:-------------------|:----------|:--------|:----------------|:---------------|:--------------|:----------|:-----------|:---------------|:----------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | X | | | X | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | X | | | | | X | | | | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | | | | | X | | | | | | | | | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 4 | 19 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | X | | | X | X | X | | | | | | | | X | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X |
CyberHarem/mia_taylor_lovelivenijigasakihighschoolidolclub
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-25T21:58:54+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-17T02:35:00+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mia\_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai) =========================================================================== This is the dataset of mia\_taylor (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 419 images and their tags. The core tags of this character are 'short\_hair, blonde\_hair, hair\_over\_one\_eye, purple\_eyes, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e825e141c881fbbef5a64aadf227233ca6403480
Credit to https://huggingface.co/datasets/potsawee/wiki_bio_gpt3_hallucination
clam004/antihallucination_dataset
[ "license:apache-2.0", "region:us" ]
2023-09-25T22:00:51+00:00
{"license": "apache-2.0"}
2023-09-25T22:42:44+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Credit to URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
eb4d67a333bc24452c758e003cc095464103fc97
# Dataset Card for "microstructure_porosity_images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zche318/microstructure_porosity_images
[ "region:us" ]
2023-09-25T22:05:49+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4233663.84, "num_examples": 4740}], "download_size": 5044541, "dataset_size": 4233663.84}}
2023-09-25T22:06:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "microstructure_porosity_images" More Information needed
[ "# Dataset Card for \"microstructure_porosity_images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"microstructure_porosity_images\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"microstructure_porosity_images\"\n\nMore Information needed" ]
18a8965c65136014e781f77d688229afc78645be
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> [EverythingLM V3 Data](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) converted to ShareGPT format.
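For orientation, ShareGPT-formatted records are conventionally shaped as in the sketch below; the field names follow the common ShareGPT convention and are assumed, not verified against the files in this dataset:

```python
# Illustrative ShareGPT-style record (assumed field names, invented content).
example = {
    "conversations": [
        {"from": "human", "value": "Summarize the water cycle in one sentence."},
        {"from": "gpt", "value": "Water evaporates, condenses into clouds, and returns as precipitation."},
    ]
}
```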
acrastt/EverythingLM-V3-ShareGPT
[ "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
2023-09-25T22:08:51+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "pretty_name": "Everything-V3-"}
2023-10-24T21:18:47+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-mit #region-us
<a href="URL target="_blank"><img src="URL alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> EverythingLM V3 Data converted to ShareGPT format.
[]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-mit #region-us \n" ]
[ 27 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-mit #region-us \n" ]
8486f4120a2d35c32dd35c8acec72de1a6efe623
49,999,990 samples. Contains only the unique, RDKit canonicalized SMILES molecules in a CSV format (after extracting), from the original "GDB-17-Set" dataset found at https://gdb.unibe.ch/downloads/.
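As a point of reference, the RDKit canonicalization mentioned above can be sketched as follows (assuming RDKit is installed; the input SMILES are arbitrary examples, not taken from the dataset):

```python
from rdkit import Chem

# Two different valid SMILES spellings of the same molecule (toluene);
# RDKit maps both to a single canonical string.
for smiles in ["Cc1ccccc1", "c1ccccc1C"]:
    mol = Chem.MolFromSmiles(smiles)  # parse into an RDKit molecule
    print(Chem.MolToSmiles(mol))      # canonical SMILES: "Cc1ccccc1" both times
```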
Pixelatory/GDB-17
[ "size_categories:10M<n<100M", "chemistry", "biology", "region:us" ]
2023-09-25T22:11:34+00:00
{"size_categories": ["10M<n<100M"], "tags": ["chemistry", "biology"]}
2023-09-25T22:18:47+00:00
[]
[]
TAGS #size_categories-10M<n<100M #chemistry #biology #region-us
49,999,990 samples. Contains only the unique, RDKit canonicalized SMILES molecules in a CSV format (after extracting), from the original "GDB-17-Set" dataset found at URL
[]
[ "TAGS\n#size_categories-10M<n<100M #chemistry #biology #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #chemistry #biology #region-us \n" ]
a427f59bbb7993fb4b917bf66445c1af410ad3ad
114,218,565 samples. Contains only the unique, RDKit canonicalized SMILES molecules in a CSV format (after extracting), from the PubChem dataset found at https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/. PubChem compounds collected on 30 April 2023.
Pixelatory/PubChem-04-30-2023
[ "size_categories:100M<n<1B", "chemistry", "biology", "region:us" ]
2023-09-25T22:22:49+00:00
{"size_categories": ["100M<n<1B"], "tags": ["chemistry", "biology"]}
2023-09-25T22:28:43+00:00
[]
[]
TAGS #size_categories-100M<n<1B #chemistry #biology #region-us
114,218,565 samples. Contains only the unique, RDKit canonicalized SMILES molecules in a CSV format (after extracting), from the PubChem dataset found at URL PubChem compounds collected on 30 April 2023.
[]
[ "TAGS\n#size_categories-100M<n<1B #chemistry #biology #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-100M<n<1B #chemistry #biology #region-us \n" ]
d75584061703e04f932f3b589cd0a6e470946c44
# Dataset Card for "microstructure_porosity_feature_images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zche318/microstructure_porosity_feature_images
[ "region:us" ]
2023-09-25T22:55:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4206093.84, "num_examples": 4740}], "download_size": 5043097, "dataset_size": 4206093.84}}
2023-09-25T22:55:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "microstructure_porosity_feature_images" More Information needed
[ "# Dataset Card for \"microstructure_porosity_feature_images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"microstructure_porosity_feature_images\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"microstructure_porosity_feature_images\"\n\nMore Information needed" ]
3eecfb915d435cb2d91eb3989c901bdc574dc6cc
# Dataset Card for "amazon_polarity_embeddings_random3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atmallen/amazon_polarity_embeddings_random3
[ "region:us" ]
2023-09-25T23:10:35+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3898006107, "dataset_size": 7168305144}}
2023-09-26T16:30:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_polarity_embeddings_random3" More Information needed
[ "# Dataset Card for \"amazon_polarity_embeddings_random3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_polarity_embeddings_random3\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings_random3\"\n\nMore Information needed" ]
451ab582b80e5eda9251b3a099c5f7b80bad839e
# Dataset Card for "amazon_polarity_embeddings_random4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atmallen/amazon_polarity_embeddings_random4
[ "region:us" ]
2023-09-25T23:36:20+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3912035793, "dataset_size": 7168305144}}
2023-09-26T17:59:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazon_polarity_embeddings_random4" More Information needed
[ "# Dataset Card for \"amazon_polarity_embeddings_random4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazon_polarity_embeddings_random4\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings_random4\"\n\nMore Information needed" ]
508a479435e6ea89fb7e7d4c5b3db0e2d1c1ae00
# One million commits A large variety of git commits pulled from across GitHub. Created by **William Entriken**, released **2023-09-26**, version **1**. This composition is licensed under the **MIT license**. ## Intended use This dataset could be used to train a model concerned with programming tasks: 1. Summarize some programming work 1. Perform work given a description of the work to do 1. Learn-by-example the syntax for all active programming languages and structured data formats This dataset could be used to train a model concerned with scoring and trend analysis: 1. Identify trends in programming across years 1. Compare popularity and impact of programming topics/languages (using stars, quality as a signal) 1. Score programming work as good or bad (using authors' ratings as a signal) 1. Identify popular project file layouts for each programming language Because this dataset comprises about a half terabyte of text, it may be useful for training a general text processing model. Or at least a niche of such a model. ## Factors An attempt was made to provide a large variety of repositories. To examine the potential biases, here is a comparison of our dataset to sampling at random from the list of all commits across GitHub. 1. This dataset should overrepresent repositories with 7+ character words. 1. This dataset should overrepresent repositories with few commits (because only one commit is included per repository). 1. This dataset should overrepresent repositories from authors with many repositories. ## Metrics * Commits: 1,000,000 * Distinct repositories: 1,000,000 * Distinct authors: xxx * Bytes: ## Data format The files are distributed as SQLite databases with the schema: ```sql CREATE TABLE commits (repo_full_name TEXT UNIQUE, repo JSON, `commit` JSON, commit_patch TEXT, author JSON); ``` Each repository is included at most once, which is why `repo_full_name` is a unique key. The `commit_patch` is the full `.patch` file for the commit. 
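As a minimal querying sketch (assuming one of the distributed database files has been downloaded as `commits.sqlite`; the filename is illustrative), the table can be read with Python's built-in `sqlite3` module:

```python
import json
import sqlite3

con = sqlite3.connect("commits.sqlite")  # illustrative path to one database file

# `commit` is an SQL keyword, so the column name must be quoted;
# SQLite accepts double quotes (and the backticks used in the schema).
row = con.execute(
    'SELECT repo_full_name, repo, "commit", commit_patch FROM commits LIMIT 1'
).fetchone()

repo_full_name, repo_json, commit_json, patch = row
repo = json.loads(repo_json)      # repository metadata, GitHub API shape
commit = json.loads(commit_json)  # commit metadata, GitHub API shape
print(repo_full_name, repo.get("stargazers_count"), commit["commit"]["message"])
```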
Other columns `repo`, `commit` and `author` are JSON blobs following the GitHub API schema: <details> <summary>repository</summary> ```json { "title": "Repo Search Result Item", "description": "Repo Search Result Item", "type": "object", "properties": { "id": { "type": "integer" }, "node_id": { "type": "string" }, "name": { "type": "string" }, "full_name": { "type": "string" }, "owner": { "anyOf": [ { "type": "null" }, { "title": "Simple User", "description": "A GitHub user.", "type": "object", "properties": { "name": { "type": [ "string", "null" ] }, "email": { "type": [ "string", "null" ] }, "login": { "type": "string", "examples": [ "octocat" ] }, "id": { "type": "integer", "examples": [ 1 ] }, "node_id": { "type": "string", "examples": [ "MDQ6VXNlcjE=" ] }, "avatar_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/images/error/octocat_happy.gif" ] }, "gravatar_id": { "type": [ "string", "null" ], "examples": [ "41d064eb2195891e12d0413f63227ea7" ] }, "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat" ] }, "followers_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/followers" ] }, "following_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/following{/other_user}" ] }, "gists_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/gists{/gist_id}" ] }, "starred_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/starred{/owner}{/repo}" ] }, "subscriptions_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/subscriptions" ] }, "organizations_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/orgs" ] }, "repos_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/repos" ] }, "events_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/events{/privacy}" ] }, "received_events_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/received_events" ] }, "type": { "type": "string", "examples": [ "User" ] }, "site_admin": { "type": "boolean" }, "starred_at": { "type": "string", "examples": [ "\"2020-07-09T00:17:55Z\"" ] } }, "required": [ "avatar_url", "events_url", "followers_url", "following_url", "gists_url", "gravatar_id", "html_url", "id", "node_id", "login", "organizations_url", "received_events_url", "repos_url", "site_admin", "starred_url", "subscriptions_url", "type", "url" ] } ] }, "private": { "type": "boolean" }, "html_url": { "type": "string", "format": "uri" }, "description": { "type": [ "string", "null" ] }, "fork": { "type": "boolean" }, "url": { "type": "string", "format": "uri" }, "created_at": { "type": "string", "format": "date-time" }, "updated_at": { "type": "string", "format": "date-time" }, "pushed_at": { "type": "string", "format": "date-time" }, "homepage": { "type": [ "string", "null" ], "format": "uri" }, "size": { "type": "integer" }, "stargazers_count": { "type": "integer" }, "watchers_count": { "type": "integer" }, "language": { "type": [ "string", "null" ] }, "forks_count": { "type": "integer" }, "open_issues_count": { "type": "integer" }, "master_branch": { "type": "string" }, "default_branch": { "type": "string" }, "score": { "type": "number" }, "forks_url": { "type": "string", 
"format": "uri" }, "keys_url": { "type": "string" }, "collaborators_url": { "type": "string" }, "teams_url": { "type": "string", "format": "uri" }, "hooks_url": { "type": "string", "format": "uri" }, "issue_events_url": { "type": "string" }, "events_url": { "type": "string", "format": "uri" }, "assignees_url": { "type": "string" }, "branches_url": { "type": "string" }, "tags_url": { "type": "string", "format": "uri" }, "blobs_url": { "type": "string" }, "git_tags_url": { "type": "string" }, "git_refs_url": { "type": "string" }, "trees_url": { "type": "string" }, "statuses_url": { "type": "string" }, "languages_url": { "type": "string", "format": "uri" }, "stargazers_url": { "type": "string", "format": "uri" }, "contributors_url": { "type": "string", "format": "uri" }, "subscribers_url": { "type": "string", "format": "uri" }, "subscription_url": { "type": "string", "format": "uri" }, "commits_url": { "type": "string" }, "git_commits_url": { "type": "string" }, "comments_url": { "type": "string" }, "issue_comment_url": { "type": "string" }, "contents_url": { "type": "string" }, "compare_url": { "type": "string" }, "merges_url": { "type": "string", "format": "uri" }, "archive_url": { "type": "string" }, "downloads_url": { "type": "string", "format": "uri" }, "issues_url": { "type": "string" }, "pulls_url": { "type": "string" }, "milestones_url": { "type": "string" }, "notifications_url": { "type": "string" }, "labels_url": { "type": "string" }, "releases_url": { "type": "string" }, "deployments_url": { "type": "string", "format": "uri" }, "git_url": { "type": "string" }, "ssh_url": { "type": "string" }, "clone_url": { "type": "string" }, "svn_url": { "type": "string", "format": "uri" }, "forks": { "type": "integer" }, "open_issues": { "type": "integer" }, "watchers": { "type": "integer" }, "topics": { "type": "array", "items": { "type": "string" } }, "mirror_url": { "type": [ "string", "null" ], "format": "uri" }, "has_issues": { "type": "boolean" }, "has_projects": { "type": "boolean" }, "has_pages": { "type": "boolean" }, "has_wiki": { "type": "boolean" }, "has_downloads": { "type": "boolean" }, "has_discussions": { "type": "boolean" }, "archived": { "type": "boolean" }, "disabled": { "type": "boolean", "description": "Returns whether or not this repository disabled." 
}, "visibility": { "description": "The repository visibility: public, private, or internal.", "type": "string" }, "license": { "anyOf": [ { "type": "null" }, { "title": "License Simple", "description": "License Simple", "type": "object", "properties": { "key": { "type": "string", "examples": [ "mit" ] }, "name": { "type": "string", "examples": [ "MIT License" ] }, "url": { "type": [ "string", "null" ], "format": "uri", "examples": [ "https://api.github.com/licenses/mit" ] }, "spdx_id": { "type": [ "string", "null" ], "examples": [ "MIT" ] }, "node_id": { "type": "string", "examples": [ "MDc6TGljZW5zZW1pdA==" ] }, "html_url": { "type": "string", "format": "uri" } }, "required": [ "key", "name", "url", "spdx_id", "node_id" ] } ] }, "permissions": { "type": "object", "properties": { "admin": { "type": "boolean" }, "maintain": { "type": "boolean" }, "push": { "type": "boolean" }, "triage": { "type": "boolean" }, "pull": { "type": "boolean" } }, "required": [ "admin", "pull", "push" ] }, "text_matches": { "title": "Search Result Text Matches", "type": "array", "items": { "type": "object", "properties": { "object_url": { "type": "string" }, "object_type": { "type": [ "string", "null" ] }, "property": { "type": "string" }, "fragment": { "type": "string" }, "matches": { "type": "array", "items": { "type": "object", "properties": { "text": { "type": "string" }, "indices": { "type": "array", "items": { "type": "integer" } } } } } } } }, "temp_clone_token": { "type": "string" }, "allow_merge_commit": { "type": "boolean" }, "allow_squash_merge": { "type": "boolean" }, "allow_rebase_merge": { "type": "boolean" }, "allow_auto_merge": { "type": "boolean" }, "delete_branch_on_merge": { "type": "boolean" }, "allow_forking": { "type": "boolean" }, "is_template": { "type": "boolean" }, "web_commit_signoff_required": { "type": "boolean", "examples": [ false ] } }, "required": [ "archive_url", "assignees_url", "blobs_url", "branches_url", "collaborators_url", "comments_url", "commits_url", "compare_url", "contents_url", "contributors_url", "deployments_url", "description", "downloads_url", "events_url", "fork", "forks_url", "full_name", "git_commits_url", "git_refs_url", "git_tags_url", "hooks_url", "html_url", "id", "node_id", "issue_comment_url", "issue_events_url", "issues_url", "keys_url", "labels_url", "languages_url", "merges_url", "milestones_url", "name", "notifications_url", "owner", "private", "pulls_url", "releases_url", "stargazers_url", "statuses_url", "subscribers_url", "subscription_url", "tags_url", "teams_url", "trees_url", "url", "clone_url", "default_branch", "forks", "forks_count", "git_url", "has_downloads", "has_issues", "has_projects", "has_wiki", "has_pages", "homepage", "language", "archived", "disabled", "mirror_url", "open_issues", "open_issues_count", "license", "pushed_at", "size", "ssh_url", "stargazers_count", "svn_url", "watchers", "watchers_count", "created_at", "updated_at", "score" ] } ``` </details> <details> <summary>commit</summary> ```json { "title": "Commit", "description": "Commit", "type": "object", "properties": { "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e" ] }, "sha": { "type": "string", "examples": [ "6dcb09b5b57875f334f61aebed695e2e4193db5e" ] }, "node_id": { "type": "string", "examples": [ "MDY6Q29tbWl0NmRjYjA5YjViNTc4NzVmMzM0ZjYxYWViZWQ2OTVlMmU0MTkzZGI1ZQ==" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ 
"https://github.com/octocat/Hello-World/commit/6dcb09b5b57875f334f61aebed695e2e4193db5e" ] }, "comments_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e/comments" ] }, "commit": { "type": "object", "properties": { "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e" ] }, "author": { "anyOf": [ { "type": "null" }, { "title": "Git User", "description": "Metaproperties for Git author/committer information.", "type": "object", "properties": { "name": { "type": "string", "examples": [ "\"Chris Wanstrath\"" ] }, "email": { "type": "string", "examples": [ "\"[email protected]\"" ] }, "date": { "type": "string", "examples": [ "\"2007-10-29T02:42:39.000-07:00\"" ] } } } ] }, "committer": { "anyOf": [ { "type": "null" }, { "title": "Git User", "description": "Metaproperties for Git author/committer information.", "type": "object", "properties": { "name": { "type": "string", "examples": [ "\"Chris Wanstrath\"" ] }, "email": { "type": "string", "examples": [ "\"[email protected]\"" ] }, "date": { "type": "string", "examples": [ "\"2007-10-29T02:42:39.000-07:00\"" ] } } } ] }, "message": { "type": "string", "examples": [ "Fix all the bugs" ] }, "comment_count": { "type": "integer", "examples": [ 0 ] }, "tree": { "type": "object", "properties": { "sha": { "type": "string", "examples": [ "827efc6d56897b048c772eb4087f854f46256132" ] }, "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/repos/octocat/Hello-World/tree/827efc6d56897b048c772eb4087f854f46256132" ] } }, "required": [ "sha", "url" ] }, "verification": { "title": "Verification", "type": "object", "properties": { "verified": { "type": "boolean" }, "reason": { "type": "string" }, "payload": { "type": [ "string", "null" ] }, "signature": { "type": [ "string", "null" ] } }, "required": [ "verified", "reason", "payload", "signature" ] } }, "required": [ "author", "committer", "comment_count", "message", "tree", "url" ] }, "author": { "anyOf": [ { "type": "null" }, { "title": "Simple User", "description": "A GitHub user.", "type": "object", "properties": { "name": { "type": [ "string", "null" ] }, "email": { "type": [ "string", "null" ] }, "login": { "type": "string", "examples": [ "octocat" ] }, "id": { "type": "integer", "examples": [ 1 ] }, "node_id": { "type": "string", "examples": [ "MDQ6VXNlcjE=" ] }, "avatar_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/images/error/octocat_happy.gif" ] }, "gravatar_id": { "type": [ "string", "null" ], "examples": [ "41d064eb2195891e12d0413f63227ea7" ] }, "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat" ] }, "followers_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/followers" ] }, "following_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/following{/other_user}" ] }, "gists_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/gists{/gist_id}" ] }, "starred_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/starred{/owner}{/repo}" ] }, "subscriptions_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/subscriptions" ] }, 
"organizations_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/orgs" ] }, "repos_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/repos" ] }, "events_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/events{/privacy}" ] }, "received_events_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/received_events" ] }, "type": { "type": "string", "examples": [ "User" ] }, "site_admin": { "type": "boolean" }, "starred_at": { "type": "string", "examples": [ "\"2020-07-09T00:17:55Z\"" ] } }, "required": [ "avatar_url", "events_url", "followers_url", "following_url", "gists_url", "gravatar_id", "html_url", "id", "node_id", "login", "organizations_url", "received_events_url", "repos_url", "site_admin", "starred_url", "subscriptions_url", "type", "url" ] } ] }, "committer": { "anyOf": [ { "type": "null" }, { "title": "Simple User", "description": "A GitHub user.", "type": "object", "properties": { "name": { "type": [ "string", "null" ] }, "email": { "type": [ "string", "null" ] }, "login": { "type": "string", "examples": [ "octocat" ] }, "id": { "type": "integer", "examples": [ 1 ] }, "node_id": { "type": "string", "examples": [ "MDQ6VXNlcjE=" ] }, "avatar_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/images/error/octocat_happy.gif" ] }, "gravatar_id": { "type": [ "string", "null" ], "examples": [ "41d064eb2195891e12d0413f63227ea7" ] }, "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat" ] }, "followers_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/followers" ] }, "following_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/following{/other_user}" ] }, "gists_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/gists{/gist_id}" ] }, "starred_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/starred{/owner}{/repo}" ] }, "subscriptions_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/subscriptions" ] }, "organizations_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/orgs" ] }, "repos_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/repos" ] }, "events_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/events{/privacy}" ] }, "received_events_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/received_events" ] }, "type": { "type": "string", "examples": [ "User" ] }, "site_admin": { "type": "boolean" }, "starred_at": { "type": "string", "examples": [ "\"2020-07-09T00:17:55Z\"" ] } }, "required": [ "avatar_url", "events_url", "followers_url", "following_url", "gists_url", "gravatar_id", "html_url", "id", "node_id", "login", "organizations_url", "received_events_url", "repos_url", "site_admin", "starred_url", "subscriptions_url", "type", "url" ] } ] }, "parents": { "type": "array", "items": { "type": "object", "properties": { "sha": { "type": "string", "examples": [ "7638417db6d59f3c431d3e1f261cc637155684cd" ] }, "url": { "type": "string", "format": "uri", "examples": [ 
"https://api.github.com/repos/octocat/Hello-World/commits/7638417db6d59f3c431d3e1f261cc637155684cd" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat/Hello-World/commit/7638417db6d59f3c431d3e1f261cc637155684cd" ] } }, "required": [ "sha", "url" ] } }, "stats": { "type": "object", "properties": { "additions": { "type": "integer" }, "deletions": { "type": "integer" }, "total": { "type": "integer" } } }, "files": { "type": "array", "items": { "title": "Diff Entry", "description": "Diff Entry", "type": "object", "properties": { "sha": { "type": "string", "examples": [ "bbcd538c8e72b8c175046e27cc8f907076331401" ] }, "filename": { "type": "string", "examples": [ "file1.txt" ] }, "status": { "type": "string", "enum": [ "added", "removed", "modified", "renamed", "copied", "changed", "unchanged" ], "examples": [ "added" ] }, "additions": { "type": "integer", "examples": [ 103 ] }, "deletions": { "type": "integer", "examples": [ 21 ] }, "changes": { "type": "integer", "examples": [ 124 ] }, "blob_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat/Hello-World/blob/6dcb09b5b57875f334f61aebed695e2e4193db5e/file1.txt" ] }, "raw_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat/Hello-World/raw/6dcb09b5b57875f334f61aebed695e2e4193db5e/file1.txt" ] }, "contents_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/repos/octocat/Hello-World/contents/file1.txt?ref=6dcb09b5b57875f334f61aebed695e2e4193db5e" ] }, "patch": { "type": "string", "examples": [ "@@ -132,7 +132,7 @@ module Test @@ -1000,7 +1000,7 @@ module Test" ] }, "previous_filename": { "type": "string", "examples": [ "file.txt" ] } }, "required": [ "additions", "blob_url", "changes", "contents_url", "deletions", "filename", "raw_url", "sha", "status" ] } } }, "required": [ "url", "sha", "node_id", "html_url", "comments_url", "commit", "author", "committer", "parents" ] } ``` </details> <details> <summary>user</summary> ```json { "oneOf": [ { "title": "Private User", "description": "Private User", "type": "object", "properties": { "login": { "type": "string", "examples": [ "octocat" ] }, "id": { "type": "integer", "examples": [ 1 ] }, "node_id": { "type": "string", "examples": [ "MDQ6VXNlcjE=" ] }, "avatar_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/images/error/octocat_happy.gif" ] }, "gravatar_id": { "type": [ "string", "null" ], "examples": [ "41d064eb2195891e12d0413f63227ea7" ] }, "url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat" ] }, "html_url": { "type": "string", "format": "uri", "examples": [ "https://github.com/octocat" ] }, "followers_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/followers" ] }, "following_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/following{/other_user}" ] }, "gists_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/gists{/gist_id}" ] }, "starred_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/starred{/owner}{/repo}" ] }, "subscriptions_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/subscriptions" ] }, "organizations_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/orgs" ] }, "repos_url": { "type": "string", "format": "uri", "examples": [ 
"https://api.github.com/users/octocat/repos" ] }, "events_url": { "type": "string", "examples": [ "https://api.github.com/users/octocat/events{/privacy}" ] }, "received_events_url": { "type": "string", "format": "uri", "examples": [ "https://api.github.com/users/octocat/received_events" ] }, "type": { "type": "string", "examples": [ "User" ] }, "site_admin": { "type": "boolean" }, "name": { "type": [ "string", "null" ], "examples": [ "monalisa octocat" ] }, "company": { "type": [ "string", "null" ], "examples": [ "GitHub" ] }, "blog": { "type": [ "string", "null" ], "examples": [ "https://github.com/blog" ] }, "location": { "type": [ "string", "null" ], "examples": [ "San Francisco" ] }, "email": { "type": [ "string", "null" ], "format": "email", "examples": [ "[email protected]" ] }, "hireable": { "type": [ "boolean", "null" ] }, "bio": { "type": [ "string", "null" ], "examples": [ "There once was..." ] }, "twitter_username": { "type": [ "string", "null" ], "examples": [ "monalisa" ] }, "public_repos": { "type": "integer", "examples": [ 2 ] }, "public_gists": { "type": "integer", "examples": [ 1 ] }, "followers": { "type": "integer", "examples": [ 20 ] }, "following": { "type": "integer", "examples": [ 0 ] }, "created_at": { "type": "string", "format": "date-time", "examples": [ "2008-01-14T04:33:35Z" ] }, "updated_at": { "type": "string", "format": "date-time", "examples": [ "2008-01-14T04:33:35Z" ] }, "private_gists": { "type": "integer", "examples": [ 81 ] }, "total_private_repos": { "type": "integer", "examples": [ 100 ] }, "owned_private_repos": { "type": "integer", "examples": [ 100 ] }, "disk_usage": { "type": "integer", "examples": [ 10000 ] }, "collaborators": { "type": "integer", "examples": [ 8 ] }, "two_factor_authentication": { "type": "boolean", "examples": [ true ] }, "plan": { "type": "object", "properties": { "collaborators": { "type": "integer" }, "name": { "type": "string" }, "space": { "type": "integer" }, "private_repos": { "type": "integer" } }, "required": [ "collaborators", "name", "space", "private_repos" ] }, "suspended_at": { "type": [ "string", "null" ], "format": "date-time" }, "business_plus": { "type": "boolean" }, "ldap_dn": { "type": "string" } }, "required": [ "avatar_url", "events_url", "followers_url", "following_url", "gists_url", "gravatar_id", "html_url", "id", "node_id", "login", "organizations_url", "received_events_url", "repos_url", "site_admin", "starred_url", "subscriptions_url", "type", "url", "bio", "blog", "company", "email", "followers", "following", "hireable", "location", "name", "public_gists", "public_repos", "created_at", "updated_at", "collaborators", "disk_usage", "owned_private_repos", "private_gists", "total_private_repos", "two_factor_authentication" ] }, { "title": "Public User", "description": "Public User", "type": "object", "properties": { "login": { "type": "string" }, "id": { "type": "integer" }, "node_id": { "type": "string" }, "avatar_url": { "type": "string", "format": "uri" }, "gravatar_id": { "type": [ "string", "null" ] }, "url": { "type": "string", "format": "uri" }, "html_url": { "type": "string", "format": "uri" }, "followers_url": { "type": "string", "format": "uri" }, "following_url": { "type": "string" }, "gists_url": { "type": "string" }, "starred_url": { "type": "string" }, "subscriptions_url": { "type": "string", "format": "uri" }, "organizations_url": { "type": "string", "format": "uri" }, "repos_url": { "type": "string", "format": "uri" }, "events_url": { "type": "string" }, "received_events_url": { "type": 
"string", "format": "uri" }, "type": { "type": "string" }, "site_admin": { "type": "boolean" }, "name": { "type": [ "string", "null" ] }, "company": { "type": [ "string", "null" ] }, "blog": { "type": [ "string", "null" ] }, "location": { "type": [ "string", "null" ] }, "email": { "type": [ "string", "null" ], "format": "email" }, "hireable": { "type": [ "boolean", "null" ] }, "bio": { "type": [ "string", "null" ] }, "twitter_username": { "type": [ "string", "null" ] }, "public_repos": { "type": "integer" }, "public_gists": { "type": "integer" }, "followers": { "type": "integer" }, "following": { "type": "integer" }, "created_at": { "type": "string", "format": "date-time" }, "updated_at": { "type": "string", "format": "date-time" }, "plan": { "type": "object", "properties": { "collaborators": { "type": "integer" }, "name": { "type": "string" }, "space": { "type": "integer" }, "private_repos": { "type": "integer" } }, "required": [ "collaborators", "name", "space", "private_repos" ] }, "suspended_at": { "type": [ "string", "null" ], "format": "date-time" }, "private_gists": { "type": "integer", "examples": [ 1 ] }, "total_private_repos": { "type": "integer", "examples": [ 2 ] }, "owned_private_repos": { "type": "integer", "examples": [ 2 ] }, "disk_usage": { "type": "integer", "examples": [ 1 ] }, "collaborators": { "type": "integer", "examples": [ 3 ] } }, "required": [ "avatar_url", "events_url", "followers_url", "following_url", "gists_url", "gravatar_id", "html_url", "id", "node_id", "login", "organizations_url", "received_events_url", "repos_url", "site_admin", "starred_url", "subscriptions_url", "type", "url", "bio", "blog", "company", "email", "followers", "following", "hireable", "location", "name", "public_gists", "public_repos", "created_at", "updated_at" ], "additionalProperties": false } ] } ``` </details> ## Methodology A set of some random words were started as a seed. Then a search was performed against the GitHub API using one of those words, a random acceptable license, a random sort order and direction. Then from each repository, one of the most recent 1,000 commits was randomly selected. And words over seven characters were added back to the random word set. This process repeated one million times. The acceptable licenses were chosen to include every license which allows that commit to be included in this dataset without me including one million separate licenses notices. Those are (identified by SPDX identifier): - `apache-2.0` - `bsl-1.0` - `bsd-2-clause` - `bsd-3-clause` - `0bsd` - `cc0-1.0` - `wtfpl` - `isc` - `mit` - `postgresql` - `unlicense` - `zlib` These licenses are identified in the dataset so you are easily able to filter if needed. ## Ethical considerations Commits on GitHub include author contact information, which is publicly available to anybody even without a GitHub account. This dataset includes that contact information in each commit. This dataset creates a risk for spam unwanted outreach. I considered whether it was appropriate to attempt redacting such information from the commit patches and contents, and ultimately decided against this. Taking my own experience in receiving mail I have found that the vast majority of outreach to my GitHub email address (filtered through Gmail) is seemingly legitimate research requests and genuine interview requests, compared to zero apparent evil phishing or selling attempts. 
And practically, I understand it would be more effective and trivial for an evil spammer to look up individual authors' contact information or to harvest it for a specific topic using the GitHub API than downloading and parsing this terabyte-scale dataset. So overall, the risk was deemed acceptable. ## Caveats and recommendations The database is not normalized. So you may find that author information in one commit does not match that same author's information in another commit. This is simply an artifact of the data collection process having taken place over a period of weeks rather than a single snapshot. ## Acknowledgements Thank you to these people for sharing their GitHub API keys to aid in this research: [Dan Tedesco](https://twitter.com/dtedesco1), [@037](https://twitter.com/037), [Sandeep Pani](https://twitter.com/skcd42), [Axelle Moortgat](https://twitter.com/axellemoortgat) ## Citation Please cite this dataset as: ```latex @misc{entriken_2023_onemillioncommits, author = {William Entriken}, title = {``One million commits''}, year = 2023, howpublished = {Hugging Face datasets}, url = {https://huggingface.co/datasets/fulldecent/one-million-commits} } ```
fulldecent/one-million-commits
[ "task_categories:text-classification", "task_categories:zero-shot-classification", "task_categories:summarization", "task_categories:text2text-generation", "task_categories:sentence-similarity", "size_categories:1M<n<10M", "license:mit", "chemistry", "biology", "finance", "legal", "music", "art", "code", "climate", "medical", "region:us" ]
2023-09-26T00:11:23+00:00
{"license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["text-classification", "zero-shot-classification", "summarization", "text2text-generation", "sentence-similarity"], "pretty_name": "One million commits", "tags": ["chemistry", "biology", "finance", "legal", "music", "art", "code", "climate", "medical"]}
2023-11-30T19:26:53+00:00
[]
[]
TAGS #task_categories-text-classification #task_categories-zero-shot-classification #task_categories-summarization #task_categories-text2text-generation #task_categories-sentence-similarity #size_categories-1M<n<10M #license-mit #chemistry #biology #finance #legal #music #art #code #climate #medical #region-us
# One million commits A large variety of git commits pulled from across GitHub. Created by William Entriken, released 2023-09-26, version 1. This composition is licensed under the MIT license. ## Intended use This dataset could be used to train a model concerned with programming tasks: 1. Summarize some programming work 1. Perform work given a description of the work to do 1. Learn-by-example the syntax for all active programming languages and structured data formats This dataset could be used to train a model concerned with scoring and trend analysis: 1. Identify trends in programming across years 1. Compare popularity and impact of programming topics/languages (using stars, quality as a signal) 1. Score programming work as good or bad (using authors' ratings as a signal) 1. Identify popular project file layouts for each programming language Because this dataset comprises about a half terabyte of text, it may be useful for training a general text processing model. Or at least a niche of such a model. ## Factors An attempt was made to provide a large variety of repositories. To examine the potential biases, here is a comparison of our dataset to sampling at random from the list of all commits across GitHub. 1. This dataset should overrepresent repositories with 7+ character words. 1. This dataset should overrepresent repositories with few commits (because only one commit is included per repository). 1. This dataset should overrepresent repositories from authors with many repositories. ## Metrics * Commits: 1,000,000 * Distinct repositories: 1,000,000 * Distinct authors: xxx * Bytes: ## Data format The files are distributed as SQLite databases with the schema: Each repository is included at most once, which is why 'repo_full_name' is a unique key. The 'commit_patch' is the full '.patch' file for the commit. Other columns 'repo', 'commit' and 'author' are JSON blobs following the GitHub API schema: <details> <summary>repository</summary> </details> <details> <summary>commit</summary> </details> <details> <summary>user</summary> </details> ## Methodology A set of random words was used as a seed. Then a search was performed against the GitHub API using one of those words, a random acceptable license, a random sort order and direction. Then from each repository, one of the most recent 1,000 commits was randomly selected. And words over seven characters were added back to the random word set. This process repeated one million times. The acceptable licenses were chosen to include every license which allows that commit to be included in this dataset without me including one million separate license notices. Those are (identified by SPDX identifier): - 'apache-2.0' - 'bsl-1.0' - 'bsd-2-clause' - 'bsd-3-clause' - '0bsd' - 'cc0-1.0' - 'wtfpl' - 'isc' - 'mit' - 'postgresql' - 'unlicense' - 'zlib' These licenses are identified in the dataset so you are easily able to filter if needed. ## Ethical considerations Commits on GitHub include author contact information, which is publicly available to anybody even without a GitHub account. This dataset includes that contact information in each commit. This dataset creates a risk of spam and unwanted outreach. I considered whether it was appropriate to attempt redacting such information from the commit patches and contents, and ultimately decided against this.
Going by my own experience receiving mail, I have found that the vast majority of outreach to my GitHub email address (filtered through Gmail) is seemingly legitimate research requests and genuine interview requests, compared to zero apparent evil phishing or selling attempts. And practically, I understand it would be more effective and trivial for an evil spammer to look up individual authors' contact information or to harvest it for a specific topic using the GitHub API than downloading and parsing this terabyte-scale dataset. So overall, the risk was deemed acceptable. ## Caveats and recommendations The database is not normalized. So you may find that author information in one commit does not match that same author's information in another commit. This is simply an artifact of the data collection process having taken place over a period of weeks rather than a single snapshot. ## Acknowledgements Thank you to these people for sharing their GitHub API keys to aid in this research: Dan Tedesco, @037, Sandeep Pani, Axelle Moortgat. Please cite this dataset as:
[ "# One million commits\n\nA large variety of git commits pulled from across GitHub.\n\nCreated by William Entriken, released 2023-09-26, version 1.\n\nThis composition is licensed under the MIT license.", "## Intended use\n\nThis dataset could be used to train a model concerned with programming tasks:\n\n1. Summarize some programming work\n1. Perform work given a description of the work to do\n1. Learn-by-example the syntax for all active programming languages and structured data formats\n\nThis dataset could be used to train a model concerned with scoring and trend analysis:\n\n1. Identify trends in programming across years\n1. Compare popularity and impact of programming topics/languages (using stars, quality as a signal)\n1. Score programming work as good or bad (using authors' ratings as a signal)\n1. Identify popular project file layouts for each programming language\n\nBecause this dataset comprises about a half terabyte of text, it may be useful for a training a general text processing model. Or at least a niche of such a model.", "## Factors\n\nAn attempt was made at providing a large variety of repositories. To example the potential biases, here is a comparison of our dataset to sampling at random from the list of all commits across GitHub.\n\n1. This dataset should overrepresent repositories with 7+ character words.\n1. This dataset should overrepresent repositories with few commits (because only one commit is included per repository).\n1. This dataset should overrepresent repositories from authors with many repositories.", "## Metrics\n\n* Commits: 1,000,000\n* Distinct repositories: 1,000,000\n* Distinct authors: xxx\n* Bytes:", "## Data format\n\nThe files are distributed as SQLite databases with the schema:\n\n\n\nEach repository is included at most once, which is why 'repo_full_name' is a unique key. The 'commit_patch' is the full '.patch' file for the commit. Other columns 'repo', 'commit' and 'author' are JSON blobs following the GitHub API schema:\n\n<details>\n\n<summary>repository</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>commit</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>user</summary>\n\n\n\n</details>", "## Methodology\n\nA set of some random words were started as a seed. Then a search was performed against the GitHub API using one of those words, a random acceptable license, a random sort order and direction. Then from each repository, one of the most recent 1,000 commits was randomly selected. And words over seven characters were added back to the random word set. This process repeated one million times.\n\nThe acceptable licenses were chosen to include every license which allows that commit to be included in this dataset without me including one million separate licenses notices. Those are (identified by SPDX identifier):\n\n- 'apache-2.0'\n- 'bsl-1.0'\n- 'bsd-2-clause'\n- 'bsd-3-clause'\n- '0bsd'\n- 'cc0-1.0'\n- 'wtfpl'\n- 'isc'\n- 'mit'\n- 'postgresql'\n- 'unlicense'\n- 'zlib'\n\nThese licenses are identified in the dataset so you are easily able to filter if needed.", "## Ethical considerations\n\nCommits on GitHub include author contact information, which is publicly available to anybody even without a GitHub account. This dataset includes that contact information in each commit. 
This dataset creates a risk for spam unwanted outreach.\n\nI considered whether it was appropriate to attempt redacting such information from the commit patches and contents, and ultimately decided against this.\n\nTaking my own experience in receiving mail I have found that the vast majority of outreach to my GitHub email address (filtered through Gmail) is seemingly legitimate research requests and genuine interview requests, compared to zero apparent evil phishing or selling attempts. And practically, I understand it would be more effective and trivial for an evil spammer to look up individual author's contact information or to harvest it for a specific topic using the GitHub API than downloading and parsing this terabyte-scale dataset.\n\nSo overall, the risk was deemed acceptable.", "## Caveats and recommendations\n\nThe database is not normalized. So you may find that author information in one commit does not match that same author's information in another commit. This is simply an artifact of the data collection process having taken course over a period of weeks rather than a single snapshot.", "## Acknowledgements\n\nThank you to these people for sharing their GitHub API keys to aid in this research: Dan Tedesco, @037, Sandeep Pani, Axelle Moortgat\n\nPlease cite this dataset as:" ]
[ "TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #task_categories-summarization #task_categories-text2text-generation #task_categories-sentence-similarity #size_categories-1M<n<10M #license-mit #chemistry #biology #finance #legal #music #art #code #climate #medical #region-us \n", "# One million commits\n\nA large variety of git commits pulled from across GitHub.\n\nCreated by William Entriken, released 2023-09-26, version 1.\n\nThis composition is licensed under the MIT license.", "## Intended use\n\nThis dataset could be used to train a model concerned with programming tasks:\n\n1. Summarize some programming work\n1. Perform work given a description of the work to do\n1. Learn-by-example the syntax for all active programming languages and structured data formats\n\nThis dataset could be used to train a model concerned with scoring and trend analysis:\n\n1. Identify trends in programming across years\n1. Compare popularity and impact of programming topics/languages (using stars, quality as a signal)\n1. Score programming work as good or bad (using authors' ratings as a signal)\n1. Identify popular project file layouts for each programming language\n\nBecause this dataset comprises about a half terabyte of text, it may be useful for a training a general text processing model. Or at least a niche of such a model.", "## Factors\n\nAn attempt was made at providing a large variety of repositories. To example the potential biases, here is a comparison of our dataset to sampling at random from the list of all commits across GitHub.\n\n1. This dataset should overrepresent repositories with 7+ character words.\n1. This dataset should overrepresent repositories with few commits (because only one commit is included per repository).\n1. This dataset should overrepresent repositories from authors with many repositories.", "## Metrics\n\n* Commits: 1,000,000\n* Distinct repositories: 1,000,000\n* Distinct authors: xxx\n* Bytes:", "## Data format\n\nThe files are distributed as SQLite databases with the schema:\n\n\n\nEach repository is included at most once, which is why 'repo_full_name' is a unique key. The 'commit_patch' is the full '.patch' file for the commit. Other columns 'repo', 'commit' and 'author' are JSON blobs following the GitHub API schema:\n\n<details>\n\n<summary>repository</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>commit</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>user</summary>\n\n\n\n</details>", "## Methodology\n\nA set of some random words were started as a seed. Then a search was performed against the GitHub API using one of those words, a random acceptable license, a random sort order and direction. Then from each repository, one of the most recent 1,000 commits was randomly selected. And words over seven characters were added back to the random word set. This process repeated one million times.\n\nThe acceptable licenses were chosen to include every license which allows that commit to be included in this dataset without me including one million separate licenses notices. Those are (identified by SPDX identifier):\n\n- 'apache-2.0'\n- 'bsl-1.0'\n- 'bsd-2-clause'\n- 'bsd-3-clause'\n- '0bsd'\n- 'cc0-1.0'\n- 'wtfpl'\n- 'isc'\n- 'mit'\n- 'postgresql'\n- 'unlicense'\n- 'zlib'\n\nThese licenses are identified in the dataset so you are easily able to filter if needed.", "## Ethical considerations\n\nCommits on GitHub include author contact information, which is publicly available to anybody even without a GitHub account. 
This dataset includes that contact information in each commit. This dataset creates a risk for spam unwanted outreach.\n\nI considered whether it was appropriate to attempt redacting such information from the commit patches and contents, and ultimately decided against this.\n\nTaking my own experience in receiving mail I have found that the vast majority of outreach to my GitHub email address (filtered through Gmail) is seemingly legitimate research requests and genuine interview requests, compared to zero apparent evil phishing or selling attempts. And practically, I understand it would be more effective and trivial for an evil spammer to look up individual author's contact information or to harvest it for a specific topic using the GitHub API than downloading and parsing this terabyte-scale dataset.\n\nSo overall, the risk was deemed acceptable.", "## Caveats and recommendations\n\nThe database is not normalized. So you may find that author information in one commit does not match that same author's information in another commit. This is simply an artifact of the data collection process having taken course over a period of weeks rather than a single snapshot.", "## Acknowledgements\n\nThank you to these people for sharing their GitHub API keys to aid in this research: Dan Tedesco, @037, Sandeep Pani, Axelle Moortgat\n\nPlease cite this dataset as:" ]
[ 108, 46, 192, 120, 33, 154, 231, 221, 64, 49 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #task_categories-summarization #task_categories-text2text-generation #task_categories-sentence-similarity #size_categories-1M<n<10M #license-mit #chemistry #biology #finance #legal #music #art #code #climate #medical #region-us \n# One million commits\n\nA large variety of git commits pulled from across GitHub.\n\nCreated by William Entriken, released 2023-09-26, version 1.\n\nThis composition is licensed under the MIT license.## Intended use\n\nThis dataset could be used to train a model concerned with programming tasks:\n\n1. Summarize some programming work\n1. Perform work given a description of the work to do\n1. Learn-by-example the syntax for all active programming languages and structured data formats\n\nThis dataset could be used to train a model concerned with scoring and trend analysis:\n\n1. Identify trends in programming across years\n1. Compare popularity and impact of programming topics/languages (using stars, quality as a signal)\n1. Score programming work as good or bad (using authors' ratings as a signal)\n1. Identify popular project file layouts for each programming language\n\nBecause this dataset comprises about a half terabyte of text, it may be useful for a training a general text processing model. Or at least a niche of such a model.## Factors\n\nAn attempt was made at providing a large variety of repositories. To example the potential biases, here is a comparison of our dataset to sampling at random from the list of all commits across GitHub.\n\n1. This dataset should overrepresent repositories with 7+ character words.\n1. This dataset should overrepresent repositories with few commits (because only one commit is included per repository).\n1. This dataset should overrepresent repositories from authors with many repositories.## Metrics\n\n* Commits: 1,000,000\n* Distinct repositories: 1,000,000\n* Distinct authors: xxx\n* Bytes:", "passage: ## Data format\n\nThe files are distributed as SQLite databases with the schema:\n\n\n\nEach repository is included at most once, which is why 'repo_full_name' is a unique key. The 'commit_patch' is the full '.patch' file for the commit. Other columns 'repo', 'commit' and 'author' are JSON blobs following the GitHub API schema:\n\n<details>\n\n<summary>repository</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>commit</summary>\n\n\n\n</details>\n\n<details>\n\n<summary>user</summary>\n\n\n\n</details>## Methodology\n\nA set of some random words were started as a seed. Then a search was performed against the GitHub API using one of those words, a random acceptable license, a random sort order and direction. Then from each repository, one of the most recent 1,000 commits was randomly selected. And words over seven characters were added back to the random word set. This process repeated one million times.\n\nThe acceptable licenses were chosen to include every license which allows that commit to be included in this dataset without me including one million separate licenses notices. Those are (identified by SPDX identifier):\n\n- 'apache-2.0'\n- 'bsl-1.0'\n- 'bsd-2-clause'\n- 'bsd-3-clause'\n- '0bsd'\n- 'cc0-1.0'\n- 'wtfpl'\n- 'isc'\n- 'mit'\n- 'postgresql'\n- 'unlicense'\n- 'zlib'\n\nThese licenses are identified in the dataset so you are easily able to filter if needed.## Ethical considerations\n\nCommits on GitHub include author contact information, which is publicly available to anybody even without a GitHub account. 
This dataset includes that contact information in each commit. This dataset creates a risk for spam unwanted outreach.\n\nI considered whether it was appropriate to attempt redacting such information from the commit patches and contents, and ultimately decided against this.\n\nTaking my own experience in receiving mail I have found that the vast majority of outreach to my GitHub email address (filtered through Gmail) is seemingly legitimate research requests and genuine interview requests, compared to zero apparent evil phishing or selling attempts. And practically, I understand it would be more effective and trivial for an evil spammer to look up individual author's contact information or to harvest it for a specific topic using the GitHub API than downloading and parsing this terabyte-scale dataset.\n\nSo overall, the risk was deemed acceptable." ]
a3c2d833e8cc606366c062d7ac5b48f0b4e2e523
# Dataset Card for "d39ae2e0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/d39ae2e0
[ "region:us" ]
2023-09-26T00:14:34+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 169, "num_examples": 10}], "download_size": 1317, "dataset_size": 169}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T00:14:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "d39ae2e0" More Information needed
[ "# Dataset Card for \"d39ae2e0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"d39ae2e0\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"d39ae2e0\"\n\nMore Information needed" ]
9589f254f17bfc66fe0b22842790a4764d0764a9
Language Models for Taiwanese Culture training dataset. ## Citation Please cite the repo if you use the data or code in this repo. ``` @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.nlp4convai-1.5", pages = "47--58" } @misc{taiwanllama, author={Lin, Yen-Ting and Chen, Yun-Nung}, title={Taiwanese-Aligned Language Models based on Meta-Llama2}, year={2023}, url={https://github.com/adamlin120/Taiwan-LLaMa}, note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa}, } ```
DavidLanz/yentinglin-traditional_mandarin_instructions
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "size_categories:10K<n<100K", "language:zh", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2023-09-26T00:20:20+00:00
{"language": ["zh", "en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization", "text-generation"]}
2023-09-26T00:37:42+00:00
[]
[ "zh", "en" ]
TAGS #task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us
Language Models for Taiwanese Culture training dataset. Please cite the repo if you use the data or code in this repo.
[]
[ "TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us \n" ]
[ 71 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us \n" ]
1fe0d1500747fb0e9a62c029e2011d444e7a44ce
* Google/MusicCapsのデータをスペクトログラムにしたもの。 * 内容はmickylan2367/ColorSpectrogramと同じ(パケットファイルにしただけ) * 基本的に、このリポジトリはHuggingfaceの実験場。
mickylan2367/ParquetPractice
[ "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "music", "region:us" ]
2023-09-26T00:25:09+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "tags": ["music"]}
2023-10-07T09:54:50+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #music #region-us
* Google/MusicCapsのデータをスペクトログラムにしたもの。 * 内容はmickylan2367/ColorSpectrogramと同じ(パケットファイルにしただけ) * 基本的に、このリポジトリはHuggingfaceの実験場。
[]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-cc-by-4.0 #music #region-us \n" ]
[ 33 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-cc-by-4.0 #music #region-us \n" ]
2096ff6f4aedc50adfe706a5da8aadcf15cd5019
# Dataset Card for "6400c282" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/6400c282
[ "region:us" ]
2023-09-26T00:34:37+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 165, "num_examples": 10}], "download_size": 1313, "dataset_size": 165}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T00:34:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "6400c282" More Information needed
[ "# Dataset Card for \"6400c282\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"6400c282\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"6400c282\"\n\nMore Information needed" ]
849fdc847e9816543f4698f4d2a230d65c74988e
## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon! - This leverages Amplify-Instruct method to extend thousands of scraped Less-Wrong posts into advanced in-depth multi-turn conversations. - Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon) - Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant. - Each conversation is optimized to amplify the raw knowledge retreival of the model and delve deep into obscure and advanced topics. ## Purpose? - This dataset is not intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" ## Credits During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation: ``` @article{daniele2023amplify-instruct, title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.}, author={Daniele, Luigi and Suphavadeeprasit}, journal={arXiv preprint arXiv:(comming soon)}, year={2023} } ```
LDJnr/LessWrong-Amplify-Instruct
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "region:us" ]
2023-09-26T00:42:29+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational", "question-answering", "text-generation"], "pretty_name": "LessWrong-Amplify-Instruct", "tags": ["Physics", "Biology", "Math", "Chemistry", "Culture", "Logic"]}
2023-11-21T17:55:39+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us
## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon! - This leverages Amplify-Instruct method to extend thousands of scraped Less-Wrong posts into advanced in-depth multi-turn conversations. - Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon) - Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant. - Each conversation is optimized to amplify the raw knowledge retreival of the model and delve deep into obscure and advanced topics. ## Purpose? - This dataset is not intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" ## Credits During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation:
[ "## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon!\n\n - This leverages Amplify-Instruct method to extend thousands of scraped Less-Wrong posts into advanced in-depth multi-turn conversations.\n\n - Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon)\n\n - Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant.\n\n - Each conversation is optimized to amplify the raw knowledge retreival of the model and delve deep into obscure and advanced topics.", "## Purpose?\n\n - This dataset is not intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"", "## Credits\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\n\nLuckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us \n", "## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon!\n\n - This leverages Amplify-Instruct method to extend thousands of scraped Less-Wrong posts into advanced in-depth multi-turn conversations.\n\n - Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon)\n\n - Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant.\n\n - Each conversation is optimized to amplify the raw knowledge retreival of the model and delve deep into obscure and advanced topics.", "## Purpose?\n\n - This dataset is not intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"", "## Credits\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\n\nLuckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ 81, 178, 69, 45, 85, 127 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us \n## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon!\n\n - This leverages Amplify-Instruct method to extend thousands of scraped Less-Wrong posts into advanced in-depth multi-turn conversations.\n\n - Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon)\n\n - Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant.\n\n - Each conversation is optimized to amplify the raw knowledge retreival of the model and delve deep into obscure and advanced topics.## Purpose?\n\n - This dataset is not intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!## Quality filtering and cleaning.\n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"## Credits\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\n\nLuckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big thank you to J-Supha specifically for making these types of significant contributions." ]
4dbe16db47bcbde2816124934a52729ad874963f
# Dataset of matsubara_kanon/松原花音/마츠바라카논 (BanG Dream!) This is the dataset of matsubara_kanon/松原花音/마츠바라카논 (BanG Dream!), containing 500 images and their tags. The core tags of this character are `blue_hair, purple_eyes, bangs, long_hair, one_side_up, ribbon, hair_ribbon, hair_ornament, light_blue_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 683.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_kanon_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 399.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_kanon_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1237 | 872.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_kanon_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 610.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_kanon_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1237 | 1.22 GiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_kanon_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/matsubara_kanon_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | epaulettes, 1girl, band_uniform, shako_cap, solo, looking_at_viewer, open_mouth, green_ribbon, holding, sleeveless, wrist_cuffs, drumsticks, blush, white_skirt, :d, instrument | | 1 | 44 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, solo, blush, bare_shoulders, short_sleeves, smile, hair_flower, frilled_sleeves, simple_background, white_background, blue_shirt, dress, upper_body, collarbone, green_ribbon, open_mouth, frilled_shirt, off_shoulder | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, looking_at_viewer, solo, bow, short_sleeves, striped_ribbon, twintails, alternate_hairstyle, wrist_cuffs, earrings, red_ribbon, holding_gift, valentine, frilled_dress, center_frills, frilled_sleeves, gift_box, hair_flower, heart-shaped_box, heart_hair_ornament, white_background, :d, blue_ribbon, open_mouth, thighhighs | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, solo, earrings, looking_at_viewer, double_bun, blush, open_mouth, :d, dress, braid, jellyfish, long_sleeves, see-through_sleeves, shirt, hand_on_own_chest, upper_body | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, bow, detached_sleeves, looking_at_viewer, smile, solo, tiara, wrist_cuffs, blush, double_bun, frills, underwater, air_bubble, earrings, bun_cover, jellyfish, midriff, sidelocks, skirt, aqua_hair, bloomers, cleavage, gem, medium_breasts, open_mouth, pink_eyes, pink_ribbon | | 5 | 35 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | hanasakigawa_school_uniform, long_sleeves, 1girl, sailor_dress, blush, solo, brown_dress, red_ribbon, looking_at_viewer, neck_ribbon, smile, green_ribbon, open_mouth, double-breasted, upper_body, white_background, flower, simple_background, white_sailor_collar | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, bow, hair_flower, hairband, solo, white_flower, alternate_hairstyle, blush, floral_print, 
hair_over_shoulder, long_sleeves, ribbon_braid, single_braid, strawberry_hair_ornament, looking_at_viewer, print_dress, striped_ribbon, upper_body, white_gloves, :d, blue_dress, flower_earrings, open_mouth, pink_ribbon, upper_teeth_only, white_background, frilled_dress, frilled_shirt_collar, frilled_sleeves, red_ribbon | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blush, hair_flower, long_sleeves, looking_at_viewer, solo, wide_sleeves, green_kimono, white_background, frilled_sleeves, maid_apron, white_flower, cosplay, frilled_apron, open_mouth, pom_pom_(clothes), wa_maid, white_apron, :d, green_bow, green_ribbon, hair_between_eyes, simple_background, striped_bow, upper_body | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, frills, looking_at_viewer, solo, blush, hat_bow, hat_ribbon, rabbit_ears, top_hat, chain, open_mouth, skirt, sleeveless, upper_body, :d, black_headwear, black_ribbon, blue_bow, heart_earrings, neck_ribbon, striped, watch, white_background, wrist_cuffs | | 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 2girls, blush, open_mouth, yuri, white_background, closed_eyes, blonde_hair | | 10 | 13 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, day, outdoors, solo, cloud, looking_at_viewer, bikini, blue_sky, blush, collarbone, open_mouth, cleavage, navel, :d, frills, medium_breasts, ocean, ball, holding, jewelry, standing | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, blush, hetero, solo_focus, 1boy, collarbone, large_breasts, looking_at_viewer, nipples, open_mouth, penis, clothing_aside, cum_in_pussy, sex, spread_legs, sweat, vaginal, bar_censor, bikini, girl_on_top, heavy_breathing, medium_breasts, mosaic_censoring, navel, sidelocks, straddling, tears | | 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | 1girl, hetero, nipples, open_mouth, penis, sex, solo_focus, heart-shaped_pupils, medium_breasts, navel, pussy, spread_legs, 1boy, completely_nude, cum, mosaic_censoring, nose_blush, sweat, testicles, vaginal | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | epaulettes | 1girl | band_uniform | shako_cap | solo | looking_at_viewer | open_mouth | green_ribbon | holding | sleeveless | wrist_cuffs | drumsticks | blush | white_skirt | :d | instrument | bare_shoulders | short_sleeves | smile | hair_flower | frilled_sleeves | simple_background | white_background | blue_shirt | dress | upper_body | collarbone | frilled_shirt | off_shoulder | bow | striped_ribbon | twintails | alternate_hairstyle | earrings | red_ribbon | holding_gift | valentine | frilled_dress | center_frills | gift_box | heart-shaped_box | heart_hair_ornament | blue_ribbon | thighhighs | double_bun | braid | jellyfish | long_sleeves | see-through_sleeves | shirt 
| hand_on_own_chest | detached_sleeves | tiara | frills | underwater | air_bubble | bun_cover | midriff | sidelocks | skirt | aqua_hair | bloomers | cleavage | gem | medium_breasts | pink_eyes | pink_ribbon | hanasakigawa_school_uniform | sailor_dress | brown_dress | neck_ribbon | double-breasted | flower | white_sailor_collar | hairband | white_flower | floral_print | hair_over_shoulder | ribbon_braid | single_braid | strawberry_hair_ornament | print_dress | white_gloves | blue_dress | flower_earrings | upper_teeth_only | frilled_shirt_collar | wide_sleeves | green_kimono | maid_apron | cosplay | frilled_apron | pom_pom_(clothes) | wa_maid | white_apron | green_bow | hair_between_eyes | striped_bow | hat_bow | hat_ribbon | rabbit_ears | top_hat | chain | black_headwear | black_ribbon | blue_bow | heart_earrings | striped | watch | 2girls | yuri | closed_eyes | blonde_hair | day | outdoors | cloud | bikini | blue_sky | navel | ocean | ball | jewelry | standing | hetero | solo_focus | 1boy | large_breasts | nipples | penis | clothing_aside | cum_in_pussy | sex | spread_legs | sweat | vaginal | bar_censor | girl_on_top | heavy_breathing | mosaic_censoring | straddling | tears | heart-shaped_pupils | pussy | completely_nude | cum | nose_blush | testicles | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------|:--------|:---------------|:------------|:-------|:--------------------|:-------------|:---------------|:----------|:-------------|:--------------|:-------------|:--------|:--------------|:-----|:-------------|:-----------------|:----------------|:--------|:--------------|:------------------|:--------------------|:-------------------|:-------------|:--------|:-------------|:-------------|:----------------|:---------------|:------|:-----------------|:------------|:----------------------|:-----------|:-------------|:---------------|:------------|:----------------|:----------------|:-----------|:-------------------|:----------------------|:--------------|:-------------|:-------------|:--------|:------------|:---------------|:----------------------|:--------|:--------------------|:-------------------|:--------|:---------|:-------------|:-------------|:------------|:----------|:------------|:--------|:------------|:-----------|:-----------|:------|:-----------------|:------------|:--------------|:------------------------------|:---------------|:--------------|:--------------|:------------------|:---------|:----------------------|:-----------|:---------------|:---------------|:---------------------|:---------------|:---------------|:---------------------------|:--------------|:---------------|:-------------|:------------------|:-------------------|:-----------------------|:---------------|:---------------|:-------------|:----------|:----------------|:--------------------|:----------|:--------------|:------------|:--------------------|:--------------|:----------|:-------------|:--------------|:----------|:--------|:-----------------|:---------------|:-----------|:-----------------|:----------|:--------|:---------|:-------|:--------------|:--------------|:------|:-----------|:--------|:---------|:-----------|:--------|:--------|:-------|:----------|:-----------|:---------|:-------------|:-------|:----------------|:----------|:--------|:-----------------|:---------------|:------|:--------------|:--------|:----------|:-------------|:--------
------|:------------------|:-------------------|:-------------|:--------|:----------------------|:--------|:------------------|:------|:-------------|:------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 44 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | | | X | X | X | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | | | X | X | X | | | | X | | X | | X | | | X | | X | X | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 11 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | | | X | X | X | | | | | | X | | X | | | | | | | | | | X | X | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | X | | | X | X | X | | | | X | | X | | | | | | X | | | | | | | | | | | X | | | | X | | | | | | | | | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 35 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | X | | | X | X | X | X | | | | | X | | | | | | X | | | X | X | | | X | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | X | | | X | X | X | | | | | | X | | X | | | | | X | X | | X | | | X | | | | X | X | | X | | X | | | X | | | | | 
| | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | | X | | | X | X | X | X | | | | | X | | X | | | | | X | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | | X | | | X | X | X | | | X | X | | X | | X | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | | | | | | | X | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 10 | 13 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | | X | | | X | X | X | | X | | | | X | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | | X | | | | X | X | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | | X | X | | | X | X | X | X | | | | X | | | X | X | X | X | X | X |
CyberHarem/matsubara_kanon_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T00:44:33+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:23:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of matsubara\_kanon/松原花音/마츠바라카논 (BanG Dream!) ===================================================== This is the dataset of matsubara\_kanon/松原花音/마츠바라카논 (BanG Dream!), containing 500 images and their tags. The core tags of this character are 'blue\_hair, purple\_eyes, bangs, long\_hair, one\_side\_up, ribbon, hair\_ribbon, hair\_ornament, light\_blue\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d1169e7c4ff2875016a01429bbb95b4f8b5f60b6
Language Models for Taiwanese Culture training dataset. ## Citation Please cite the repo if you use the data or code in this repo. ``` @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.nlp4convai-1.5", pages = "47--58" } @misc{taiwanllama, author={Lin, Yen-Ting and Chen, Yun-Nung}, title={Taiwanese-Aligned Language Models based on Meta-Llama2}, year={2023}, url={https://github.com/adamlin120/Taiwan-LLaMa}, note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa}, } ```
DavidLanz/yentinglin-zh_TW_c4
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "size_categories:10K<n<100K", "language:zh", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2023-09-26T00:45:31+00:00
{"language": ["zh", "en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization", "text-generation"]}
2023-09-26T00:58:39+00:00
[]
[ "zh", "en" ]
TAGS #task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us
Language Models for Taiwanese Culture training dataset. Please cite the repo if you use the data or code in this repo.
[]
[ "TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us \n" ]
[ 71 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-3.0 #region-us \n" ]
f807c01d064f8f1d9d903049f88806ecc6c18909
## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon! This dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4. Steps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4. - Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 800 tokens. ## Purpose? - This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021", not just in english, but other languages too! ## Credits During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big credit to J-Supha within NousResearch specifically for making these types of significant contributions. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation: ``` @article{daniele2023amplify-instruct, title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.}, author={Daniele, Luigi and Suphavadeeprasit}, journal={arXiv preprint arXiv:(comming soon)}, year={2023} } ```
LDJnr/Pure-Dove
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay", "region:us" ]
2023-09-26T01:06:24+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "question-answering", "text-generation"], "pretty_name": "Pure-Dove", "tags": ["Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "Roleplay"]}
2023-11-21T17:55:19+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us
## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon! This dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4. Steps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4. - Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans. - Average context length per conversation is over 800 tokens. ## Purpose? - This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages. - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021", not just in english, but other languages too! ## Credits During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out. Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big credit to J-Supha within NousResearch specifically for making these types of significant contributions. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation:
[ "## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!\n\nThis dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4.\n\nSteps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4.\n\n - Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 800 tokens.", "## Purpose?\n\n - This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages. \n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\", not just in english, but other languages too!", "## Credits\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\n\nLuckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big credit to J-Supha within NousResearch specifically for making these types of significant contributions.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n", "## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!\n\nThis dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4.\n\nSteps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4.\n\n - Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 800 tokens.", "## Purpose?\n\n - This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages. \n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\", not just in english, but other languages too!", "## Credits\n\nDuring the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.\n\nLuckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big credit to J-Supha within NousResearch specifically for making these types of significant contributions.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ 87, 175, 70, 110, 88, 127 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #Roleplay #region-us \n## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!\n\nThis dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4.\n\nSteps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4.\n\n - Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.\n\n - Average context length per conversation is over 800 tokens.## Purpose?\n\n - This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!## Quality filtering and cleaning.\n\n - The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages. \n\n - Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\", not just in english, but other languages too!" ]
df1256486e65aaffeb37cabae1b358a3ed885c33
# Japanese Question-Answering Corpus (JQAC) This work was supported by Works Appilcations Co., Ltd. This containt is managed by Hiroki Tanioka (taniokah[at]gmail.com), since 2018. [https://taniokah.github.io/jqac/](https://taniokah.github.io/jqac/) --- license: cc-by-sa-4.0 ---
kurusunagisa/jqac
[ "region:us" ]
2023-09-26T01:16:55+00:00
{}
2023-09-26T01:20:25+00:00
[]
[]
TAGS #region-us
# Japanese Question-Answering Corpus (JQAC) This work was supported by Works Appilcations Co., Ltd. This containt is managed by Hiroki Tanioka (taniokah[at]URL), since 2018. URL --- license: cc-by-sa-4.0 ---
[ "# Japanese Question-Answering Corpus (JQAC) \n\nThis work was supported by Works Appilcations Co., Ltd.\n\nThis containt is managed by Hiroki Tanioka (taniokah[at]URL), since 2018.\n\nURL\n\n---\nlicense: cc-by-sa-4.0\n---" ]
[ "TAGS\n#region-us \n", "# Japanese Question-Answering Corpus (JQAC) \n\nThis work was supported by Works Appilcations Co., Ltd.\n\nThis containt is managed by Hiroki Tanioka (taniokah[at]URL), since 2018.\n\nURL\n\n---\nlicense: cc-by-sa-4.0\n---" ]
[ 6, 64 ]
[ "passage: TAGS\n#region-us \n# Japanese Question-Answering Corpus (JQAC) \n\nThis work was supported by Works Appilcations Co., Ltd.\n\nThis containt is managed by Hiroki Tanioka (taniokah[at]URL), since 2018.\n\nURL\n\n---\nlicense: cc-by-sa-4.0\n---" ]
06ed00e273a1a0d8975de2ae2bd52875ee4e7e0a
## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon! - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets. - These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject. - Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field) ## Purpose? - This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" - This was done for the initial curation due to the responses being originally created by GPT-4. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation: ``` @article{daniele2023amplify-instruct, title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.}, author={Daniele, Luigi and Suphavadeeprasit}, journal={arXiv preprint arXiv:(comming soon)}, year={2023} } ```
LDJnr/Verified-Camel
[ "task_categories:conversational", "task_categories:question-answering", "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "Physics", "Biology", "Math", "Chemistry", "Culture", "Logic", "region:us" ]
2023-09-26T01:20:36+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["conversational", "question-answering", "text-generation"], "pretty_name": "Verified-Camel", "tags": ["Physics", "Biology", "Math", "Chemistry", "Culture", "Logic"]}
2023-11-21T17:55:57+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us
## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon! - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets. - These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject. - Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field) ## Purpose? - This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021" - This was done for the initial curation due to the responses being originally created by GPT-4. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation:
[ "## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon!\n\n - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets. \n\n - These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject.\n\n - Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field)", "## Purpose?\n\n - This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"\n\n - This was done for the initial curation due to the responses being originally created by GPT-4.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us \n", "## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon!\n\n - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets. \n\n - These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject.\n\n - Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field)", "## Purpose?\n\n - This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!", "## Quality filtering and cleaning.\n\n - Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"\n\n - This was done for the initial curation due to the responses being originally created by GPT-4.", "## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
[ 81, 126, 77, 72, 129 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_categories-text-generation #size_categories-n<1K #language-English #license-apache-2.0 #Physics #Biology #Math #Chemistry #Culture #Logic #region-us \n## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon!\n\n - Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets. \n\n - These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject.\n\n - Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field)## Purpose?\n\n - This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!## Quality filtering and cleaning.\n\n - Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as \"As an AI language model\" and \"September 2021\"\n\n - This was done for the initial curation due to the responses being originally created by GPT-4.## Future Plans & How you can help!\n\nThis is a relatively early build amongst the grand plans for the future of what I plan to work on! \n\nIn the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.\n\nIf you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!\n\nCitation:" ]
22da98e54a4292ab5292d80dd101988e9cdd154f
# Dataset Card for "instruct_v1_5k_and_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v1_5k_and_lima
[ "region:us" ]
2023-09-26T01:22:09+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6691318, "num_examples": 6000}], "download_size": 3598588, "dataset_size": 6691318}}
2023-09-26T01:22:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "instruct_v1_5k_and_lima" More Information needed
[ "# Dataset Card for \"instruct_v1_5k_and_lima\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"instruct_v1_5k_and_lima\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_5k_and_lima\"\n\nMore Information needed" ]
79846ac78ce08b7c8dee84cebc7d91d1345dcc9e
# Dataset Card for "instruct_v1_10k_and_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v1_10k_and_lima
[ "region:us" ]
2023-09-26T01:22:33+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10473658, "num_examples": 11000}], "download_size": 5587292, "dataset_size": 10473658}}
2023-09-26T01:22:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "instruct_v1_10k_and_lima" More Information needed
[ "# Dataset Card for \"instruct_v1_10k_and_lima\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"instruct_v1_10k_and_lima\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"instruct_v1_10k_and_lima\"\n\nMore Information needed" ]
c404bfde9c9e5f39947db6ecbac586c222775c47
This is a Yolo format dataset with images annotated using Roboflow. All the images are of healthy, normal human ear drums or tympanic membranes. Both right and left tympanic membranes are included. The idea was to create a model to identify normal versus diseased ear drums (mostly by the absence of light reflex). The model was able to reach 100% accuracy with this dataset in correctly identifying the presence of light reflex.
kephalian/Ear_drum_identification
[ "task_categories:object-detection", "size_categories:n<1K", "language:en", "license:apache-2.0", "doi:10.57967/hf/1151", "region:us" ]
2023-09-26T01:33:20+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["object-detection"]}
2023-09-26T01:38:32+00:00
[]
[ "en" ]
TAGS #task_categories-object-detection #size_categories-n<1K #language-English #license-apache-2.0 #doi-10.57967/hf/1151 #region-us
This is a Yolo format dataset with images annotated using Roboflow. All the images are of healthy, normal human ear drums or tympanic membranes. Both right and left tympanic membranes are included. The idea was to create a model to identify normal versus diseased ear drums (mostly by the absence of light reflex). The model was able to reach 100% accuracy with this dataset in correctly identifying the presence of light reflex.
[]
[ "TAGS\n#task_categories-object-detection #size_categories-n<1K #language-English #license-apache-2.0 #doi-10.57967/hf/1151 #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#task_categories-object-detection #size_categories-n<1K #language-English #license-apache-2.0 #doi-10.57967/hf/1151 #region-us \n" ]
c7e13037d0ffeb666ef6c874962841778c3d8d12
# Dataset Card for "cybersecurity-corpus-llama2-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AR2021/cybersecurity-corpus-llama2-1k
[ "region:us" ]
2023-09-26T01:48:42+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 33142, "num_examples": 789}], "download_size": 14328, "dataset_size": 33142}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T01:57:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cybersecurity-corpus-llama2-1k" More Information needed
[ "# Dataset Card for \"cybersecurity-corpus-llama2-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cybersecurity-corpus-llama2-1k\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cybersecurity-corpus-llama2-1k\"\n\nMore Information needed" ]
00d6c044e5a7d11aca74ab0ef8bac7e212aa7d9d
Language Models for Taiwanese Culture training dataset. ## Citation Please cite the repo if you use the data or code in this repo. ``` @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.nlp4convai-1.5", pages = "47--58" } @misc{taiwanllama, author={Lin, Yen-Ting and Chen, Yun-Nung}, title={Taiwanese-Aligned Language Models based on Meta-Llama2}, year={2023}, url={https://github.com/adamlin120/Taiwan-LLaMa}, note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa}, } ```
DavidLanz/zh_TW_c4
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "size_categories:10K<n<100K", "language:zh", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-09-26T01:49:44+00:00
{"language": ["zh", "en"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "summarization", "text-generation"]}
2023-09-26T05:08:24+00:00
[]
[ "zh", "en" ]
TAGS #task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-4.0 #region-us
Language Models for Taiwanese Culture training dataset. Please cite the repo if you use the data or code in this repo.
[]
[ "TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-4.0 #region-us \n" ]
[ 71 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-cc-by-sa-4.0 #region-us \n" ]
2dc6d348d074508de41cb9fa1bef6e1fb8e5fcfc
# Dataset of shirokane_rinko/白金燐子 (BanG Dream!) This is the dataset of shirokane_rinko/白金燐子 (BanG Dream!), containing 481 images and their tags. The core tags of this character are `black_hair, purple_eyes, long_hair, bangs, breasts, blunt_bangs, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 481 | 631.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirokane_rinko_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 481 | 357.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirokane_rinko_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1122 | 750.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirokane_rinko_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 481 | 556.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirokane_rinko_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1122 | 1.06 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shirokane_rinko_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/shirokane_rinko_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, choker, hair_flower, looking_at_viewer, necklace, solo, black_dress, blush, smile, black_gloves, bare_shoulders, tiara, veil, blue_rose, frills, blue_butterfly, collarbone, strapless_dress, closed_mouth, see-through | | 1 | 18 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, earrings, smile, solo, hair_flower, short_hair, white_rose, brooch, neck_ribbon, gem, looking_at_viewer, pink_rose, blue_rose, alternate_hairstyle, white_dress, white_gloves, upper_body, blush, hat_flower, hair_bun, blue_ribbon, open_mouth | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, dress, hair_flower, solo, detached_sleeves, feather_hair_ornament, looking_at_viewer, blush, brooch, ribbon, white_background, black_flower, blue_rose, bow, choker, simple_background, smile, upper_body | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_ribbon, long_sleeves, looking_at_viewer, solo, white_shirt, neck_ribbon, blush, frilled_sleeves, white_background, black_skirt, upper_body, closed_mouth, simple_background, smile | | 4 | 32 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, blush, hanasakigawa_school_uniform, long_sleeves, looking_at_viewer, white_sailor_collar, white_background, simple_background, red_ribbon, sailor_dress, neck_ribbon, upper_body, brown_dress, closed_mouth, double-breasted, sidelocks, smile | | 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, solo, collarbone, looking_at_viewer, nipples, simple_background, white_background, completely_nude, cleft_of_venus, navel, pussy, uncensored, open_mouth, smile, stomach, closed_mouth, sidelocks | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | blush, cleavage, looking_at_viewer, navel, 1girl, black_bikini, collarbone, day, hair_flower, open_mouth, sarong, side_ponytail, solo, earrings, medium_breasts, outdoors, scrunchie, standing, :d, blue_sky, cloud, frilled_bikini, lens_flare, ocean, purple_rose, water, white_flower | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | 
![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1boy, 1girl, blush, hetero, nipples, paizuri, solo_focus, collarbone, mosaic_censoring, penis, open_mouth, pov, breasts_squeezed_together, ejaculation, looking_at_viewer, completely_nude, cum_on_breasts, sweat, upper_body | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1boy, 1girl, blush, hetero, navel, nipples, open_mouth, sex, solo_focus, sweat, vaginal, censored, girl_on_top, penis, cowgirl_position, indoors, saliva, collarbone, completely_nude, cum_in_pussy, sidelocks, spread_legs, stomach, thighhighs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | choker | hair_flower | looking_at_viewer | necklace | solo | black_dress | blush | smile | black_gloves | bare_shoulders | tiara | veil | blue_rose | frills | blue_butterfly | collarbone | strapless_dress | closed_mouth | see-through | earrings | short_hair | white_rose | brooch | neck_ribbon | gem | pink_rose | alternate_hairstyle | white_dress | white_gloves | upper_body | hat_flower | hair_bun | blue_ribbon | open_mouth | dress | detached_sleeves | feather_hair_ornament | ribbon | white_background | black_flower | bow | simple_background | black_ribbon | long_sleeves | white_shirt | frilled_sleeves | black_skirt | hanasakigawa_school_uniform | white_sailor_collar | red_ribbon | sailor_dress | brown_dress | double-breasted | sidelocks | nipples | completely_nude | cleft_of_venus | navel | pussy | uncensored | stomach | cleavage | black_bikini | day | sarong | side_ponytail | medium_breasts | outdoors | scrunchie | standing | :d | blue_sky | cloud | frilled_bikini | lens_flare | ocean | purple_rose | water | white_flower | 1boy | hetero | paizuri | solo_focus | mosaic_censoring | penis | pov | breasts_squeezed_together | ejaculation | cum_on_breasts | sweat | sex | vaginal | censored | girl_on_top | cowgirl_position | indoors | saliva | cum_in_pussy | spread_legs | thighhighs | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:--------------|:--------------------|:-----------|:-------|:--------------|:--------|:--------|:---------------|:-----------------|:--------|:-------|:------------|:---------|:-----------------|:-------------|:------------------|:---------------|:--------------|:-----------|:-------------|:-------------|:---------|:--------------|:------|:------------|:----------------------|:--------------|:---------------|:-------------|:-------------|:-----------|:--------------|:-------------|:--------|:-------------------|:------------------------|:---------|:-------------------|:---------------|:------|:--------------------|:---------------|:---------------|:--------------|:------------------|:--------------|:------------------------------|:----------------------|:-------------|:---------------|:--------------|:------------------|:------------|:----------|:------------------|:-----------------|:--------|:--------|:-------------|:----------|:-----------|:---------------|:------|:---------|:----------------|:-----------------|:-----------|:------------|:-----------|:-----|:-----------|:--------|:-----------------|:-------------|:--------|:--------------|:--------|:---------------|:-------|:---------|:----------|:-------------|:-------------------|:--------|:------|:----------------------------|:--------------|:-----------------|:--------|:------|:----------|:-----------|:--------------|:-------------------|:----------|:---------|:---------------|:--------------|:-------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 18 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | X | | X | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | | X | | X | X | | | | | X | | | | | | | | | | X | | | | | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | X | | X | X | | | | | | | | | | X | | | | | | X | | | | | | X | | | | | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 32 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | 
![](samples/4/clu4-sample4.png) | X | | | X | | X | | X | X | | | | | | | | | | X | | | | | | X | | | | | | X | | | | | | | | | X | | | X | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | | X | | X | X | | | | | | | | X | | X | | | | | | | | | | | | | | | | X | | | | | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | X | | X | | X | | | | | | | | | X | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | X | | | | X | | | | | | | | | X | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | | X | | | X | | | | | | | | | | | | | | | | | | | X | X | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/shirokane_rinko_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T01:55:17+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:27:48+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of shirokane\_rinko/白金燐子 (BanG Dream!) ============================================== This is the dataset of shirokane\_rinko/白金燐子 (BanG Dream!), containing 481 images and their tags. The core tags of this character are 'black\_hair, purple\_eyes, long\_hair, bangs, breasts, blunt\_bangs, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
36cbfd5a4739771b8a7b07b33bf77fc1d4387610
# Dataset Card for Dataset Name ## Dataset Description This dataset is collected from top token in etherscan which are verifed and opensource. It contains useful information within the token, like token address, holders, sourcecode and so on. Besides, we pay much attention to analysis the source code of tokens’ contract, which refers to the tag column in this dataset. The value of tag contains LLM analysis of contract source code with the token, which is useful for solidity contract creation by LLMs. ### Dataset Summary This dataset contains of 877 rows in total. Each row includes the following features: - token name(string) : the name of the top token we collect from etherscan - max_total_supply(integer): max total supply of this token - holders(string): holders number of this token - total_transfers(integer): token transfer transaction number of this token - circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply - fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply - contract_address(string): address of this token - source_code(string): source code of the contracts - abi(string): abi, application binary interface of the source code. - tags(json): the llm analysis of the source code display in json type. The structure of tags is : ```python { "Pragma": <Pragma>, "Contracs": [ { "name": "<Contact_name>", "role": "<Contract_role>" , "functions": { "<func_name>": "<func_role>" }, "modifier": { "<modifier_name>": "<modifier_role>" } } ], "Interface": [ { "name": "<Interface_name>", "role": "<Interface_role>" , "functions": { "<func_name>": "<func_role>" }, "modifier": { "<modifier_name>": "<modifier_role>" } } ], "Library": [ { "name": "<Library_name>", "role": "<Library_role>" , "functions": { "<func_name>": "<func_role>" }, "modifier": { "<modifier_name>": "<modifier_role>" } } ], } ``` tags value contains the name and role of each contract/library/interface and the functions’ name and role within it. Tags can help poor llms clearly figure out what’s users need and feed back the correct answer. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages - The dataset is in the English language (en). - Smart contracts (source code ) are in Solidity programming language. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - token name(string) : the name of the top token we collect from etherscan - max_total_supply(integer): max total supply of this token - holders(string): holders number of this token - total_transfers(integer): token transfer transaction number of this token - circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply - fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply - contract_address(string): address of this token - source_code(string): source code of the contracts - abi(string): abi, application binary interface of the source code. - tags(json): the llm analysis of the source code display in json type. The structure of tags is : ## Dataset Creation To collect token information except tags we use beautifulsoup4 to crawl contracts from etherscan top token. 
As for tags we built a tool called “Labeling Tool for Smart Contract Dataset Based on LLM” This tool uses LLM model like GPT3.5 to figure out the structure of contracts and roles of every part. And we made an SmartContractTagging agent to complete this task. You can find our codes in this github link: xxxx ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed]
tat1111/cad_blockchain_smartcontract
[ "size_categories:n<1K", "language:en", "license:afl-3.0", "smartcontract", "blockchain", "region:us" ]
2023-09-26T01:59:58+00:00
{"language": ["en"], "license": "afl-3.0", "size_categories": ["n<1K"], "tags": ["smartcontract", "blockchain"]}
2023-10-09T01:36:03+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-afl-3.0 #smartcontract #blockchain #region-us
# Dataset Card for Dataset Name ## Dataset Description This dataset is collected from top token in etherscan which are verifed and opensource. It contains useful information within the token, like token address, holders, sourcecode and so on. Besides, we pay much attention to analysis the source code of tokens’ contract, which refers to the tag column in this dataset. The value of tag contains LLM analysis of contract source code with the token, which is useful for solidity contract creation by LLMs. ### Dataset Summary This dataset contains of 877 rows in total. Each row includes the following features: - token name(string) : the name of the top token we collect from etherscan - max_total_supply(integer): max total supply of this token - holders(string): holders number of this token - total_transfers(integer): token transfer transaction number of this token - circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply - fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply - contract_address(string): address of this token - source_code(string): source code of the contracts - abi(string): abi, application binary interface of the source code. - tags(json): the llm analysis of the source code display in json type. The structure of tags is : tags value contains the name and role of each contract/library/interface and the functions’ name and role within it. Tags can help poor llms clearly figure out what’s users need and feed back the correct answer. ### Supported Tasks and Leaderboards ### Languages - The dataset is in the English language (en). - Smart contracts (source code ) are in Solidity programming language. ## Dataset Structure ### Data Instances ### Data Fields - token name(string) : the name of the top token we collect from etherscan - max_total_supply(integer): max total supply of this token - holders(string): holders number of this token - total_transfers(integer): token transfer transaction number of this token - circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply - fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply - contract_address(string): address of this token - source_code(string): source code of the contracts - abi(string): abi, application binary interface of the source code. - tags(json): the llm analysis of the source code display in json type. The structure of tags is : ## Dataset Creation To collect token information except tags we use beautifulsoup4 to crawl contracts from etherscan top token. As for tags we built a tool called “Labeling Tool for Smart Contract Dataset Based on LLM” This tool uses LLM model like GPT3.5 to figure out the structure of contracts and roles of every part. And we made an SmartContractTagging agent to complete this task. You can find our codes in this github link: xxxx ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\nThis dataset is collected from top token in etherscan which are verifed and opensource. It contains useful information within the token, like token address, holders, sourcecode and so on. \nBesides, we pay much attention to analysis the source code of tokens’ contract, which refers to the tag column in this dataset. The value of tag contains LLM analysis of contract source code with the token, which is useful for solidity contract creation by LLMs.", "### Dataset Summary\n\nThis dataset contains of 877 rows in total. Each row includes the following features:\n\n- token name(string) : the name of the top token we collect from etherscan\n- max_total_supply(integer): max total supply of this token\n- holders(string): holders number of this token\n- total_transfers(integer): token transfer transaction number of this token\n- circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply\n- fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply\n- contract_address(string): address of this token\n- source_code(string): source code of the contracts\n- abi(string): abi, application binary interface of the source code.\n- tags(json): the llm analysis of the source code display in json type. The structure of tags is :\n\n\n\ntags value contains the name and role of each contract/library/interface and the functions’ name and role within it. Tags can help poor llms clearly figure out what’s users need and feed back the correct answer.", "### Supported Tasks and Leaderboards", "### Languages\n\n- The dataset is in the English language (en).\n- Smart contracts (source code ) are in Solidity programming language.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- token name(string) : the name of the top token we collect from etherscan\n- max_total_supply(integer): max total supply of this token\n- holders(string): holders number of this token\n- total_transfers(integer): token transfer transaction number of this token\n- circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply\n- fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply\n- contract_address(string): address of this token\n- source_code(string): source code of the contracts\n- abi(string): abi, application binary interface of the source code.\n- tags(json): the llm analysis of the source code display in json type. The structure of tags is :", "## Dataset Creation\nTo collect token information except tags we use beautifulsoup4 to crawl contracts from etherscan top token. \n\nAs for tags we built a tool called “Labeling Tool for Smart Contract Dataset Based on LLM”\n\nThis tool uses LLM model like GPT3.5 to figure out the structure of contracts and roles of every part. \n\nAnd we made an SmartContractTagging agent to complete this task. You can find our codes in this github link: xxxx", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization" ]
[ "TAGS\n#size_categories-n<1K #language-English #license-afl-3.0 #smartcontract #blockchain #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\nThis dataset is collected from top token in etherscan which are verifed and opensource. It contains useful information within the token, like token address, holders, sourcecode and so on. \nBesides, we pay much attention to analysis the source code of tokens’ contract, which refers to the tag column in this dataset. The value of tag contains LLM analysis of contract source code with the token, which is useful for solidity contract creation by LLMs.", "### Dataset Summary\n\nThis dataset contains of 877 rows in total. Each row includes the following features:\n\n- token name(string) : the name of the top token we collect from etherscan\n- max_total_supply(integer): max total supply of this token\n- holders(string): holders number of this token\n- total_transfers(integer): token transfer transaction number of this token\n- circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply\n- fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply\n- contract_address(string): address of this token\n- source_code(string): source code of the contracts\n- abi(string): abi, application binary interface of the source code.\n- tags(json): the llm analysis of the source code display in json type. The structure of tags is :\n\n\n\ntags value contains the name and role of each contract/library/interface and the functions’ name and role within it. Tags can help poor llms clearly figure out what’s users need and feed back the correct answer.", "### Supported Tasks and Leaderboards", "### Languages\n\n- The dataset is in the English language (en).\n- Smart contracts (source code ) are in Solidity programming language.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- token name(string) : the name of the top token we collect from etherscan\n- max_total_supply(integer): max total supply of this token\n- holders(string): holders number of this token\n- total_transfers(integer): token transfer transaction number of this token\n- circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply\n- fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply\n- contract_address(string): address of this token\n- source_code(string): source code of the contracts\n- abi(string): abi, application binary interface of the source code.\n- tags(json): the llm analysis of the source code display in json type. The structure of tags is :", "## Dataset Creation\nTo collect token information except tags we use beautifulsoup4 to crawl contracts from etherscan top token. \n\nAs for tags we built a tool called “Labeling Tool for Smart Contract Dataset Based on LLM”\n\nThis tool uses LLM model like GPT3.5 to figure out the structure of contracts and roles of every part. \n\nAnd we made an SmartContractTagging agent to complete this task. You can find our codes in this github link: xxxx", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization" ]
[ 34, 8, 114, 280, 10, 32, 6, 6, 206, 110, 7, 4, 10 ]
[ "passage: TAGS\n#size_categories-n<1K #language-English #license-afl-3.0 #smartcontract #blockchain #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\nThis dataset is collected from top token in etherscan which are verifed and opensource. It contains useful information within the token, like token address, holders, sourcecode and so on. \nBesides, we pay much attention to analysis the source code of tokens’ contract, which refers to the tag column in this dataset. The value of tag contains LLM analysis of contract source code with the token, which is useful for solidity contract creation by LLMs.### Dataset Summary\n\nThis dataset contains of 877 rows in total. Each row includes the following features:\n\n- token name(string) : the name of the top token we collect from etherscan\n- max_total_supply(integer): max total supply of this token\n- holders(string): holders number of this token\n- total_transfers(integer): token transfer transaction number of this token\n- circulating_supply_market_cap(string): the total market value of a cryptocurrency or token based on its circulating supply\n- fully_diluted_market_cap(string): the total market value of a cryptocurrency or token based on its maximum or fully diluted supply\n- contract_address(string): address of this token\n- source_code(string): source code of the contracts\n- abi(string): abi, application binary interface of the source code.\n- tags(json): the llm analysis of the source code display in json type. The structure of tags is :\n\n\n\ntags value contains the name and role of each contract/library/interface and the functions’ name and role within it. Tags can help poor llms clearly figure out what’s users need and feed back the correct answer.### Supported Tasks and Leaderboards### Languages\n\n- The dataset is in the English language (en).\n- Smart contracts (source code ) are in Solidity programming language.## Dataset Structure### Data Instances" ]
3fce703b4837faab693f6b76cdaaa0e3181e2533
# Dataset Card for "60k_dataset_multichoice" - MAX_LEN = 256 [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
VuongQuoc/60k_dataset_multichoice
[ "region:us" ]
2023-09-26T02:27:06+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": {"sequence": "int32"}}, {"name": "token_type_ids", "sequence": {"sequence": "int8"}}, {"name": "attention_mask", "sequence": {"sequence": "int8"}}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 465592764, "num_examples": 60000}, {"name": "test", "num_bytes": 1552000, "num_examples": 200}], "download_size": 52157007, "dataset_size": 467144764}}
2023-09-30T02:49:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "60k_dataset_multichoice" - MAX_LEN = 256 More Information needed
[ "# Dataset Card for \"60k_dataset_multichoice\"\n\n- MAX_LEN = 256\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"60k_dataset_multichoice\"\n\n- MAX_LEN = 256\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"60k_dataset_multichoice\"\n\n- MAX_LEN = 256\n\nMore Information needed" ]
a71fad3dd6ab5c3822c686988cf90e1ca296cefd
# Dataset of Saitō Ena This is the dataset of Saitō Ena, containing 268 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 268 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 653 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 756 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 268 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 268 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 268 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 653 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 653 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 562 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not exceeding 512x512 pixels. | | stage3-eyes-640 | 756 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 756 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/saito_ena_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T02:31:23+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T14:16:07+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Saitō Ena ==================== This is the dataset of Saitō Ena, containing 268 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
0945ea369ccc1fa34a665412f626eca22e9b0618
# Dataset Card for "all-huggingface-python-code-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kye/all-huggingface-python-code-2
[ "region:us" ]
2023-09-26T02:35:33+00:00
{"dataset_info": {"features": [{"name": "python_code", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "file_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28487113, "num_examples": 2634}], "download_size": 8685205, "dataset_size": 28487113}}
2023-09-26T02:35:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "all-huggingface-python-code-2" More Information needed
[ "# Dataset Card for \"all-huggingface-python-code-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"all-huggingface-python-code-2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"all-huggingface-python-code-2\"\n\nMore Information needed" ]
d0d5599012488486de773b86e5a9ec329aa98edb
# Dataset Card for "all-torvalds-c-code-1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kye/all-torvalds-c-code-1
[ "region:us" ]
2023-09-26T02:41:53+00:00
{"dataset_info": {"features": [{"name": "python_code", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "file_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 651565756, "num_examples": 33893}], "download_size": 231630597, "dataset_size": 651565756}}
2023-09-26T02:43:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "all-torvalds-c-code-1" More Information needed
[ "# Dataset Card for \"all-torvalds-c-code-1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"all-torvalds-c-code-1\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"all-torvalds-c-code-1\"\n\nMore Information needed" ]
1afa70107e3dbb5fe8681b7f86b7891f1286ebb6
# Dataset Card for "squad_title_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_title_train_10_eval_10
[ "region:us" ]
2023-09-26T02:53:55+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 238057, "num_examples": 150}, {"name": "validation", "num_bytes": 60056, "num_examples": 48}], "download_size": 72691, "dataset_size": 298113}}
2023-09-26T02:54:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_title_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_title_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_title_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_train_10_eval_10\"\n\nMore Information needed" ]
0e079482205e0bbefd18d7a14b58d292605571fd
# Dataset of hikawa_sayo/氷川紗夜/히카와사요 (BanG Dream!) This is the dataset of hikawa_sayo/氷川紗夜/히카와사요 (BanG Dream!), containing 500 images and their tags. The core tags of this character are `green_eyes, aqua_hair, long_hair, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 767.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_sayo_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 433.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_sayo_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1220 | 907.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_sayo_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 676.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_sayo_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1220 | 1.29 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_sayo_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/hikawa_sayo_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | short_hair, side_braids, yellow_bow, open_mouth, 2girls, :d, blush, hair_bow, school_uniform, upper_body, white_background, white_shirt, collared_shirt, twins, long_sleeves, simple_background, striped_necktie, blazer, dated, grey_jacket, looking_at_viewer, short_sleeves, sisters, skirt, sweater_vest, twin_braids | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, simple_background, solo, upper_body, white_background, blush, looking_at_viewer, hair_between_eyes, smile, white_shirt, blue_shirt, collared_shirt, open_mouth, sidelocks | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, collared_shirt, short_sleeves, solo, closed_mouth, collarbone, looking_at_viewer, upper_body, white_background, blue_shirt, blush, hand_up, necklace, pendant, white_shirt, breasts, dress_shirt, gradient_background, hair_between_eyes, simple_background | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blush, hanasakigawa_school_uniform, serafuku, short_sleeves, simple_background, solo, looking_at_viewer, white_background, white_skirt, blue_neckerchief, blue_shirt, pleated_skirt, upper_body, white_sailor_collar, :o, collarbone, electric_guitar, holding_instrument, parted_lips, white_shirt | | 4 | 28 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | hanasakigawa_school_uniform, 1girl, long_sleeves, white_sailor_collar, neck_ribbon, red_ribbon, solo, looking_at_viewer, sailor_dress, blush, brown_dress, white_background, simple_background, upper_body, double-breasted, closed_mouth, smile | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | smile, 1girl, earrings, looking_at_viewer, solo, frills, hair_flower, pearl_necklace, upper_body, white_background, blush, braid, collarbone, dress, simple_background, bare_shoulders, hair_ribbon, rose, tiara, choker, heart | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blue_rose, dress, earrings, looking_at_viewer, solo, electric_guitar, long_sleeves, 
v-shaped_eyebrows, bow, brooch, detached_sleeves, hair_ornament, hat_flower, open_mouth, striped, :d, center_frills, holding_instrument, neck_ribbon, plectrum | | 7 | 9 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, electric_guitar, hair_flower, looking_at_viewer, solo, earrings, feather_hair_ornament, black_feathers, blue_rose, choker, holding_instrument, playing_instrument, ribbon, dress, short_sleeves, upper_body, black_flower, blue_feathers, brooch | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | blue_skirt, hair_bow, neck_ribbon, sleeveless_shirt, white_shirt, 1girl, black_ribbon, blush, collared_shirt, day, looking_at_viewer, outdoors, blurry_background, smile, blue_bow, long_skirt, nail_polish, open_mouth, plaid_skirt, sitting, sky, solo_focus, standing | | 9 | 13 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, blue_headwear, earrings, solo, alternate_hairstyle, beret, hat_bow, long_sleeves, looking_at_viewer, single_braid, blue_bow, frilled_shirt_collar, hair_over_shoulder, upper_body, brooch, hair_bow, neck_ribbon, star_(symbol), constellation_print, smile, starry_sky_print, striped_bow, blue_ribbon, center_frills, gem, hairclip, open_mouth, striped_ribbon | | 10 | 8 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, earrings, fur-trimmed_capelet, long_sleeves, smile, solo, choker, dress, hair_ornament, looking_at_viewer, white_gloves, blush, christmas, frills, hat_flower, red_bow, red_ribbon, gift, night, open_mouth, pom_pom_(clothes), sitting, box, braided_bangs, fur-trimmed_sleeves, fur_hat, hat_bow, holding_lantern, pearl_necklace, shorts, star_(symbol), striped, thighhighs, white_headwear | | 11 | 12 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, floral_print, solo, hair_flower, kimono, alternate_hairstyle, looking_at_viewer, upper_body, obi, holding, smile, blue_flower, blush, braid, long_sleeves, short_hair, wide_sleeves | | 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | blush, detached_sleeves, earrings, looking_at_viewer, roswaal_mansion_maid_uniform, long_sleeves, 1girl, black_ribbon, bow, cosplay, flower, hair_ribbon, hairband, maid_headdress, neck_ribbon, pink_ribbon, solo, x_hair_ornament, black_dress, blue_hair, breasts, frilled_sleeves, ponytail, ribbon-trimmed_sleeves, smile | | 13 | 5 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | 1girl, black_choker, black_headwear, jacket, long_sleeves, looking_at_viewer, solo, beret, collarbone, off_shoulder, blush, butterfly, earrings, necklace, see-through_sleeves, small_breasts, upper_body, bare_shoulders, cleavage, nail_polish, parted_lips, partially_unzipped, petals, sitting, smile, 
tattoo, white_background | | 14 | 7 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | black_leotard, blush, cleavage, detached_collar, medium_breasts, playboy_bunny, 1girl, fake_animal_ears, looking_at_viewer, rabbit_ears, solo, strapless_leotard, bare_shoulders, covered_navel, wrist_cuffs, cowboy_shot, heart, standing, swept_bangs, black_pantyhose, closed_mouth, red_bowtie, two-tone_background, white_gloves | | 15 | 13 | ![](samples/15/clu15-sample0.png) | ![](samples/15/clu15-sample1.png) | ![](samples/15/clu15-sample2.png) | ![](samples/15/clu15-sample3.png) | ![](samples/15/clu15-sample4.png) | 1girl, collarbone, looking_at_viewer, navel, solo, blush, cleavage, cowboy_shot, day, cloud, medium_breasts, outdoors, sidelocks, smile, standing, blue_sky, hair_ornament, bare_shoulders, bracelet, groin, ocean, open_mouth, ponytail, beach, floral_print, green_bikini, green_hair, swept_bangs, water | | 16 | 5 | ![](samples/16/clu16-sample0.png) | ![](samples/16/clu16-sample1.png) | ![](samples/16/clu16-sample2.png) | ![](samples/16/clu16-sample3.png) | ![](samples/16/clu16-sample4.png) | 1girl, blush, navel, nipples, pussy, completely_nude, looking_at_viewer, closed_mouth, collarbone, dutch_angle, groin, indoors, mosaic_censoring, sidelocks, small_breasts, swept_bangs, blurry_background, cowboy_shot, green_hair, holding_phone, medium_breasts, open_mouth, smartphone, solo_focus, standing, stomach, wavy_hair | | 17 | 13 | ![](samples/17/clu17-sample0.png) | ![](samples/17/clu17-sample1.png) | ![](samples/17/clu17-sample2.png) | ![](samples/17/clu17-sample3.png) | ![](samples/17/clu17-sample4.png) | nipples, 1girl, blush, navel, hetero, solo_focus, open_mouth, sex, vaginal, completely_nude, medium_breasts, penis, cum_in_pussy, looking_at_viewer, mosaic_censoring, spread_legs, sweat, 1boy, collarbone, girl_on_top, multiple_boys | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | short_hair | side_braids | yellow_bow | open_mouth | 2girls | :d | blush | hair_bow | school_uniform | upper_body | white_background | white_shirt | collared_shirt | twins | long_sleeves | simple_background | striped_necktie | blazer | dated | grey_jacket | looking_at_viewer | short_sleeves | sisters | skirt | sweater_vest | twin_braids | 1girl | solo | hair_between_eyes | smile | blue_shirt | sidelocks | closed_mouth | collarbone | hand_up | necklace | pendant | breasts | dress_shirt | gradient_background | hanasakigawa_school_uniform | serafuku | white_skirt | blue_neckerchief | pleated_skirt | white_sailor_collar | :o | electric_guitar | holding_instrument | parted_lips | neck_ribbon | red_ribbon | sailor_dress | brown_dress | double-breasted | earrings | frills | hair_flower | pearl_necklace | braid | dress | bare_shoulders | hair_ribbon | rose | tiara | choker | heart | blue_rose | v-shaped_eyebrows | bow | brooch | detached_sleeves | hair_ornament | hat_flower | striped | center_frills | plectrum | feather_hair_ornament | black_feathers | playing_instrument | ribbon | black_flower | blue_feathers | blue_skirt | sleeveless_shirt | black_ribbon | day | outdoors | blurry_background | blue_bow | long_skirt | nail_polish | plaid_skirt | sitting | sky | solo_focus | standing | blue_headwear | alternate_hairstyle | beret | hat_bow | single_braid | frilled_shirt_collar | hair_over_shoulder | star_(symbol) | constellation_print | starry_sky_print | striped_bow | blue_ribbon 
| gem | hairclip | striped_ribbon | fur-trimmed_capelet | white_gloves | christmas | red_bow | gift | night | pom_pom_(clothes) | box | braided_bangs | fur-trimmed_sleeves | fur_hat | holding_lantern | shorts | thighhighs | white_headwear | floral_print | kimono | obi | holding | blue_flower | wide_sleeves | roswaal_mansion_maid_uniform | cosplay | flower | hairband | maid_headdress | pink_ribbon | x_hair_ornament | black_dress | blue_hair | frilled_sleeves | ponytail | ribbon-trimmed_sleeves | black_choker | black_headwear | jacket | off_shoulder | butterfly | see-through_sleeves | small_breasts | cleavage | partially_unzipped | petals | tattoo | black_leotard | detached_collar | medium_breasts | playboy_bunny | fake_animal_ears | rabbit_ears | strapless_leotard | covered_navel | wrist_cuffs | cowboy_shot | swept_bangs | black_pantyhose | red_bowtie | two-tone_background | navel | cloud | blue_sky | bracelet | groin | ocean | beach | green_bikini | green_hair | water | nipples | pussy | completely_nude | dutch_angle | indoors | mosaic_censoring | holding_phone | smartphone | stomach | wavy_hair | hetero | sex | vaginal | penis | cum_in_pussy | spread_legs | sweat | 1boy | girl_on_top | multiple_boys | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------|:--------------|:-------------|:-------------|:---------|:-----|:--------|:-----------|:-----------------|:-------------|:-------------------|:--------------|:-----------------|:--------|:---------------|:--------------------|:------------------|:---------|:--------|:--------------|:--------------------|:----------------|:----------|:--------|:---------------|:--------------|:--------|:-------|:--------------------|:--------|:-------------|:------------|:---------------|:-------------|:----------|:-----------|:----------|:----------|:--------------|:----------------------|:------------------------------|:-----------|:--------------|:-------------------|:----------------|:----------------------|:-----|:------------------|:---------------------|:--------------|:--------------|:-------------|:---------------|:--------------|:------------------|:-----------|:---------|:--------------|:-----------------|:--------|:--------|:-----------------|:--------------|:-------|:--------|:---------|:--------|:------------|:--------------------|:------|:---------|:-------------------|:----------------|:-------------|:----------|:----------------|:-----------|:------------------------|:-----------------|:---------------------|:---------|:---------------|:----------------|:-------------|:-------------------|:---------------|:------|:-----------|:--------------------|:-----------|:-------------|:--------------|:--------------|:----------|:------|:-------------|:-----------|:----------------|:----------------------|:--------|:----------|:---------------|:-----------------------|:---------------------|:----------------|:----------------------|:-------------------|:--------------|:--------------|:------|:-----------|:-----------------|:----------------------|:---------------|:------------|:----------|:-------|:--------|:--------------------|:------|:----------------|:----------------------|:----------|:------------------|:---------|:-------------|:-----------------|:---------------|:---------|:------|:----------|:--------------|:---------------|:-------------------------------|:----------|:---------|:-----
------|:-----------------|:--------------|:------------------|:--------------|:------------|:------------------|:-----------|:-------------------------|:---------------|:-----------------|:---------|:---------------|:------------|:----------------------|:----------------|:-----------|:---------------------|:---------|:---------|:----------------|:------------------|:-----------------|:----------------|:-------------------|:--------------|:--------------------|:----------------|:--------------|:--------------|:--------------|:------------------|:-------------|:----------------------|:--------|:--------|:-----------|:-----------|:--------|:--------|:--------|:---------------|:-------------|:--------|:----------|:--------|:------------------|:--------------|:----------|:-------------------|:----------------|:-------------|:----------|:------------|:---------|:------|:----------|:--------|:---------------|:--------------|:--------|:-------|:--------------|:----------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | | | X | | | X | | | X | X | X | X | | | X | | | | | X | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | | | | | | X | | | X | X | X | X | | | X | | | | | X | X | | | | | X | X | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | | | | | | X | | | X | X | X | | | | X | | | | | X | X | | | | | X | X | | | X | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 28 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | 
![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | | | | | | X | | | X | X | | | | X | X | | | | | X | | | | | | X | X | | X | | | X | | | | | | | | X | | | | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | | | | | X | | | X | X | | | | | X | | | | | X | | | | | | X | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | | X | | X | | | | | | | | | X | | | | | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | X | X | | X | | | | | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 9 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | | | | | | | | | | X | | | | | | | | | | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | X | | X | | | X | | | | | X | | X | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | | | | X | | | X | X | | | | X | X | | | | | | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 13 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | | | | X | | | | X | | X | | | | | X | | | | | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 10 | 8 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | | | | X | | | X | | | | | | | | X | | | | | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | | X | | X | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 11 | 12 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | X | | | | | | X | | | X | | | | | X | | | | | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | | | | | | | X | | | | | | | | X | | | | | | X | | | | | | X | X | | X | | | | | | | | X | | | | | | | | | | | | | X | | | | | X | | | | | | | X | | | | | | | X | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 13 | 5 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | | | | | | | X | | | X | X | | | | X | | | | | | X | | | | | | X | X | | X | | | | X | | X | | | | | | | | | | | | | | X | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 14 | 7 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | | | | | | | X | | | | | | | | | | | | | | X | | | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 15 | 13 | ![](samples/15/clu15-sample0.png) | ![](samples/15/clu15-sample1.png) | ![](samples/15/clu15-sample2.png) | ![](samples/15/clu15-sample3.png) | ![](samples/15/clu15-sample4.png) | | | | X | | | X | | | | | | | | | | | | | | X 
| | | | | | X | X | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | X | | | | | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 16 | 5 | ![](samples/16/clu16-sample0.png) | ![](samples/16/clu16-sample1.png) | ![](samples/16/clu16-sample2.png) | ![](samples/16/clu16-sample3.png) | ![](samples/16/clu16-sample4.png) | | | | X | | | X | | | | | | | | | | | | | | X | | | | | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | X | X | | | | X | | | | X | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 17 | 13 | ![](samples/17/clu17-sample0.png) | ![](samples/17/clu17-sample1.png) | ![](samples/17/clu17-sample2.png) | ![](samples/17/clu17-sample3.png) | ![](samples/17/clu17-sample4.png) | | | | X | | | X | | | | | | | | | | | | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | X | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X |
CyberHarem/hikawa_sayo_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T03:05:22+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:08:02+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hikawa\_sayo/氷川紗夜/히카와사요 (BanG Dream!) ================================================ This is the dataset of hikawa\_sayo/氷川紗夜/히카와사요 (BanG Dream!), containing 500 images and their tags. The core tags of this character are 'green\_eyes, aqua\_hair, long\_hair, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
91b24f9a70f958fa5a8e08363a90f0833c94ff45
# Dataset of Kagamihara Nadeshiko This is the dataset of Kagamihara Nadeshiko, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 701 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 788 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 701 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 701 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 602 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 788 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 788 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
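As a usage sketch for the packages listed above (not part of the original card): the zips can be fetched with `hf_hub_download`, following the same loading pattern the newer cards in this collection use. The repo id and filename come from this card; the target directory name is illustrative.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the raw package listed in the table above.
zip_file = hf_hub_download(
    repo_id='CyberHarem/kagamihara_nadeshiko_yurucamp',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# Unpack it locally so the images and their meta information can be inspected.
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```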
CyberHarem/kagamihara_nadeshiko_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T03:06:51+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T15:16:24+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Kagamihara Nadeshiko =============================== This is the dataset of Kagamihara Nadeshiko, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
0c756828a5fa489318d7096be05ac4d65d502d3c
# Dataset Card for "alpaca-gpt4-tw-input-output-48k" This dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. The dataset was originaly shared in this repository: https://github.com/ntunlplab/traditional-chinese-alpaca. This is just a wraper for compatibility with huggingface's datasets library. ## Dataset structure It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca. The dataset has the same format as Alpaca data, except the output is generated by GPT-4: - `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique. - `input`: `str`, optional context or input for the task. - `output`: `str`, the answer to the instruction as generated by `GPT-4`. ## Difference with the original Alpaca dataset The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example: #### Example from Alpaca-GPT3: ```bash { 'instruction': '辨識那一個平台與其他不同。', 'input': '輸入:Twitter,Instagram,Telegram。', 'output': 'Twitter,Instagram 和 Telegram 都是流行的社群媒體平台,但它們之間也有一些不同之處:\n\n- Twitter: 是一個Twitter社群媒體平台,使用者可以發佈 280 字元以內的訊息(即推文)並與其他使用者互動。它主要關注文字交流,儘管使用者也可以分享照片和視訊。\n- Instagram: 是一個圖片和視訊分享的社群媒體平台,使用者可以在 Instagram 上發佈照片和視訊並與其他使用者互動。它主要關注圖片和視訊的分享,儘管使用者也可以新增文字說明。\n- Telegram: 是一個即時通訊和社群媒體平台,使用者可以通過文字、語音、圖片和視訊與其他使用者即時交流。與 Twitter 和 Instagram 不同,Telegram 更側重於即時通訊,它還提供了端到端加密的私人聊天功能。' } ``` ## Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
DavidLanz/alpaca-gpt4-tw-input-output-48k
[ "task_categories:text-generation", "task_categories:conversational", "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "gpt3", "alpaca", "instruction-finetuning", "region:us" ]
2023-09-26T03:22:44+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "question-answering"], "tags": ["gpt3", "alpaca", "instruction-finetuning"]}
2023-09-26T03:56:44+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us
# Dataset Card for "alpaca-gpt4-tw-input-output-48k" This dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. The dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library. ## Dataset structure It contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca. The dataset has the same format as Alpaca data, except the output is generated by GPT-4: - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique. - 'input': 'str', optional context or input for the task. - 'output': 'str', the answer to the instruction as generated by 'GPT-4'. ## Difference with the original Alpaca dataset The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example: #### Example from Alpaca-GPT3: ## Licensing Information The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
[ "# Dataset Card for \"alpaca-gpt4-tw-input-output-48k\"\n\nThis dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.", "## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.", "## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:", "#### Example from Alpaca-GPT3:", "## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ "TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us \n", "# Dataset Card for \"alpaca-gpt4-tw-input-output-48k\"\n\nThis dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.", "## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.", "## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:", "#### Example from Alpaca-GPT3:", "## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
[ 77, 90, 126, 73, 12, 25 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #gpt3 #alpaca #instruction-finetuning #region-us \n# Dataset Card for \"alpaca-gpt4-tw-input-output-48k\"\n\nThis dataset contains English Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:#### Example from Alpaca-GPT3:## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)." ]
820ff951f7e4659b0f4eca86ad787c49badd2c22
# Dataset Card for "wikipedia-ja-20230720-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mmnga/wikipedia-ja-20230720-1k
[ "region:us" ]
2023-09-26T03:24:00+00:00
{"dataset_info": {"features": [{"name": "curid", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2746008.4742813315, "num_examples": 1024}], "download_size": 1593280, "dataset_size": 2746008.4742813315}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T03:24:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wikipedia-ja-20230720-1k" More Information needed
[ "# Dataset Card for \"wikipedia-ja-20230720-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wikipedia-ja-20230720-1k\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia-ja-20230720-1k\"\n\nMore Information needed" ]
2ec595439faf45fa83d4bd5d8b2cfb486ded361d
# Dataset of Ōgaki Chiaki This is the dataset of Ōgaki Chiaki, containing 299 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 759 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 908 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 759 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 759 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 636 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 908 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 908 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/ogaki_chiaki_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T03:34:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T16:01:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Ōgaki Chiaki ======================= This is the dataset of Ōgaki Chiaki, containing 299 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
02ed917b288c711592ca7823f58dbbc504187e6c
# Dataset Card for "c_arm64" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/c_arm64
[ "region:us" ]
2023-09-26T03:42:13+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4543075905, "num_examples": 1041342}], "download_size": 1217380394, "dataset_size": 4543075905}}
2023-09-26T03:51:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c_arm64" More Information needed
[ "# Dataset Card for \"c_arm64\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c_arm64\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"c_arm64\"\n\nMore Information needed" ]
ec9221f524cfcc253d277dc8047957a4501cb6fd
# Dataset of Inuyama Aoi This is the dataset of Inuyama Aoi, containing 299 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 299 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 739 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 843 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 299 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 299 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 299 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 739 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 739 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 626 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 843 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 843 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/inuyama_aoi_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T04:04:19+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T16:50:52+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Inuyama Aoi ====================== This is the dataset of Inuyama Aoi, containing 299 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
1e7a3b53e2d72050588e7b9679591ecf6b478f8a
## Citations ```bibtex @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ```bibtex @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ```bibtex @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` ```bibtex @article{dziri2022faithdial, title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue}, author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva}, journal={arXiv preprint, arXiv:2204.10757}, year={2022}, url={https://arxiv.org/abs/2204.10757} } ``` ```bibtex @misc{alex2019multinews, title={Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model}, author={Alexander R. Fabbri and Irene Li and Tianwei She and Suyi Li and Dragomir R. Radev}, year={2019}, eprint={1906.01749}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
CJ-gyuwonpark/merge-data-v66
[ "arxiv:2204.10757", "arxiv:1906.01749", "region:us" ]
2023-09-26T04:26:33+00:00
{}
2023-10-25T05:46:45+00:00
[ "2204.10757", "1906.01749" ]
[]
TAGS #arxiv-2204.10757 #arxiv-1906.01749 #region-us
s
[]
[ "TAGS\n#arxiv-2204.10757 #arxiv-1906.01749 #region-us \n" ]
[ 22 ]
[ "passage: TAGS\n#arxiv-2204.10757 #arxiv-1906.01749 #region-us \n" ]
9d3a2782785f8f5ecc81f22d9793798c395299fd
# Dataset of imai_lisa/今井リサ (BanG Dream!) This is the dataset of imai_lisa/今井リサ (BanG Dream!), containing 500 images and their tags. The core tags of this character are `brown_hair, long_hair, bangs, green_eyes, earrings, breasts, ponytail, sidelocks, half_updo`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 685.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imai_lisa_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 384.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imai_lisa_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1184 | 825.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imai_lisa_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 600.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imai_lisa_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1184 | 1.16 GiB | [Download](https://huggingface.co/datasets/CyberHarem/imai_lisa_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/imai_lisa_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | blush, 1girl, solo_focus, hetero, nipples, open_mouth, sex, vaginal, 1boy, pussy, large_breasts, penis, sweat, jewelry, navel, spread_legs, completely_nude, looking_at_viewer, mosaic_censoring, smile, collarbone, medium_breasts, on_back | | 1 | 27 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, off-shoulder_sweater, smile, solo, bare_shoulders, long_sleeves, looking_at_viewer, sweater_dress, blush, collarbone, necklace, ribbed_sweater, black_belt, simple_background, white_background, medium_breasts, wavy_hair, hair_between_eyes, open_mouth, pendant, sleeves_past_wrists, black_thighhighs, cleavage, sitting, :3 | | 2 | 11 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, beret, long_sleeves, solo, white_shirt, blush, jewelry, looking_at_viewer, smile, red_headwear, simple_background, upper_body, collarbone, open_mouth, plaid_skirt, shoulder_bag, wavy_hair, closed_mouth, grey_skirt, one_eye_closed, white_background | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, collared_shirt, grey_jacket, jewelry, long_sleeves, looking_at_viewer, school_uniform, solo, striped_necktie, white_shirt, blazer, smile, plaid_skirt, pleated_skirt, simple_background, blush, white_background, brown_necktie, cowboy_shot, miniskirt, brown_skirt, closed_mouth | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, collared_shirt, jewelry, school_uniform, solo, sweater_vest, white_shirt, open_mouth, short_sleeves, upper_body, :d, looking_at_viewer, simple_background, striped_necktie, white_background, blue_necktie, hair_between_eyes | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, looking_at_viewer, solo, blush, collarbone, day, outdoors, smile, bare_shoulders, blue_sky, cleavage, navel, standing, closed_mouth, cloud, cowboy_shot, ocean, large_breasts, medium_breasts, frilled_bikini, hair_between_eyes, stomach, wavy_hair, blurry_background, bracelet, groin, halterneck, multi-strapped_bikini, side-tie_bikini_bottom, water | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | 
![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, hair_flower, solo, looking_at_viewer, smile, blush, red_rose, frills, necklace, bare_shoulders, gloves, veil, black_dress | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, black_feathers, feather_hair_ornament, hair_flower, looking_at_viewer, smile, solo, black_choker, detached_sleeves, dress, lace_choker, brooch, long_sleeves, upper_body, blush, red_bowtie, black_rose, blue_rose, electric_guitar, frills, holding, lace-trimmed_sleeves, neck_ribbon, red_ribbon, simple_background, white_background | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | feather_hair_ornament, hair_flower, hairband, looking_at_viewer, purple_rose, red_rose, smile, blue_feathers, cross-laced_clothes, crown, necklace, one_eye_closed, solo, 1girl, ;d, black_choker, blue_jacket, blue_rose, cleavage, long_sleeves, open_mouth, simple_background, upper_body, white_background, black_feathers, black_ribbon, blush, corset, cropped_jacket, dress, holding, multiple_girls, round_teeth | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | cleavage, collarbone, crop_top, hair_bow, looking_at_viewer, midriff, denim_shorts, hair_flower, heart, medium_breasts, navel, necklace, short_shorts, smile, 1girl, bare_shoulders, belt, black_bow, black_gloves, black_jacket, blush, choker, hoop_earrings, one_side_up, solo, stomach, cowboy_shot, hand_up, large_breasts, off_shoulder, open_jacket, spaghetti_strap, thigh_strap, thighhighs, wavy_hair | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blush | 1girl | solo_focus | hetero | nipples | open_mouth | sex | vaginal | 1boy | pussy | large_breasts | penis | sweat | jewelry | navel | spread_legs | completely_nude | looking_at_viewer | mosaic_censoring | smile | collarbone | medium_breasts | on_back | off-shoulder_sweater | solo | bare_shoulders | long_sleeves | sweater_dress | necklace | ribbed_sweater | black_belt | simple_background | white_background | wavy_hair | hair_between_eyes | pendant | sleeves_past_wrists | black_thighhighs | cleavage | sitting | :3 | beret | white_shirt | red_headwear | upper_body | plaid_skirt | shoulder_bag | closed_mouth | grey_skirt | one_eye_closed | collared_shirt | grey_jacket | school_uniform | striped_necktie | blazer | pleated_skirt | brown_necktie | cowboy_shot | miniskirt | brown_skirt | sweater_vest | short_sleeves | :d | blue_necktie | day | outdoors | blue_sky | standing | cloud | ocean | frilled_bikini | stomach | blurry_background | bracelet | groin | halterneck | multi-strapped_bikini | side-tie_bikini_bottom | water | hair_flower | red_rose | frills | gloves | veil | black_dress | black_feathers | feather_hair_ornament | black_choker | detached_sleeves | dress | lace_choker | brooch | red_bowtie | black_rose | blue_rose | electric_guitar | holding | lace-trimmed_sleeves | neck_ribbon | red_ribbon | hairband | purple_rose | blue_feathers | cross-laced_clothes | crown | ;d | blue_jacket | black_ribbon | corset | cropped_jacket | multiple_girls | round_teeth | crop_top | hair_bow | midriff | denim_shorts | heart | short_shorts 
| belt | black_bow | black_gloves | black_jacket | choker | hoop_earrings | one_side_up | hand_up | off_shoulder | open_jacket | spaghetti_strap | thigh_strap | thighhighs | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:---------|:----------|:-------------|:------|:----------|:-------|:--------|:----------------|:--------|:--------|:----------|:--------|:--------------|:------------------|:--------------------|:-------------------|:--------|:-------------|:-----------------|:----------|:-----------------------|:-------|:-----------------|:---------------|:----------------|:-----------|:-----------------|:-------------|:--------------------|:-------------------|:------------|:--------------------|:----------|:----------------------|:-------------------|:-----------|:----------|:-----|:--------|:--------------|:---------------|:-------------|:--------------|:---------------|:---------------|:-------------|:-----------------|:-----------------|:--------------|:-----------------|:------------------|:---------|:----------------|:----------------|:--------------|:------------|:--------------|:---------------|:----------------|:-----|:---------------|:------|:-----------|:-----------|:-----------|:--------|:--------|:-----------------|:----------|:--------------------|:-----------|:--------|:-------------|:------------------------|:-------------------------|:--------|:--------------|:-----------|:---------|:---------|:-------|:--------------|:-----------------|:------------------------|:---------------|:-------------------|:--------|:--------------|:---------|:-------------|:-------------|:------------|:------------------|:----------|:-----------------------|:--------------|:-------------|:-----------|:--------------|:----------------|:----------------------|:--------|:-----|:--------------|:---------------|:---------|:-----------------|:-----------------|:--------------|:-----------|:-----------|:----------|:---------------|:--------|:---------------|:-------|:------------|:---------------|:---------------|:---------|:----------------|:--------------|:----------|:---------------|:--------------|:------------------|:--------------|:-------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 27 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | X | | | | | | | | | | | | X | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 11 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | X | | | | | | | | X 
| | | | X | | X | X | | | | X | | X | | | | | X | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | | | | | | X | | | | X | | X | | | | | X | | X | | | | | X | X | | | | | | | | | | X | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | | X | | | | | | | | X | | | | X | | | | | | | X | | | | | | | X | X | | X | | | | | | | | X | | X | | | | | | X | | X | X | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | | | | | | | X | | | | X | | | X | | X | X | X | | | X | X | | | | | | | | X | X | | | | X | | | | | | | | | X | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 15 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | | | | | | | | | | | | | | X | | X | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | | | | | | | | | | | | | | | X | | X | | | | | X | | X | | | | | X | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | | | | X | | | | | | | | | | | | X | | X | | | | | X | | X | | X | | | X | X | | | | | | X | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | X | X | X | | X | | | | | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | | | | | | | | | X | | | | X | | | X | | X | X | X | | | X | X | | | X | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | 
| | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/imai_lisa_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T04:31:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:22:35+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of imai\_lisa/今井リサ (BanG Dream!) ======================================== This is the dataset of imai\_lisa/今井リサ (BanG Dream!), containing 500 images and their tags. The core tags of this character are 'brown\_hair, long\_hair, bangs, green\_eyes, earrings, breasts, ponytail, sidelocks, half\_updo', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
5649b8e2996a57d399ae3ea047b1b69980e049a0
#### Persian Audio-Visual Dataset

### Dataset Summary

The Persian audio-visual dataset is a collection of audio and visual data in the Persian language. The dataset consists of audio and visual data designed for speech recognition and lip reading tasks. It includes over 24,000 videos totaling over 27 hours of data. The videos were collected from news recordings, and a face-detection algorithm was used to crop and extract the region of interest.

You can download the dataset with the command below:

```python
from datasets import load_dataset

dataset = load_dataset("SLPL/AV_Dataset")
```

### Supported Tasks and Leaderboards

The dataset can be used for tasks such as speech recognition, audio-visual speech recognition, and speaker recognition.

### Languages

The dataset is in the Persian language.

### Format

The video clips are in MP4 format, each with a duration of 4 seconds and a frame size of 224*224 pixels, enabling efficient processing and analysis (see the inspection sketch below).

### Data Splits

The data is unsplit.
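### Inspecting a clip

The snippet below is a minimal sketch of how one might decode a clip's frames with OpenCV; the file path is illustrative (the on-disk layout depends on how you store the clips locally), and OpenCV is only one of several possible decoders.

```python
import cv2  # opencv-python

# Illustrative path; not the dataset's actual layout.
clip_path = "clips/example_clip.mp4"

cap = cv2.VideoCapture(clip_path)
fps = cap.get(cv2.CAP_PROP_FPS)

frames = []
while True:
    ok, frame = cap.read()  # BGR uint8 array, expected shape (224, 224, 3)
    if not ok:
        break
    frames.append(frame)
cap.release()

# A 4-second clip should yield roughly 4 * fps frames.
print(f"decoded {len(frames)} frames at {fps:.1f} fps")
```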
SLPL/AV_Dataset
[ "task_categories:feature-extraction", "task_categories:text-generation", "size_categories:10K<n<100K", "language:fa", "region:us" ]
2023-09-26T04:36:30+00:00
{"language": ["fa"], "size_categories": ["10K<n<100K"], "task_categories": ["feature-extraction", "text-generation"], "pretty_name": "m"}
2023-10-12T19:35:29+00:00
[]
[ "fa" ]
TAGS #task_categories-feature-extraction #task_categories-text-generation #size_categories-10K<n<100K #language-Persian #region-us
#### Persian Audio-Visual Dataset ### Dataset Summary The Persian audio-visual dataset is a collection of audio and visual data in the Persian language. The dataset consists of audio and visual data designed for speech recognition and lip reading tasks. It includes over 24,000 videos totaling over 27 hours of data. The videos were collected from news recordings, and a face-detection algorithm was used to crop and extract the region of interest. You can download the dataset with the command below: ### Supported Tasks and Leaderboards The dataset can be used for tasks such as speech recognition, audio-visual speech recognition, and speaker recognition. ### Languages The dataset is in the Persian language. ### Format The video clips are in MP4 format, each with a duration of 4 seconds and a frame size of 224*224 pixels, enabling efficient processing and analysis. ### Data Splits The data is unsplit.
[ "#### Persian Audio-Visual Dataset", "### Dataset Summary\n\nThe Persian audio-visual dataset is a collection of audio and visual data in the Persian language.The dataset consists of audio and visual data designed for speech recognition and lip reading tasks. The dataset includes over 24,000 of videos totaling over 27 hours of data.The videos are collected from news recordings and utilized a face-detection algorithm to crop and extract region of intrest.\n\nYou can download the dataset by the command below:", "### Supported Tasks and Leaderboards\n\nThe dataset can be used for tasks such as speech recognition, audio-visual speech recognition, and speaker recognition.", "### Languages\n\nThe dataset is in the Persian language.", "### Format\nThe video clips are in MP4 format with a duration of 4 seconds each,enabaling efficient processing and analysis.(lenght and width of 224*224 pixels)", "### Data Splits\nThe data is unsplit." ]
[ "TAGS\n#task_categories-feature-extraction #task_categories-text-generation #size_categories-10K<n<100K #language-Persian #region-us \n", "#### Persian Audio-Visual Dataset", "### Dataset Summary\n\nThe Persian audio-visual dataset is a collection of audio and visual data in the Persian language.The dataset consists of audio and visual data designed for speech recognition and lip reading tasks. The dataset includes over 24,000 of videos totaling over 27 hours of data.The videos are collected from news recordings and utilized a face-detection algorithm to crop and extract region of intrest.\n\nYou can download the dataset by the command below:", "### Supported Tasks and Leaderboards\n\nThe dataset can be used for tasks such as speech recognition, audio-visual speech recognition, and speaker recognition.", "### Languages\n\nThe dataset is in the Persian language.", "### Format\nThe video clips are in MP4 format with a duration of 4 seconds each,enabaling efficient processing and analysis.(lenght and width of 224*224 pixels)", "### Data Splits\nThe data is unsplit." ]
[ 46, 10, 105, 35, 14, 44, 12 ]
[ "passage: TAGS\n#task_categories-feature-extraction #task_categories-text-generation #size_categories-10K<n<100K #language-Persian #region-us \n#### Persian Audio-Visual Dataset### Dataset Summary\n\nThe Persian audio-visual dataset is a collection of audio and visual data in the Persian language.The dataset consists of audio and visual data designed for speech recognition and lip reading tasks. The dataset includes over 24,000 of videos totaling over 27 hours of data.The videos are collected from news recordings and utilized a face-detection algorithm to crop and extract region of intrest.\n\nYou can download the dataset by the command below:### Supported Tasks and Leaderboards\n\nThe dataset can be used for tasks such as speech recognition, audio-visual speech recognition, and speaker recognition.### Languages\n\nThe dataset is in the Persian language.### Format\nThe video clips are in MP4 format with a duration of 4 seconds each,enabaling efficient processing and analysis.(lenght and width of 224*224 pixels)### Data Splits\nThe data is unsplit." ]
74efbf7085073f4c789ac5f07f128ab102d4524e
# Dataset of hikawa_hina/氷川日菜 (BanG Dream!)

This is the dataset of hikawa_hina/氷川日菜 (BanG Dream!), containing 500 images and their tags.

The core tags of this character are `aqua_hair, green_eyes, short_hair, bow, bangs, braid, hair_bow, side_braids`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images   | Size       | Download                                                                                                                 | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              | 500      | 731.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_hina_bangdream/resolve/main/dataset-raw.zip)                | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 426.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_hina_bangdream/resolve/main/dataset-800.zip)                | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  | 1178     | 879.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_hina_bangdream/resolve/main/dataset-stage3-p480-800.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 500      | 650.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_hina_bangdream/resolve/main/dataset-1200.zip)               | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 | 1178     | 1.24 GiB   | [Download](https://huggingface.co/datasets/CyberHarem/hikawa_hina_bangdream/resolve/main/dataset-stage3-p480-1200.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/hikawa_hina_bangdream',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, twin_braids, frills, hair_ribbon, open_mouth, blush, blue_choker, white_ribbon, :d, blue_bow, collarbone, bare_shoulders, electric_guitar, white_background, blue_dress, teeth, wrist_bow, yellow_bow | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_ribbon, looking_at_viewer, short_sleeves, solo, alternate_hairstyle, beret, blue_bow, blue_headwear, open_mouth, pom_pom_(clothes), smile, x_hair_ornament, blue_choker, blue_dress, double-breasted, neck_ribbon, striped_bow, wrist_cuffs, back_bow, blush, earrings, frilled_sleeves, hair_ribbon, hat_flower | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | earrings, 1girl, solo, beret, blue_bow, blue_headwear, frilled_shirt_collar, hair_ornament, hat_bow, alternate_hairstyle, long_sleeves, looking_at_viewer, star_(symbol), brooch, striped_bow, open_mouth, :d, constellation_print, long_hair, striped_ribbon, twin_braids, capelet, upper_body, ascot, blush, bowtie, neck_ribbon, star_(sky), starry_sky_print | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | grey_jacket, school_uniform, 1girl, blazer, collared_shirt, long_sleeves, looking_at_viewer, solo, white_shirt, blush, twin_braids, open_mouth, yellow_bow, :d, brown_necktie, diagonal-striped_necktie, plaid_skirt, pleated_skirt, cowboy_shot, diagonal_stripes, hand_up, miniskirt, standing, upper_body, upper_teeth_only, white_background | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, collared_shirt, looking_at_viewer, plaid_skirt, pleated_skirt, school_uniform, simple_background, solo, twin_braids, white_shirt, black_socks, blue_necktie, blue_skirt, full_body, kneehighs, miniskirt, short_sleeves, sweater_vest, white_background, diagonal-striped_necktie, medium_hair, open_mouth, yellow_bow, breasts, grin, no_shoes, parted_lips, shadow, wariza | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 2girls, sisters, twincest, yuri, long_hair, upper_body, blush, long_sleeves, looking_at_another, parted_lips | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | 
![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, earrings, white_gloves, looking_at_viewer, smile, solo, blush, fur-trimmed_capelet, hair_ornament, long_sleeves, red_ribbon, hat_flower, long_hair, pom_pom_(clothes), red_bow, braided_bangs, corset, dress, frills, fur-trimmed_sleeves, gift, holding_lantern, night, open_mouth, red_choker, shorts, sitting, thighhighs | | 7 | 15 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, demon_horns, smile, solo, mini_crown, blush, fur_collar, heart_earrings, looking_at_viewer, clothing_cutout, cross-laced_clothes, demon_tail, striped, demon_wings, red_dress, red_gloves, bracelet, fur_trim, hairband, halloween_costume, thighhighs, black_ribbon, hair_ribbon, navel, open_mouth, pink_gloves, jack-o'-lantern, medium_breasts, polearm, polka_dot_bow | | 8 | 7 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | blush, detached_collar, fake_animal_ears, looking_at_viewer, medium_breasts, rabbit_ears, black_leotard, cleavage, playboy_bunny, strapless_leotard, wrist_cuffs, 1girl, bare_shoulders, cowboy_shot, long_hair, red_bowtie, standing, fishnet_pantyhose, one_eye_closed, open_mouth, swept_bangs, 2girls, covered_navel, hairband, sisters, smile, solo_focus, two-tone_background | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1boy, 1girl, blush, hetero, solo_focus, censored, open_mouth, sweat, collarbone, girl_on_top, looking_at_viewer, navel, nipples, penis, clothed_female_nude_male, clothed_sex, cowgirl_position, cum, green_hair, indoors, large_breasts, shirt, swept_bangs, tearing_up, twin_braids, vaginal, yellow_bow | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | twin_braids | frills | hair_ribbon | open_mouth | blush | blue_choker | white_ribbon | :d | blue_bow | collarbone | bare_shoulders | electric_guitar | white_background | blue_dress | teeth | wrist_bow | yellow_bow | blue_ribbon | short_sleeves | alternate_hairstyle | beret | blue_headwear | pom_pom_(clothes) | smile | x_hair_ornament | double-breasted | neck_ribbon | striped_bow | wrist_cuffs | back_bow | earrings | frilled_sleeves | hat_flower | frilled_shirt_collar | hair_ornament | hat_bow | long_sleeves | star_(symbol) | brooch | constellation_print | long_hair | striped_ribbon | capelet | upper_body | ascot | bowtie | star_(sky) | starry_sky_print | grey_jacket | school_uniform | blazer | collared_shirt | white_shirt | brown_necktie | diagonal-striped_necktie | plaid_skirt | pleated_skirt | cowboy_shot | diagonal_stripes | hand_up | miniskirt | standing | upper_teeth_only | simple_background | black_socks | blue_necktie | blue_skirt | full_body | kneehighs | sweater_vest | medium_hair | breasts | grin | no_shoes | parted_lips | shadow | wariza | 2girls | sisters | twincest | yuri | looking_at_another | white_gloves | fur-trimmed_capelet | red_ribbon | red_bow | braided_bangs | corset | dress | fur-trimmed_sleeves | gift | holding_lantern | night | red_choker | shorts | sitting | thighhighs | demon_horns | mini_crown | fur_collar | heart_earrings | clothing_cutout | cross-laced_clothes | demon_tail | 
striped | demon_wings | red_dress | red_gloves | bracelet | fur_trim | hairband | halloween_costume | black_ribbon | navel | pink_gloves | jack-o'-lantern | medium_breasts | polearm | polka_dot_bow | detached_collar | fake_animal_ears | rabbit_ears | black_leotard | cleavage | playboy_bunny | strapless_leotard | red_bowtie | fishnet_pantyhose | one_eye_closed | swept_bangs | covered_navel | solo_focus | two-tone_background | 1boy | hetero | censored | sweat | girl_on_top | nipples | penis | clothed_female_nude_male | clothed_sex | cowgirl_position | cum | green_hair | indoors | large_breasts | shirt | tearing_up | vaginal | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------|:---------|:--------------|:-------------|:--------|:--------------|:---------------|:-----|:-----------|:-------------|:-----------------|:------------------|:-------------------|:-------------|:--------|:------------|:-------------|:--------------|:----------------|:----------------------|:--------|:----------------|:--------------------|:--------|:------------------|:------------------|:--------------|:--------------|:--------------|:-----------|:-----------|:------------------|:-------------|:-----------------------|:----------------|:----------|:---------------|:----------------|:---------|:----------------------|:------------|:-----------------|:----------|:-------------|:--------|:---------|:-------------|:-------------------|:--------------|:-----------------|:---------|:-----------------|:--------------|:----------------|:---------------------------|:--------------|:----------------|:--------------|:-------------------|:----------|:------------|:-----------|:-------------------|:--------------------|:--------------|:---------------|:-------------|:------------|:------------|:---------------|:--------------|:----------|:-------|:-----------|:--------------|:---------|:---------|:---------|:----------|:-----------|:-------|:---------------------|:---------------|:----------------------|:-------------|:----------|:----------------|:---------|:--------|:----------------------|:-------|:------------------|:--------|:-------------|:---------|:----------|:-------------|:--------------|:-------------|:-------------|:-----------------|:------------------|:----------------------|:-------------|:----------|:--------------|:------------|:-------------|:-----------|:-----------|:-----------|:--------------------|:---------------|:--------|:--------------|:------------------|:-----------------|:----------|:----------------|:------------------|:-------------------|:--------------|:----------------|:-----------|:----------------|:--------------------|:-------------|:--------------------|:-----------------|:--------------|:----------------|:-------------|:----------------------|:-------|:---------|:-----------|:--------|:--------------|:----------|:--------|:---------------------------|:--------------|:-------------------|:------|:-------------|:----------|:----------------|:--------|:-------------|:----------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | | X | X | X | X | | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | | | X | X | | | X | X | | | | | | | | | | | X | X | X | | | | | X | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 15 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | | | X | X | | | X | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | | | X | X | | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | | X | X | X | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | X | X | | | | | | | X | | X | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 15 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) 
| ![](samples/7/clu7-sample4.png) | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 7 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | | | X | X | | | | | | X | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | X | X | | | X | X | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/hikawa_hina_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T04:40:39+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T17:07:01+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of hikawa\_hina/氷川日菜 (BanG Dream!) ========================================== This is the dataset of hikawa\_hina/氷川日菜 (BanG Dream!), containing 500 images and their tags. The core tags of this character are 'aqua\_hair, green\_eyes, short\_hair, bow, bangs, braid, hair\_bow, side\_braids', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e1ce2957fda6fb928ee95553011f24489d08c932
# Dataset Card for "practice2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Manduzamzam/practice2
[ "region:us" ]
2023-09-26T04:43:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5886291.0, "num_examples": 112}, {"name": "validation", "num_bytes": 63094.0, "num_examples": 10}], "download_size": 5899961, "dataset_size": 5949385.0}}
2023-09-26T06:45:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "practice2" More Information needed
[ "# Dataset Card for \"practice2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"practice2\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"practice2\"\n\nMore Information needed" ]
6da9f4e19069b6d18d342eb314610b825acd6026
# Dataset of Shima Rin

This is the dataset of Shima Rin, containing 300 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name            | Images   | Download                                 | Description                                                                               |
|:----------------|---------:|:-----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw             | 300      | [Download](dataset-raw.zip)              | Raw data with meta information.                                                           |
| raw-stage3      | 721      | [Download](dataset-raw-stage3.zip)       | 3-stage cropped raw data with meta information.                                           |
| raw-stage3-eyes | 799      | [Download](dataset-raw-stage3-eyes.zip)  | 3-stage cropped (with eye-focus) raw data with meta information.                          |
| 384x512         | 300      | [Download](dataset-384x512.zip)          | 384x512 aligned dataset.                                                                  |
| 512x704         | 300      | [Download](dataset-512x704.zip)          | 512x704 aligned dataset.                                                                  |
| 640x880         | 300      | [Download](dataset-640x880.zip)          | 640x880 aligned dataset.                                                                  |
| stage3-640      | 721      | [Download](dataset-stage3-640.zip)       | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                   |
| stage3-800      | 721      | [Download](dataset-stage3-800.zip)       | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                   |
| stage3-p512-640 | 628      | [Download](dataset-stage3-p512-640.zip)  | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 | 799      | [Download](dataset-stage3-eyes-640.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels.  |
| stage3-eyes-800 | 799      | [Download](dataset-stage3-eyes-800.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels.  |
CyberHarem/shima_rin_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T04:48:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T17:34:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Shima Rin ==================== This is the dataset of Shima Rin, containing 300 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
ed412f8cf2ab6fd1db9ea07e3f69cb8781e957ab
# Dataset of Toba Minami

This is the dataset of Toba Minami, containing 100 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name            | Images   | Download                                 | Description                                                                               |
|:----------------|---------:|:-----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw             | 100      | [Download](dataset-raw.zip)              | Raw data with meta information.                                                           |
| raw-stage3      | 237      | [Download](dataset-raw-stage3.zip)       | 3-stage cropped raw data with meta information.                                           |
| raw-stage3-eyes | 269      | [Download](dataset-raw-stage3-eyes.zip)  | 3-stage cropped (with eye-focus) raw data with meta information.                          |
| 384x512         | 100      | [Download](dataset-384x512.zip)          | 384x512 aligned dataset.                                                                  |
| 512x704         | 100      | [Download](dataset-512x704.zip)          | 512x704 aligned dataset.                                                                  |
| 640x880         | 100      | [Download](dataset-640x880.zip)          | 640x880 aligned dataset.                                                                  |
| stage3-640      | 237      | [Download](dataset-stage3-640.zip)       | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                   |
| stage3-800      | 237      | [Download](dataset-stage3-800.zip)       | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                   |
| stage3-p512-640 | 207      | [Download](dataset-stage3-p512-640.zip)  | 3-stage cropped dataset with the area not less than 512x512 pixels.                       |
| stage3-eyes-640 | 269      | [Download](dataset-stage3-eyes-640.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels.  |
| stage3-eyes-800 | 269      | [Download](dataset-stage3-eyes-800.zip)  | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels.  |
CyberHarem/toba_minami_yurucamp
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T05:07:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-26T17:50:37+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Toba Minami ====================== This is the dataset of Toba Minami, containing 100 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
1642d98d19456a0a18c6908e408a1cbfa1f28b01
# Chinese Administrative Division Information

Data demo: [https://passer-by.com/data_location/](https://passer-by.com/data_location/)

Three-level linked-select plugin: [https://jquerywidget.com/jquery-citys/](https://jquerywidget.com/jquery-citys/)

### Copyright

The database is compiled by [passer-by.com](https://passer-by.com/); follow this project to get the latest and most complete data.

### Data notes

- Province, city, and district data come from the Ministry of Civil Affairs, State Council announcements, and the National Bureau of Statistics, ensuring timely and authoritative updates;
- Street (town, township) data is very large and subject to many official documents at every level, so timeliness cannot be guaranteed (latest data: October 31, 2022);
- Street (town, township) data spans many files; to stay compatible with old administrative division codes, updates overwrite the files in place;
- The data is administrative-division data organized by administrative region. Administrative management zones overlap with administrative regions and are not included;

  (administrative management zones usually include: special economic zones / economic development zones / high-tech zones / new districts / industrial zones; some administrative management zones have been upgraded to administrative regions and must be distinguished)

### About administrative division codes

Uses the national standard "Codes for the Administrative Divisions of the People's Republic of China" (GB/T2260).
The code is divided into three levels; from left to right they mean:

- digits 1-2: the province (autonomous region, municipality, special administrative region)
- digits 3-4: the city (region, autonomous prefecture, league, and the summary code for districts and counties under a centrally administered municipality)
- digits 5-6: the county (municipal district, county-level city, banner)

A small parsing sketch is given at the end of this card.

#### Code standards

* [Ministry of Civil Affairs of the PRC - Administrative Division Codes of the PRC](https://www.mca.gov.cn/mzsj/xzqh/2022/202201xzqh.html)
* [National Bureau of Statistics of China - Codes for Statistical Divisions and Urban-Rural Classification](http://www.stats.gov.cn/sj/tjbz/tjyqhdmhcxhfdm/2022/index.html)
* [National Bureau of Statistics of China - Compilation Rules for Codes of Statistical Divisions and Urban-Rural Classification](http://www.stats.gov.cn/sj/tjbz/gjtjbz/202302/t20230213_1902741.html)

Codes for Hong Kong, Macau, and Taiwan are not standard codes; they are custom codes compiled with reference to the standard coding rules so that users can work with one uniform scheme.

### Feedback

If any data is wrong or not updated in time, please let us know (leave a message under "Issues") so it can be updated as soon as possible~
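### Example: splitting a division code

As a worked illustration of the positional structure described above, the sketch below splits a six-digit GB/T2260 code into its three levels; the helper name and example code are illustrative only and not part of the dataset.

```python
def split_division_code(code: str) -> dict:
    """Split a six-digit GB/T2260 code into its three positional levels."""
    if len(code) != 6 or not code.isdigit():
        raise ValueError("expected a six-digit administrative division code")
    return {
        "province": code[:2] + "0000",  # digits 1-2, padded back to a full code
        "city": code[:4] + "00",        # digits 3-4, padded back to a full code
        "county": code,                 # digits 5-6 complete the county-level code
    }

# Example: 110101 is Dongcheng District under Beijing (110000).
print(split_division_code("110101"))
# {'province': '110000', 'city': '110100', 'county': '110101'}
```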
SamuelGalaxys/iJupyter
[ "license:apache-2.0", "region:us" ]
2023-09-26T05:16:00+00:00
{"license": "apache-2.0"}
2023-09-27T07:34:25+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Chinese Administrative Division Information Data demo: URL Three-level linked-select plugin: URL ### Copyright The database is compiled by URL; follow this project to get the latest and most complete data. ### Data notes - Province, city, and district data come from the Ministry of Civil Affairs, State Council announcements, and the National Bureau of Statistics, ensuring timely and authoritative updates; - Street (town, township) data is very large and subject to many official documents at every level, so timeliness cannot be guaranteed (latest data: October 31, 2022); - Street (town, township) data spans many files; to stay compatible with old administrative division codes, updates overwrite the files in place; - The data is administrative-division data organized by administrative region. Administrative management zones overlap with administrative regions and are not included; (administrative management zones usually include: special economic zones / economic development zones / high-tech zones / new districts / industrial zones; some administrative management zones have been upgraded to administrative regions and must be distinguished) ### About administrative division codes Uses the national standard "Codes for the Administrative Divisions of the People's Republic of China" (GB/T2260). The code is divided into three levels; from left to right they mean: - digits 1-2: the province (autonomous region, municipality, special administrative region) - digits 3-4: the city (region, autonomous prefecture, league, and the summary code for districts and counties under a centrally administered municipality) - digits 5-6: the county (municipal district, county-level city, banner). #### Code standards * Ministry of Civil Affairs of the PRC - Administrative Division Codes of the PRC * National Bureau of Statistics of China - Codes for Statistical Divisions and Urban-Rural Classification * National Bureau of Statistics of China - Compilation Rules for Codes of Statistical Divisions and Urban-Rural Classification Codes for Hong Kong, Macau, and Taiwan are not standard codes; they are custom codes compiled with reference to the standard coding rules so that users can work with one uniform scheme. ### Feedback If any data is wrong or not updated in time, please let us know (leave a message under "Issues") so it can be updated as soon as possible~
[ "# 中国行政区划信息\n\n\n数据演示地址:URL\n\n三级联动插件:URL", "### 版权\n数据库由 URL 整理,获取最新最全的数据还请关注此项目。", "### 数据说明\n- 省、市、区数据来自于民政局、国务院公告、国家统计局,确保及时更新和权威;\n- 街道(镇、乡)数据由于数据庞大,各地各级之前公函较多,无法保证及时有效(最新数据2022年10月31日);\n- 街道(镇、乡)数据文件较多,为兼容旧行政区划代码,采取文件覆盖式更新;\n- 数据是以行政区为单位的行政区划数据。行政管理区与行政区存在重合,不予收录;\n\n (行政管理区通常包含:*经济特区/经济开发区/高新区/新区/工业区;亦有部分行政管理区升为行政区,需加以区分)", "### 关于行政区划代码\n使用《中华人民共和国行政区划代码》国家标准(GB/T2260).\n这部分可分为三个层次,从左到右的含义分别是:\n- 第一、二位表示省(自治区、直辖市、特别行政区)\n- 第三、四位表示市(地区、自治州、盟及国家直辖市所属市辖区和县的汇总码)\n- 第五、六位表示县(市辖区、县级市、旗).", "#### 代码标准\n* 中华人民共和国民政部-中华人民共和国行政区划代码\n* 中华人民共和国国家统计局-统计用区划和城乡划分代码\n* 中华人民共和国国家统计局-统计用区划代码和城乡划分代码编制规则\n\n港澳台地区编码并非标准编码,而是整理和参考标准编码规则自定义的,方便用户统一使用。", "### 反馈\n如果有哪些地方数据错误或者更新不及时,还请告知(在\"Issues\"中留言),以便尽快更新~" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# 中国行政区划信息\n\n\n数据演示地址:URL\n\n三级联动插件:URL", "### 版权\n数据库由 URL 整理,获取最新最全的数据还请关注此项目。", "### 数据说明\n- 省、市、区数据来自于民政局、国务院公告、国家统计局,确保及时更新和权威;\n- 街道(镇、乡)数据由于数据庞大,各地各级之前公函较多,无法保证及时有效(最新数据2022年10月31日);\n- 街道(镇、乡)数据文件较多,为兼容旧行政区划代码,采取文件覆盖式更新;\n- 数据是以行政区为单位的行政区划数据。行政管理区与行政区存在重合,不予收录;\n\n (行政管理区通常包含:*经济特区/经济开发区/高新区/新区/工业区;亦有部分行政管理区升为行政区,需加以区分)", "### 关于行政区划代码\n使用《中华人民共和国行政区划代码》国家标准(GB/T2260).\n这部分可分为三个层次,从左到右的含义分别是:\n- 第一、二位表示省(自治区、直辖市、特别行政区)\n- 第三、四位表示市(地区、自治州、盟及国家直辖市所属市辖区和县的汇总码)\n- 第五、六位表示县(市辖区、县级市、旗).", "#### 代码标准\n* 中华人民共和国民政部-中华人民共和国行政区划代码\n* 中华人民共和国国家统计局-统计用区划和城乡划分代码\n* 中华人民共和国国家统计局-统计用区划代码和城乡划分代码编制规则\n\n港澳台地区编码并非标准编码,而是整理和参考标准编码规则自定义的,方便用户统一使用。", "### 反馈\n如果有哪些地方数据错误或者更新不及时,还请告知(在\"Issues\"中留言),以便尽快更新~" ]
[ 14, 21, 22, 163, 110, 80, 31 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# 中国行政区划信息\n\n\n数据演示地址:URL\n\n三级联动插件:URL### 版权\n数据库由 URL 整理,获取最新最全的数据还请关注此项目。### 数据说明\n- 省、市、区数据来自于民政局、国务院公告、国家统计局,确保及时更新和权威;\n- 街道(镇、乡)数据由于数据庞大,各地各级之前公函较多,无法保证及时有效(最新数据2022年10月31日);\n- 街道(镇、乡)数据文件较多,为兼容旧行政区划代码,采取文件覆盖式更新;\n- 数据是以行政区为单位的行政区划数据。行政管理区与行政区存在重合,不予收录;\n\n (行政管理区通常包含:*经济特区/经济开发区/高新区/新区/工业区;亦有部分行政管理区升为行政区,需加以区分)### 关于行政区划代码\n使用《中华人民共和国行政区划代码》国家标准(GB/T2260).\n这部分可分为三个层次,从左到右的含义分别是:\n- 第一、二位表示省(自治区、直辖市、特别行政区)\n- 第三、四位表示市(地区、自治州、盟及国家直辖市所属市辖区和县的汇总码)\n- 第五、六位表示县(市辖区、县级市、旗).#### 代码标准\n* 中华人民共和国民政部-中华人民共和国行政区划代码\n* 中华人民共和国国家统计局-统计用区划和城乡划分代码\n* 中华人民共和国国家统计局-统计用区划代码和城乡划分代码编制规则\n\n港澳台地区编码并非标准编码,而是整理和参考标准编码规则自定义的,方便用户统一使用。### 反馈\n如果有哪些地方数据错误或者更新不及时,还请告知(在\"Issues\"中留言),以便尽快更新~" ]
4731045036f40fa209317ecbe2c8ddf40dcdb475
# Bangumi Image Base of Non Non Biyori

This is the image base of bangumi Non Non Biyori; we detected 30 characters and 4423 images in total. The full dataset is [here](all.zip).

**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability); a download-and-extract sketch follows the preview table.

Here is the characters' preview:

| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 692 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) |
| 1 | 576 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) |
| 2 | 56 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) |
| 3 | 18 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) |
| 4 | 13 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) |
| 5 | 161 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) |
| 6 | 37 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) |
| 7 | 37 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) |
| 8 | 591 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) |
| 9 | 18 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) |
| 10 | 15 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) |
| 11 | 27 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) |
| 12 | 194 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) |
| 13 | 34 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) |
| 14 | 36 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) |
| 15 | 174 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) |
| 16 | 14 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | ![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) |
| 17 | 15 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) |
| 18 | 52 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) |
| 19 | 92 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) |
| 20 | 20 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) |
| 21 | 1032 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) |
| 22 | 27 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) |
| 23 | 7 | [Download](23/dataset.zip) | ![preview 1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | N/A |
| 24 | 177 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) |
| 25 | 83 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) |
| 26 | 49 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) |
| 27 | 51 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) |
| 28 | 13 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) |
| noise | 112 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
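Below is a minimal download-and-extract sketch, assuming the `all.zip` archive linked above is fetched from this repository via `huggingface_hub`; the local directory name is illustrative.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the full archive of this dataset repository
zip_file = hf_hub_download(
    repo_id='BangumiBase/nonnonbiyori',
    repo_type='dataset',
    filename='all.zip',
)

# extract it locally, then screen the images for the ~1% noisy samples before training
dataset_dir = 'nonnonbiyori_dataset'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```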
BangumiBase/nonnonbiyori
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
2023-09-26T05:23:31+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
2023-09-29T11:10:36+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #art #region-us
Bangumi Image Base of Non Non Biyori ==================================== This is the image base of bangumi Non Non Biyori, in which we detected 30 characters and 4423 images in total. The full dataset is here. Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability). Here is the characters' preview:
[]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
031767896bccfd5a449445ec4d534f272d79d38e
# Dataset Card for "52f7d589" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/52f7d589
[ "region:us" ]
2023-09-26T05:35:53+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 167, "num_examples": 10}], "download_size": 1327, "dataset_size": 167}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T05:35:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "52f7d589" More Information needed
[ "# Dataset Card for \"52f7d589\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"52f7d589\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"52f7d589\"\n\nMore Information needed" ]
f3b799091ba1947194309564b74d74f435c1079c
# Dataset Card for "squad_title_v3_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_title_v3_train_10_eval_10
[ "region:us" ]
2023-09-26T05:36:05+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 276687, "num_examples": 184}, {"name": "validation", "num_bytes": 64836, "num_examples": 68}], "download_size": 71168, "dataset_size": 341523}}
2023-09-26T05:36:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_title_v3_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
bb912e0520a4146321bb28219110695feebf745b
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Daya7624/Web_MD_Dataset_V0
[ "task_categories:text-generation", "size_categories:1K<n<10K", "medical", "region:us" ]
2023-09-26T05:39:26+00:00
{"size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "tags": ["medical"]}
2023-09-26T05:46:34+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-1K<n<10K #medical #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #medical #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 32, 8, 24, 32, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #medical #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
763fdc5d6a0bdc48ec81ad47649f8d18ba5fa67c
# Dataset Card for "squad_baseline_v3_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_baseline_v3_train_10_eval_10
[ "region:us" ]
2023-09-26T05:42:42+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65136, "num_examples": 58}, {"name": "validation", "num_bytes": 61979, "num_examples": 68}], "download_size": 0, "dataset_size": 127115}}
2023-09-26T07:02:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_baseline_v3_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_baseline_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_baseline_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_baseline_v3_train_10_eval_10\"\n\nMore Information needed" ]
f0ba2805698835b71e9f3bb3031efb35891a9c0a
# Dataset Card for "squad_context_v3_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_context_v3_train_10_eval_10
[ "region:us" ]
2023-09-26T05:43:31+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 407101, "num_examples": 184}, {"name": "validation", "num_bytes": 103870, "num_examples": 68}], "download_size": 0, "dataset_size": 510971}}
2023-09-26T05:52:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_context_v3_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_context_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_context_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_context_v3_train_10_eval_10\"\n\nMore Information needed" ]
1d3307be97230be9d5ac3266e7ebd0851f0063d4
# FakeScience [JSON dataset] A dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. ## Dataset Contents The dataset includes fictitious scientific entries, each with a unique concept, fact, hypothesis, law, or theory, along with a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge. ## Prompt The prompt used: ```json Generate a JSON-formatted list of fictitious scientific entries, each comprising a title, a description, and an assigned type (concept, fact, hypothesis, law, or theory). All the entries should be crafted to appear logically plausible, creating an illusion of scientific credibility. Adhere to the specified structure: '''json [ { "title": "...", "description": "...", "type": "concept" }, { "title": "...", "description": "...", "type": "fact" }, { "title": "...", "description": "...", "type": "hypothesis" }, { "title": "...", "description": "...", "type": "law" }, { "title": "...", "description": "...", "type": "theory" } ] ''' ``` ## Disclaimer Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
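As a quick sanity check, below is a minimal sketch in Python that loads a local copy of the data and verifies every entry follows the title/description/type structure requested by the prompt above; the `fakescience.json` path is a placeholder for whichever JSON file you downloaded from the repository.

```python
import json

# The five entry types named in the generation prompt.
ALLOWED_TYPES = {"concept", "fact", "hypothesis", "law", "theory"}

# "fakescience.json" is a placeholder path for a local copy of the data.
with open("fakescience.json", encoding="utf-8") as f:
    entries = json.load(f)

# Every entry should expose a title, a description, and one allowed type.
for entry in entries:
    missing = {"title", "description", "type"} - entry.keys()
    assert not missing, f"entry missing fields: {missing}"
    assert entry["type"] in ALLOWED_TYPES, f"unexpected type: {entry['type']}"

print(f"validated {len(entries)} entries")
```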
Tanvir1337/fakescience
[ "size_categories:1K<n<10K", "language:en", "license:cdla-sharing-1.0", "GPT-3.5", "GPT-4", "Claude", "Bard", "Alpaca", "LLaMA", "LLaMA-2", "Vicuna", "PaLM-2", "region:us" ]
2023-09-26T05:44:09+00:00
{"language": ["en"], "license": "cdla-sharing-1.0", "size_categories": ["1K<n<10K"], "pretty_name": "FakeScience", "tags": ["GPT-3.5", "GPT-4", "Claude", "Bard", "Alpaca", "LLaMA", "LLaMA-2", "Vicuna", "PaLM-2"]}
2023-10-14T13:57:33+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #region-us
# FakeScience [JSON dataset] A dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. ## Dataset Contents The dataset includes fictitious scientific entries, each with a unique concept, fact, hypothesis, law, or theory, along with a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge. ## Prompt The prompt used: ## Disclaimer Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
[ "# FakeScience [JSON dataset]\n\nA dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2.", "## Dataset Contents\n\nThe dataset includes fictitious scientific entries, each with a unique concept, fact, hypothesis, law and theory, along with a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge.", "## Prompt\n\nThe prompt used:", "## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #region-us \n", "# FakeScience [JSON dataset]\n\nA dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2.", "## Dataset Contents\n\nThe dataset includes fictitious scientific entries, each with a unique concept, fact, hypothesis, law and theory, along with a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge.", "## Prompt\n\nThe prompt used:", "## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
[ 68, 92, 59, 8, 73 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-cdla-sharing-1.0 #GPT-3.5 #GPT-4 #Claude #Bard #Alpaca #LLaMA #LLaMA-2 #Vicuna #PaLM-2 #region-us \n# FakeScience [JSON dataset]\n\nA dataset comprising artificially generated fictitious scientific theories, concepts, facts, laws and hypotheses, derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2.## Dataset Contents\n\nThe dataset includes fictitious scientific entries, each with a unique concept, fact, hypothesis, law and theory, along with a brief yet imaginative description. These entries are entirely fictional and should not be mistaken for real scientific knowledge.## Prompt\n\nThe prompt used:## Disclaimer\n\nPlease note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality." ]
c17e27c5ba44bce22e381274d7b8450507b09200
# Dataset of shirasagi_chisato (BanG Dream!) This is the dataset of shirasagi_chisato (BanG Dream!), containing 405 images and their tags. The core tags of this character are `blonde_hair, long_hair, bangs, purple_eyes, half_updo, ribbon`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 405 | 480.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirasagi_chisato_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 405 | 302.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirasagi_chisato_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 939 | 612.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirasagi_chisato_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 405 | 434.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirasagi_chisato_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 939 | 832.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shirasagi_chisato_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/shirasagi_chisato_bangdream', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 2girls, blush, long_sleeves, hair_ribbon, white_ribbon, yuri, closed_eyes, open_mouth, purple_hair, skirt, smile, white_background | | 1 | 19 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | hanasakigawa_school_uniform, long_sleeves, sailor_dress, brown_dress, double-breasted, looking_at_viewer, 1girl, blush, neck_ribbon, red_ribbon, smile, solo, white_sailor_collar, collarbone, open_mouth, white_background | | 2 | 12 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, smile, blush, looking_at_viewer, simple_background, white_background, shirt, short_sleeves, upper_body, character_name, flower, open_mouth, collarbone, happy_birthday, holding, necklace | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, solo, white_shirt, floral_print, short_sleeves, smile, yellow_skirt, blush, print_skirt, necklace, collarbone, shoulder_cutout, star_(symbol), white_background, hair_between_eyes | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | smile, 1girl, looking_at_viewer, solo, blush, bow, white_gloves, hat, short_sleeves, yellow_dress, electric_guitar, frilled_dress, bass_guitar, hair_flower, holding, flower_earrings, frilled_sleeves, open_mouth | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, choker, frills, hair_ribbon, solo, yellow_dress, yellow_ribbon, bare_shoulders, blush, looking_at_viewer, white_ribbon, electric_guitar, holding_instrument, smile, bass_guitar, breasts, collarbone, strapless, white_thighhighs, yellow_bow | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, looking_at_viewer, smile, solo, upper_body, white_gloves, yellow_dress, blush, frills, hair_flower, one_eye_closed, petals | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | 
![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, detached_collar, hair_ribbon, looking_at_viewer, smile, solo, blush, bowtie, hair_flower, pom_pom_(clothes), striped_ribbon, frilled_dress, yellow_ribbon, flower_earrings, upper_body, wrist_cuffs, yellow_dress, hand_up, plaid, purple_flower, white_ribbon | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | hair_flower, hair_ribbon, looking_at_viewer, ribbon_braid, 1girl, dress, food-themed_hair_ornament, hairband, solo, twin_braids, smile, alternate_hairstyle, short_sleeves, striped_ribbon, grapes, upper_body, white_gloves, blush, bow, center_frills, flower_earrings, hair_over_shoulder, white_background, white_flower | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | kimono, looking_at_viewer, obi, open_mouth, 1girl, :d, blush, braid, floral_print, hair_flower, solo, alternate_hairstyle, floral_background, long_sleeves, upper_body, from_behind, hair_up, looking_back, new_year, short_hair, sunflower | | 10 | 9 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, fairy_wings, hair_flower, smile, solo, head_wreath, looking_at_viewer, blush, flower_necklace, blue_dress, cross-laced_clothes, sleeveless, upper_body, white_background | | 11 | 13 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, solo, sweater, blush, long_sleeves, looking_at_viewer, white_shirt, collared_shirt, beret, hair_over_shoulder, red_headwear, simple_background, smile, single_braid, upper_body, white_background, brown_skirt, closed_mouth, hand_up, jewelry, plaid_skirt, pleated_skirt | | 12 | 5 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | 1girl, blush, hetero, navel, nipples, open_mouth, penis, solo_focus, 1boy, collarbone, mosaic_censoring, simple_background, white_background, erection, handjob, medium_breasts, pink_eyes, pussy, shiny_skin, spread_legs, :o, brown_hair, completely_nude, cowgirl_position, hair_ribbon, looking_at_viewer, on_back, sex, sweat, vaginal | | 13 | 6 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | 1girl, bracelet, collarbone, day, looking_at_viewer, outdoors, solo, cleavage, cloud, hair_flower, medium_breasts, necklace, smile, twintails, blue_sky, detached_sleeves, earrings, hair_ribbon, navel, sunflower, yellow_flower, beach, blue_ribbon, blush, choker, frills, midriff, ocean, open_mouth, print_bikini, see-through, sitting, straw_hat, water_drop, white_bikini | | 14 | 5 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | 1boy, 1girl, bare_shoulders, blush, choker, hair_ribbon, hetero, mosaic_censoring, solo_focus, fellatio, grey_background, simple_background, upper_body, collarbone, erection, 
purple_background, sidelocks, sweat, yellow_ribbon, cropped_torso, detached_sleeves, disembodied_penis, from_side, half-closed_eyes, heart, looking_at_penis, medium_breasts, off-shoulder_dress, official_alternate_costume, open_mouth, profile, saliva, sketch, strapless_dress, tearing_up, tears, tongue_out, white_ribbon, yellow_bow | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 2girls | blush | long_sleeves | hair_ribbon | white_ribbon | yuri | closed_eyes | open_mouth | purple_hair | skirt | smile | white_background | hanasakigawa_school_uniform | sailor_dress | brown_dress | double-breasted | looking_at_viewer | 1girl | neck_ribbon | red_ribbon | solo | white_sailor_collar | collarbone | simple_background | shirt | short_sleeves | upper_body | character_name | flower | happy_birthday | holding | necklace | white_shirt | floral_print | yellow_skirt | print_skirt | shoulder_cutout | star_(symbol) | hair_between_eyes | bow | white_gloves | hat | yellow_dress | electric_guitar | frilled_dress | bass_guitar | hair_flower | flower_earrings | frilled_sleeves | choker | frills | yellow_ribbon | bare_shoulders | holding_instrument | breasts | strapless | white_thighhighs | yellow_bow | one_eye_closed | petals | detached_collar | bowtie | pom_pom_(clothes) | striped_ribbon | wrist_cuffs | hand_up | plaid | purple_flower | ribbon_braid | dress | food-themed_hair_ornament | hairband | twin_braids | alternate_hairstyle | grapes | center_frills | hair_over_shoulder | white_flower | kimono | obi | :d | braid | floral_background | from_behind | hair_up | looking_back | new_year | short_hair | sunflower | fairy_wings | head_wreath | flower_necklace | blue_dress | cross-laced_clothes | sleeveless | sweater | collared_shirt | beret | red_headwear | single_braid | brown_skirt | closed_mouth | jewelry | plaid_skirt | pleated_skirt | hetero | navel | nipples | penis | solo_focus | 1boy | mosaic_censoring | erection | handjob | medium_breasts | pink_eyes | pussy | shiny_skin | spread_legs | :o | brown_hair | completely_nude | cowgirl_position | on_back | sex | sweat | vaginal | bracelet | day | outdoors | cleavage | cloud | twintails | blue_sky | detached_sleeves | earrings | yellow_flower | beach | blue_ribbon | midriff | ocean | print_bikini | see-through | sitting | straw_hat | water_drop | white_bikini | fellatio | grey_background | purple_background | sidelocks | cropped_torso | disembodied_penis | from_side | half-closed_eyes | heart | looking_at_penis | off-shoulder_dress | official_alternate_costume | profile | saliva | sketch | strapless_dress | tearing_up | tears | tongue_out | 
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------|:--------|:---------------|:--------------|:---------------|:-------|:--------------|:-------------|:--------------|:--------|:--------|:-------------------|:------------------------------|:---------------|:--------------|:------------------|:--------------------|:--------|:--------------|:-------------|:-------|:----------------------|:-------------|:--------------------|:--------|:----------------|:-------------|:-----------------|:---------|:-----------------|:----------|:-----------|:--------------|:---------------|:---------------|:--------------|:------------------|:----------------|:--------------------|:------|:---------------|:------|:---------------|:------------------|:----------------|:--------------|:--------------|:------------------|:------------------|:---------|:---------|:----------------|:-----------------|:---------------------|:----------|:------------|:-------------------|:-------------|:-----------------|:---------|:------------------|:---------|:--------------------|:-----------------|:--------------|:----------|:--------|:----------------|:---------------|:--------|:----------------------------|:-----------|:--------------|:----------------------|:---------|:----------------|:---------------------|:---------------|:---------|:------|:-----|:--------|:--------------------|:--------------|:----------|:---------------|:-----------|:-------------|:------------|:--------------|:--------------|:------------------|:-------------|:----------------------|:-------------|:----------|:-----------------|:--------|:---------------|:---------------|:--------------|:---------------|:----------|:--------------|:----------------|:---------|:--------|:----------|:--------|:-------------|:-------|:-------------------|:-----------|:----------|:-----------------|:------------|:--------|:-------------|:--------------|:-----|:-------------|:------------------|:-------------------|:----------|:------|:--------|:----------|:-----------|:------|:-----------|:-----------|:--------|:------------|:-----------|:-------------------|:-----------|:----------------|:--------|:--------------|:----------|:--------|:---------------|:--------------|:----------|:------------|:-------------|:---------------|:-----------|:------------------|:--------------------|:------------|:----------------|:--------------------|:------------|:-------------------|:--------|:-------------------|:---------------------|:-----------------------------|:----------|:---------|:---------|:------------------|:-------------|:--------|:-------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 19 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | X | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X 
| X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 12 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | | | | | | X | | | X | X | | | | | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 10 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | | | | | | | | | X | X | | | | | X | X | | | X | | X | | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 15 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | X | | | | | | X | | | X | | | | | | X | X | | | X | | | | | X | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | X | | X | X | | | | | | X | | | | | | X | X | | | X | | X | | | | | | | | | | | | | | | | | | | | X | X | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | X | | | | | | | | | X | | | | | | X | X | | | X | | | | | | X | | | | | | | | | | | | | | X | | X | | | | X | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | | X | | X | X | | | | | | X | | | | | | X | X | | | X | | | | | | X | | | | | | | | | | | | | | | | X | | X | | X | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | 8 | 8 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | | X | | X | | | | | | | X | X | | | | | X | X | | | X | | | | | X | X | | | | | | | | | | | | | X | X | | | | | | X | X | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | | X | X | | | | | X | | | | | | | | | X | X | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 10 | 9 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | | X | | | | | | | | | X | X | | | | | X | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 11 | 13 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | | X | X | | | | | | | | X | X | | | | | X | X | | | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 12 | 5 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | | X | | X | | | | X | | | | X | | | | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 13 | 6 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | | X | | X | | | | X | | | X | | | | | | X | X | | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | 14 | 5 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | 
![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | | X | | X | X | | | X | | | | | | | | | | X | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | | X | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/shirasagi_chisato_bangdream
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-26T05:47:42+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T20:17:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of shirasagi\_chisato (BanG Dream!) =========================================== This is the dataset of shirasagi\_chisato (BanG Dream!), containing 405 images and their tags. The core tags of this character are 'blonde\_hair, long\_hair, bangs, purple\_eyes, half\_updo, ribbon', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
19e50d3b577e7aaed52fedce715827f3eb8ed11d
# Dataset Card for "text-2-sql-with-context" This dataset is prepared in Alpaca format introduced by Stanford to train LLMs. This dataset has been used in fine-tuning Chat Llama-2 7B. For more information, Please visit : https://huggingface.co/ekshat/Llama-2-7b-chat-finetune-for-text2sql
ekshat/text-2-sql-with-context
[ "region:us" ]
2023-09-26T05:50:06+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32317282.06065388, "num_examples": 74648}, {"name": "test", "num_bytes": 1700977.939346119, "num_examples": 3929}], "download_size": 8982199, "dataset_size": 34018260.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-09-26T06:18:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "text-2-sql-with-context" This dataset is prepared in Alpaca format introduced by Stanford to train LLMs. This dataset has been used in fine-tuning Chat Llama-2 7B. For more information, Please visit : URL
[ "# Dataset Card for \"text-2-sql-with-context\"\n\nThis dataset is prepared in Alpaca format introduced by Stanford to train LLMs. This dataset has been used in fine-tuning Chat Llama-2 7B. For more information, Please visit : URL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"text-2-sql-with-context\"\n\nThis dataset is prepared in Alpaca format introduced by Stanford to train LLMs. This dataset has been used in fine-tuning Chat Llama-2 7B. For more information, Please visit : URL" ]
[ 6, 62 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"text-2-sql-with-context\"\n\nThis dataset is prepared in Alpaca format introduced by Stanford to train LLMs. This dataset has been used in fine-tuning Chat Llama-2 7B. For more information, Please visit : URL" ]
b412ab8d10851a0320fff9c0d5d5dec50f982568
# Dataset Card for "neo_futuristic_metropolitan_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/neo_futuristic_metropolitan_prompts
[ "region:us" ]
2023-09-26T05:51:44+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 343001, "num_examples": 1000}], "download_size": 11150, "dataset_size": 343001}}
2023-09-26T05:51:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "neo_futuristic_metropolitan_prompts" More Information needed
[ "# Dataset Card for \"neo_futuristic_metropolitan_prompts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"neo_futuristic_metropolitan_prompts\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"neo_futuristic_metropolitan_prompts\"\n\nMore Information needed" ]
3cf095bcbd269ab34fe965c89fac8f14398ae9dc
# Dataset Card for "squad_wrong_title_v3_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_wrong_title_v3_train_10_eval_10
[ "region:us" ]
2023-09-26T05:53:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 276687, "num_examples": 184}, {"name": "validation", "num_bytes": 64754, "num_examples": 68}], "download_size": 71442, "dataset_size": 341441}}
2023-09-26T05:53:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_wrong_title_v3_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_wrong_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_wrong_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
76fae734e9e1704c75b7f6ffe482ae43bfdb0a04
# Dataset Card for "squad_no_title_v3_train_10_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_no_title_v3_train_10_eval_10
[ "region:us" ]
2023-09-26T05:53:52+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 276687, "num_examples": 184}, {"name": "validation", "num_bytes": 61979, "num_examples": 68}], "download_size": 70533, "dataset_size": 338666}}
2023-09-26T05:53:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_no_title_v3_train_10_eval_10" More Information needed
[ "# Dataset Card for \"squad_no_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_no_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
[ 6, 29 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_title_v3_train_10_eval_10\"\n\nMore Information needed" ]
c10c1ad6961529455227f2556e85fb03b1c89a1c
# Dataset Card for "photography_style_prompts_camera" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/photography_style_prompts_camera
[ "region:us" ]
2023-09-26T05:55:48+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109180, "num_examples": 1000}], "download_size": 9733, "dataset_size": 109180}}
2023-09-26T05:55:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "photography_style_prompts_camera" More Information needed
[ "# Dataset Card for \"photography_style_prompts_camera\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"photography_style_prompts_camera\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"photography_style_prompts_camera\"\n\nMore Information needed" ]
e16e079aede4dd6ddb5c8cc6796957147f253a24
# Dataset Card for "cc_news_ocr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lowem1/cc_news_ocr
[ "region:us" ]
2023-09-26T06:06:55+00:00
{"dataset_info": {"features": [{"name": "tag", "dtype": "string"}, {"name": "ocr_data", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19826223, "num_examples": 2000}], "download_size": 7547846, "dataset_size": 19826223}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T06:07:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cc_news_ocr" More Information needed
[ "# Dataset Card for \"cc_news_ocr\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cc_news_ocr\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cc_news_ocr\"\n\nMore Information needed" ]
17f8ef67c7f50d3ea2dd098b70b81f1e7562626f
# Dataset Card for "varied_photo_descriptions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/varied_photo_descriptions
[ "region:us" ]
2023-09-26T06:14:15+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 221033, "num_examples": 1000}], "download_size": 22592, "dataset_size": 221033}}
2023-09-26T06:14:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "varied_photo_descriptions" More Information needed
[ "# Dataset Card for \"varied_photo_descriptions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"varied_photo_descriptions\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"varied_photo_descriptions\"\n\nMore Information needed" ]
a50e9b699599284036f934b3f5ccc42baa6724b7
# Description PHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations. # Credit All credit goes to: [PHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation](https://aclanthology.org/2020.wnut-1.7) (Srivastava & Singh, WNUT 2020) # Original Abstract Code-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation. ## Note This data has been automatically modified to become a HuggingFace dataset (including a conversion to Parquet). The original raw dataset can be found [here](https://zenodo.org/record/3605597).
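A minimal loading sketch in Python, assuming only the repository id shown on this card; the split and column names are not documented here and should be confirmed by inspecting the loaded object.

```python
from datasets import load_dataset

# Load the Parquet-backed corpus directly from the Hugging Face Hub.
# "veezbo/phinc" is the repository id of this dataset card.
dataset = load_dataset("veezbo/phinc")
print(dataset)  # shows the available splits and their columns

# Peek at one example; expect a code-mixed Hinglish sentence paired
# with its human-generated English translation (exact field names
# are an assumption to verify from the printout above).
split = list(dataset.keys())[0]
print(dataset[split][0])
```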
veezbo/phinc
[ "task_categories:translation", "task_categories:text2text-generation", "size_categories:10K<n<100K", "language:en", "language:hi", "license:cc-by-4.0", "region:us" ]
2023-09-26T06:22:59+00:00
{"language": ["en", "hi"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["translation", "text2text-generation"], "pretty_name": "A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation"}
2023-09-26T06:36:01+00:00
[]
[ "en", "hi" ]
TAGS #task_categories-translation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-cc-by-4.0 #region-us
# Description PHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations. # Credit All credit goes to: PHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation (Srivastava & Singh, WNUT 2020) # Original Abstract Code-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation. ## Note This data has been automatically modified to become a HuggingFace dataset (including a conversion to Parquet). The original raw dataset can be found here.
[ "# Description\nPHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations.", "# Credit\nAll credit goes to:\nPHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation (Srivastava & Singh, WNUT 2020)", "# Original Abstract\nCode-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation.", "## Note\nThis data has been automatically modified to become a HuggingFace dataset (including a conversion to Parquet). The original raw dataset can be found here." ]
[ "TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-cc-by-4.0 #region-us \n", "# Description\nPHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations.", "# Credit\nAll credit goes to:\nPHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation (Srivastava & Singh, WNUT 2020)", "# Original Abstract\nCode-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation.", "## Note\nThis data has been automatically modified to become a HuggingFace dataset (including a conversion to Parquet). The original raw dataset can be found here." ]
[ 57, 44, 40, 153, 36 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-cc-by-4.0 #region-us \n# Description\nPHINC is a parallel corpus for machine translation pairing code-mixed Hinglish (a fusion of Hindi and English commonly used in modern India) with human-generated English translations.# Credit\nAll credit goes to:\nPHINC: A Parallel Hinglish Social Media Code-Mixed Corpus for Machine Translation (Srivastava & Singh, WNUT 2020)# Original Abstract\nCode-mixing is the phenomenon of using more than one language in a sentence. It is a very frequently observed pattern of communication on social media platforms. Flexibility to use mixed languages in one text message might help to communicate efficiently with the target audience. But, it adds to the challenge of processing and understanding natural language to a much larger extent. Here, we are presenting a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators. We are releasing the parallel corpus to facilitate future research opportunities for code-mixed machine translation.## Note\nThis data has been automatically modified to become a HuggingFace dataset (including a conversion to Parquet). The original raw dataset can be found here." ]
4786a725aad4bc3a1e0fc114b76ad91298c62f33
# Dataset Card for "gustacaste-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gustacaste/gustacaste-1k
[ "region:us" ]
2023-09-26T06:32:01+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-26T06:32:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gustacaste-1k" More Information needed
[ "# Dataset Card for \"gustacaste-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gustacaste-1k\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gustacaste-1k\"\n\nMore Information needed" ]
e1f731e608a5678f113f6edb68b4b6c40d1d48bc
# Dataset Card for "bus_few4_50x" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FanChen0116/bus_few4_50x
[ "region:us" ]
2023-09-26T06:35:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 677514, "num_examples": 3500}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 755032}}
2023-09-26T16:23:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bus_few4_50x" More Information needed
[ "# Dataset Card for \"bus_few4_50x\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bus_few4_50x\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_50x\"\n\nMore Information needed" ]
348bfa943a65ee2aedf6ee51c58744ecf1ca8158
# Dataset Card for "bus_few4_50x_empty" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FanChen0116/bus_few4_50x_empty
[ "region:us" ]
2023-09-26T06:35:33+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 603011, "num_examples": 3500}, {"name": "validation", "num_bytes": 6128, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 0, "dataset_size": 679757}}
2023-09-26T16:23:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bus_few4_50x_empty" More Information needed
[ "# Dataset Card for \"bus_few4_50x_empty\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bus_few4_50x_empty\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_50x_empty\"\n\nMore Information needed" ]
08818f23dd3482e8fd456f5a054a69ad0ea68199
# Dataset Card for "bus_few4_50x_pvi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FanChen0116/bus_few4_50x_pvi
[ "region:us" ]
2023-09-26T06:38:24+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "O", "1": "I-from_location", "2": "B-from_location", "3": "B-leaving_date", "4": "I-leaving_date", "5": "I-to_location", "6": "B-to_location"}}}}, {"name": "request_slot", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 431503, "num_examples": 1750}, {"name": "validation", "num_bytes": 6900, "num_examples": 35}, {"name": "test", "num_bytes": 70618, "num_examples": 377}], "download_size": 54596, "dataset_size": 509021}}
2023-09-26T19:31:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bus_few4_50x_pvi" More Information needed
[ "# Dataset Card for \"bus_few4_50x_pvi\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bus_few4_50x_pvi\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bus_few4_50x_pvi\"\n\nMore Information needed" ]
60ca94ba5de7b5361c127759f6569698d75e542f
# Dataset Card for "ancient_landscape_descriptions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/ancient_landscape_descriptions
[ "region:us" ]
2023-09-26T06:46:53+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 376505, "num_examples": 1000}], "download_size": 40969, "dataset_size": 376505}}
2023-09-26T06:46:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ancient_landscape_descriptions" More Information needed
[ "# Dataset Card for \"ancient_landscape_descriptions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ancient_landscape_descriptions\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ancient_landscape_descriptions\"\n\nMore Information needed" ]
b1b45c1e9a2c119829e88fda833704a2520d1041
# Dataset Card for "artist_rooms_descriptions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/artist_rooms_descriptions
[ "region:us" ]
2023-09-26T06:51:59+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 638372, "num_examples": 1000}], "download_size": 54548, "dataset_size": 638372}}
2023-09-26T06:52:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "artist_rooms_descriptions" More Information needed
[ "# Dataset Card for \"artist_rooms_descriptions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"artist_rooms_descriptions\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"artist_rooms_descriptions\"\n\nMore Information needed" ]
f04fd7d830a9b3671255836b3aa1b88ac45144f7
# Dataset Card for "artist_workshop_descriptions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/artist_workshop_descriptions
[ "region:us" ]
2023-09-26T06:56:07+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 660301, "num_examples": 1000}], "download_size": 62303, "dataset_size": 660301}}
2023-09-26T06:56:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "artist_workshop_descriptions" More Information needed
[ "# Dataset Card for \"artist_workshop_descriptions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"artist_workshop_descriptions\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"artist_workshop_descriptions\"\n\nMore Information needed" ]
8664e276008d426db3a191e9387de4bdb4b2f2f4
# Dataset Card for "squad_title_v3_train_30_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_title_v3_train_30_eval_10
[ "region:us" ]
2023-09-26T07:01:33+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 658246, "num_examples": 378}, {"name": "validation", "num_bytes": 68651, "num_examples": 60}], "download_size": 123968, "dataset_size": 726897}}
2023-09-26T07:01:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_title_v3_train_30_eval_10" More Information needed
[ "# Dataset Card for \"squad_title_v3_train_30_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_title_v3_train_30_eval_10\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v3_train_30_eval_10\"\n\nMore Information needed" ]
53d5fea440ba2e7b7ce8a34bcb4e556443780ddb
# Dataset Card for "squad_baseline_v3_train_30_eval_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/squad_baseline_v3_train_30_eval_10
[ "region:us" ]
2023-09-26T07:07:24+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 200456, "num_examples": 159}, {"name": "validation", "num_bytes": 66108, "num_examples": 60}], "download_size": 88540, "dataset_size": 266564}}
2023-09-26T07:07:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "squad_baseline_v3_train_30_eval_10" More Information needed
[ "# Dataset Card for \"squad_baseline_v3_train_30_eval_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"squad_baseline_v3_train_30_eval_10\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"squad_baseline_v3_train_30_eval_10\"\n\nMore Information needed" ]