sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f9e03dac0c4d01ffe1b14f7018a66e239dc8371b
|
# Dataset Card for "COVID-QA-testset-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
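Pending a fuller card, here is a minimal loading sketch using the `datasets` library; the split name, example count, and column names below are taken from the repository metadata (a default config with a single 201-example `train` split):
```python
from datasets import load_dataset

# Default config with a single "train" split (201 examples per the metadata).
ds = load_dataset("minh21/COVID-QA-testset-data", split="train")

# Declared columns: question, answer, context_chunks, document_id, id, context.
example = ds[0]
print(example["question"])
print(example["answer"])
```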
|
minh21/COVID-QA-testset-data
|
[
"region:us"
] |
2023-09-25T06:02:29+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context_chunks", "sequence": "string"}, {"name": "document_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16708455, "num_examples": 201}], "download_size": 442083, "dataset_size": 16708455}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-06T06:10:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-testset-data"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-testset-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-testset-data\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-testset-data\"\n\nMore Information needed"
] |
053a30684b733834db3081811c60637355d9d702
|
# Dataset of Nagase Riko
This is the dataset of Nagase Riko, containing 104 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 104 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 248 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 104 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 104 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 104 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 104 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 104 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 248 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 248 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 248 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
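The download links above are relative to this repository (`CyberHarem/nagase_riko_soundeuphonium`). A minimal fetch-and-extract sketch with `huggingface_hub`, following the same pattern used by the other CyberHarem datasets below; the filename is taken from the link target in the table above:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the raw archive from the dataset repository.
zip_file = hf_hub_download(
    repo_id='CyberHarem/nagase_riko_soundeuphonium',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# Extract into a local directory.
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```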
|
CyberHarem/nagase_riko_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:06:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:08:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Nagase Riko
======================
This is the dataset of Nagase Riko, containing 104 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
965ccbcb20fd74159a194d47ad453840d5ac356b
|
# Dataset Card for "TweetSumm-tuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
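A minimal loading sketch, assuming the three splits declared in the repository metadata (train/validation/test, each with `conversation`, `summary`, and `text` columns):
```python
from datasets import load_dataset

# Metadata declares train (879), validation (110), and test (110) examples.
ds = load_dataset("Andyrasika/TweetSumm-tuned")

for split in ("train", "validation", "test"):
    print(split, len(ds[split]))

# Declared columns: conversation, summary, text.
example = ds["train"][0]
print(example["summary"])
```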
|
Andyrasika/TweetSumm-tuned
|
[
"region:us"
] |
2023-09-25T06:10:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "conversation", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2268632, "num_examples": 879}, {"name": "validation", "num_bytes": 267236, "num_examples": 110}, {"name": "test", "num_bytes": 296944, "num_examples": 110}], "download_size": 1595884, "dataset_size": 2832812}}
|
2023-09-25T06:10:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TweetSumm-tuned"
More Information needed
|
[
"# Dataset Card for \"TweetSumm-tuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TweetSumm-tuned\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TweetSumm-tuned\"\n\nMore Information needed"
] |
7f0feefee0344cf4f8204d77e866a1c513da96a5
|
# Dataset of onitsuka_natsumi/鬼塚夏美 (Love Live! Superstar!!)
This is the dataset of onitsuka_natsumi/鬼塚夏美 (Love Live! Superstar!!), containing 231 images and their tags.
The core tags of this character are `blonde_hair, multicolored_hair, gradient_hair, pink_hair, long_hair, braid, twin_braids, pink_eyes, hair_ornament, hair_flower, bangs, red_eyes, breasts`; these core tags are pruned from the per-image tag lists in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 231 | 302.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onitsuka_natsumi_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 231 | 155.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onitsuka_natsumi_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 552 | 347.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onitsuka_natsumi_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 231 | 262.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onitsuka_natsumi_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 552 | 535.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onitsuka_natsumi_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/onitsuka_natsumi_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, flower, looking_at_viewer, neck_ribbon, red_ribbon, solo, yuigaoka_school_uniform, blue_jacket, blush, grey_dress, long_sleeves, white_background, collared_shirt, grin, one_eye_closed, open_jacket, pinafore_dress, simple_background, white_shirt |
| 1 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, red_ribbon, short_sleeves, solo, white_shirt, yuigaoka_school_uniform, neck_ribbon, pinafore_dress, upper_body, flower, summer_uniform, white_background, collared_shirt, blush, grin, one_eye_closed, simple_background |
| 2 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, yuigaoka_school_uniform, flower, smile, dated, happy_birthday, blush, english_text, jacket, character_name, dress, pink_background, upper_body, signature |
| 3 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, sunglasses, bikini_skirt, cleavage, heart-shaped_eyewear, smile, solo, white_bikini, blush, bracelet, flower, large_breasts, navel, necklace, simple_background, white_background, eyewear_on_head, frills, pink-tinted_eyewear, tongue_out |
| 4 | 9 |  |  |  |  |  | 1girl, blush, hetero, nipples, solo_focus, large_breasts, pussy, sweat, open_mouth, spread_legs, vaginal, flower, 1boy, bar_censor, cum, navel, breast_grab, gangbang, grabbing, handjob, missionary, multiple_boys, multiple_penises, nude, on_back, panties |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | flower | looking_at_viewer | neck_ribbon | red_ribbon | solo | yuigaoka_school_uniform | blue_jacket | blush | grey_dress | long_sleeves | white_background | collared_shirt | grin | one_eye_closed | open_jacket | pinafore_dress | simple_background | white_shirt | short_sleeves | upper_body | summer_uniform | smile | dated | happy_birthday | english_text | jacket | character_name | dress | pink_background | signature | sunglasses | bikini_skirt | cleavage | heart-shaped_eyewear | white_bikini | bracelet | large_breasts | navel | necklace | eyewear_on_head | frills | pink-tinted_eyewear | tongue_out | hetero | nipples | solo_focus | pussy | sweat | open_mouth | spread_legs | vaginal | 1boy | bar_censor | cum | breast_grab | gangbang | grabbing | handjob | missionary | multiple_boys | multiple_penises | nude | on_back | panties |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:--------------------|:--------------|:-------------|:-------|:--------------------------|:--------------|:--------|:-------------|:---------------|:-------------------|:-----------------|:-------|:-----------------|:--------------|:-----------------|:--------------------|:--------------|:----------------|:-------------|:-----------------|:--------|:--------|:-----------------|:---------------|:---------|:-----------------|:--------|:------------------|:------------|:-------------|:---------------|:-----------|:-----------------------|:---------------|:-----------|:----------------|:--------|:-----------|:------------------|:---------|:----------------------|:-------------|:---------|:----------|:-------------|:--------|:--------|:-------------|:--------------|:----------|:-------|:-------------|:------|:--------------|:-----------|:-----------|:----------|:-------------|:----------------|:-------------------|:-------|:----------|:----------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | | | X | X | | X | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | | X | | | X | | | X | | | | | | X | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/onitsuka_natsumi_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:14:09+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:22:05+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of onitsuka\_natsumi/鬼塚夏美 (Love Live! Superstar!!)
==========================================================
This is the dataset of onitsuka\_natsumi/鬼塚夏美 (Love Live! Superstar!!), containing 231 images and their tags.
The core tags of this character are 'blonde\_hair, multicolored\_hair, gradient\_hair, pink\_hair, long\_hair, braid, twin\_braids, pink\_eyes, hair\_ornament, hair\_flower, bangs, red\_eyes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d6d352e20d19bfd2f3552de0a68c6dccb54388da
|
# Dataset of Hisaishi Kanade
This is the dataset of Hisaishi Kanade, containing 114 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 114 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 279 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 114 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 114 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 114 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 114 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 114 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 279 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 279 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 279 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/hisaishi_kanade_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:15:36+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:20:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Hisaishi Kanade
==========================
This is the dataset of Hisaishi Kanade, containing 114 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
8f5f0a2b03d61dea0d9ce0efd570b4e7c75da2f2
|
# Dataset Card for "squad_for_gpt_train_1000_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
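A minimal loading sketch; the split names, sizes, and the nested `answers` struct below follow the repository metadata:
```python
from datasets import load_dataset

# Metadata declares train (1,000 examples) and validation (100 examples).
ds = load_dataset("tyzhu/squad_for_gpt_train_1000_100")

row = ds["validation"][0]
# "answers" is a SQuAD-style struct of parallel sequences.
print(row["question"])
print(row["answers"]["text"], row["answers"]["answer_start"])
```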
|
tyzhu/squad_for_gpt_train_1000_100
|
[
"region:us"
] |
2023-09-25T06:26:43+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 3564228.0, "num_examples": 1000}, {"name": "validation", "num_bytes": 371624, "num_examples": 100}], "download_size": 2479909, "dataset_size": 3935852.0}}
|
2023-09-25T08:48:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_for_gpt_train_1000_100"
More Information needed
|
[
"# Dataset Card for \"squad_for_gpt_train_1000_100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_for_gpt_train_1000_100\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_for_gpt_train_1000_100\"\n\nMore Information needed"
] |
b66ba1e815c7511aa687936ebaf7c93fc83575ba
|
# Dataset of Kasaki Nozomi
This is the dataset of Kasaki Nozomi, containing 91 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 91 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 217 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 91 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 91 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 91 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 91 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 91 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 217 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 217 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 217 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/kasaki_nozomi_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:27:13+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:32:10+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kasaki Nozomi
========================
This is the dataset of Kasaki Nozomi, containing 91 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
4be3f2c1d7df3991be8953fde4bc8905623701e1
|
# Dataset of wien_margarete (Love Live! Superstar!!)
This is the dataset of wien_margarete (Love Live! Superstar!!), containing 50 images and their tags.
The core tags of this character are `long_hair, bangs, green_eyes, braid, purple_hair, blunt_bangs, breasts`; these core tags are pruned from the per-image tag lists in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 50 | 73.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wien_margarete_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 50 | 37.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wien_margarete_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 126 | 83.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wien_margarete_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 50 | 62.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wien_margarete_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 126 | 124.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wien_margarete_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/wien_margarete_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, blush, hetero, solo_focus, nipples, pussy, uncensored, simple_background, white_background, 1boy, completely_nude, fellatio, group_sex, large_breasts, medium_breasts, multiple_boys, multiple_penises, open_mouth |
| 1 | 14 |  |  |  |  |  | looking_at_viewer, 1girl, solo, yuigaoka_school_uniform, blue_jacket, shirt, grey_dress, long_sleeves, red_ribbon, blush, neck_ribbon, pinafore_dress, smile, white_background, simple_background |
| 2 | 8 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, black_dress, closed_mouth, facial_mark, blue_butterfly, collarbone, long_sleeves, floating_hair, glowing_butterfly, hair_ornament, pink_hair, shiny_hair, upper_body, very_long_hair |
| 3 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, navel, solo, bare_shoulders, blush, collarbone, simple_background, white_background, black_bikini, cleavage, stomach, black_choker, cowboy_shot, halterneck, hand_up, large_breasts, medium_breasts, open_mouth, parted_lips, side-tie_bikini_bottom, sitting, smile, wavy_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | hetero | solo_focus | nipples | pussy | uncensored | simple_background | white_background | 1boy | completely_nude | fellatio | group_sex | large_breasts | medium_breasts | multiple_boys | multiple_penises | open_mouth | looking_at_viewer | solo | yuigaoka_school_uniform | blue_jacket | shirt | grey_dress | long_sleeves | red_ribbon | neck_ribbon | pinafore_dress | smile | black_dress | closed_mouth | facial_mark | blue_butterfly | collarbone | floating_hair | glowing_butterfly | hair_ornament | pink_hair | shiny_hair | upper_body | very_long_hair | navel | bare_shoulders | black_bikini | cleavage | stomach | black_choker | cowboy_shot | halterneck | hand_up | parted_lips | side-tie_bikini_bottom | sitting | wavy_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:-------------|:----------|:--------|:-------------|:--------------------|:-------------------|:-------|:------------------|:-----------|:------------|:----------------|:-----------------|:----------------|:-------------------|:-------------|:--------------------|:-------|:--------------------------|:--------------|:--------|:-------------|:---------------|:-------------|:--------------|:-----------------|:--------|:--------------|:---------------|:--------------|:-----------------|:-------------|:----------------|:--------------------|:----------------|:------------|:-------------|:-------------|:-----------------|:--------|:-----------------|:---------------|:-----------|:----------|:---------------|:--------------|:-------------|:----------|:--------------|:-------------------------|:----------|:------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | X | | | | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | X | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | | | | | X | X | | | | | X | X | | | X | X | X | | | | | | | | | X | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/wien_margarete_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:28:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T05:50:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of wien\_margarete (Love Live! Superstar!!)
===================================================
This is the dataset of wien\_margarete (Love Live! Superstar!!), containing 50 images and their tags.
The core tags of this character are 'long\_hair, bangs, green\_eyes, braid, purple\_hair, blunt\_bangs, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
90be399fabd82a8aee058c6b4b7a63c75e2e5c67
|
# Dataset of Yoroizuka Mizore
This is the dataset of Yoroizuka Mizore, containing 106 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 106 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 222 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 106 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 106 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 106 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 106 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 106 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 222 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 222 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 222 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/yoroizuka_mizore_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:40:47+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:46:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Yoroizuka Mizore
===========================
This is the dataset of Yoroizuka Mizore, containing 106 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
7f7f463aefb75bd146b0ae604ee8ecf2d4e598ae
|
# Dataset of Matsumoto Michie
This is the dataset of Matsumoto Michie, containing 74 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 74 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 164 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 74 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 74 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 74 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 74 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 74 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 164 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 164 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 164 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/matsumoto_michie_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:50:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T06:53:57+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Matsumoto Michie
===========================
This is the dataset of Matsumoto Michie, containing 74 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
0f98dfca16678f29e1cd68f157170300d2fc17ad
|
# Dataset Card for "wikipedia-ja-20230720-2k"
This is data extracted randomly from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720), consisting of 2,048 records.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
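The card does not state how the sampling was performed; here is a hedged sketch of how such a 2,048-record random subset could be reproduced from the source dataset (the seed and the shuffle-then-select method are assumptions, not documented by the card):
```python
from datasets import load_dataset

# Load the source corpus and draw a random 2,048-record subset.
# The seed is an assumption; the card does not specify one.
source = load_dataset("izumi-lab/wikipedia-ja-20230720", split="train")
subset = source.shuffle(seed=42).select(range(2048))
print(subset)
```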
|
mmnga/wikipedia-ja-20230720-2k
|
[
"region:us"
] |
2023-09-25T06:51:08+00:00
|
{"dataset_info": {"features": [{"name": "curid", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5492016.948562663, "num_examples": 2048}], "download_size": 3161030, "dataset_size": 5492016.948562663}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T07:20:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wikipedia-ja-20230720-2k"
This is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 2,048 records.
izumi-lab/wikipedia-ja-20230720からデータを2k分ランダムに抽出したデータです。
More Information needed
|
[
"# Dataset Card for \"wikipedia-ja-20230720-2k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 2,048 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを2k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia-ja-20230720-2k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 2,048 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを2k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
[
6,
69
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia-ja-20230720-2k\"\n\nThis is data extracted randomly from izumi-lab/wikipedia-ja-20230720, consisting of 2,048 records. \n\nizumi-lab/wikipedia-ja-20230720からデータを2k分ランダムに抽出したデータです。 \n\n\nMore Information needed"
] |
dea60d153392758ecd76c6c5aac238f6fa5abd00
|
# Dataset of Kabe Tomoe
This is the dataset of Kabe Tomoe, containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 52 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 118 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 52 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 52 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 52 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 52 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 52 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 118 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 118 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 118 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/kabe_tomoe_soundeuphonium
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T06:57:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-25T07:00:19+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kabe Tomoe
=====================
This is the dataset of Kabe Tomoe, containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
2a7d4cd50f26898541c42335698470a2983319b2
|
# Dataset Card for "AnorexicPajama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reversebutlerianjihad/AnorexicPajama
|
[
"region:us"
] |
2023-09-25T07:03:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "redpajama_set_name", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 239181187.24, "num_examples": 54890}, {"name": "test", "num_bytes": 40114950, "num_examples": 9346}, {"name": "validation", "num_bytes": 39109042, "num_examples": 9347}], "download_size": 185544769, "dataset_size": 318405179.24}}
|
2023-09-25T07:04:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AnorexicPajama"
More Information needed
|
[
"# Dataset Card for \"AnorexicPajama\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AnorexicPajama\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AnorexicPajama\"\n\nMore Information needed"
] |
2d88b690f9c1392de5c9d765d4e4738e571c4acb
|
# Dataset Card for "toxic_10k_distributed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Vaibhav9401/toxic_10k_distributed
|
[
"region:us"
] |
2023-09-25T07:07:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "llama_finetune_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9783994, "num_examples": 10477}], "download_size": 1964339, "dataset_size": 9783994}}
|
2023-09-25T07:15:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "toxic_10k_distributed"
More Information needed
|
[
"# Dataset Card for \"toxic_10k_distributed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"toxic_10k_distributed\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"toxic_10k_distributed\"\n\nMore Information needed"
] |
017eb9f90d2ad6d58d19406f86ba4f2c80f207fa
|
# Dataset Card for "logits-mt-it-ar-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
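The feature names in the repository metadata (`teacher_logits` and `teacher_indices` as nested sequences, plus `teacher_mask_indices`) suggest sparse top-k teacher outputs stored for distillation. A hedged sketch of densifying one row under that assumption; the interpretation, the function name, and the vocabulary-size parameter are all guesses, not documented by the card:
```python
import torch

def densify_teacher_row(logits, indices, vocab_size):
    """Scatter sparse top-k teacher logits back into a dense row per position.

    `logits` and `indices` are assumed to be parallel lists (one inner list
    per position); unfilled entries stay at -inf so softmax assigns them
    zero probability.
    """
    dense = torch.full((len(logits), vocab_size), float("-inf"))
    for pos, (row_logits, row_indices) in enumerate(zip(logits, indices)):
        dense[pos, torch.tensor(row_indices)] = torch.tensor(row_logits)
    return dense.softmax(dim=-1)
```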
|
amitness/logits-mt-it-ar-128
|
[
"region:us"
] |
2023-09-25T07:12:06+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 53010711524, "num_examples": 11706283}, {"name": "test", "num_bytes": 9355220532, "num_examples": 2065815}], "download_size": 0, "dataset_size": 62365932056}}
|
2023-09-27T07:56:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "logits-mt-it-ar-128"
More Information needed
|
[
"# Dataset Card for \"logits-mt-it-ar-128\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"logits-mt-it-ar-128\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-it-ar-128\"\n\nMore Information needed"
] |
1417df83cd250befc4d6bca3ffd009883b0d1874
|
# Dataset Card for Keyword Extraction
## Dataset Description
**Homepage:** [DSSGx Munich](https://sites.google.com/view/dssgx-munich-2023/startseite) organization page.
**Repository:** [GitHub](https://github.com/DSSGxMunich/land-sealing-dataset-and-analysis).
### Dataset Summary
This folder contains the exact-keyword-extraction and the agent-based information-extraction datasets.
## Dataset Structure
### Folder structure
- **exact_search**
  - baunvo_keywords.csv -> appearance of BauNVO keywords in each document.
  - hochwasser_keywords.csv -> appearance of Hochwasser-related (flood-related) keywords in each document.
- **knowledge_extraction_agent**
  - fh.json -> Firsthöhe (ridge height) detected by the agent, plus the result from the fuzzy keyword search.
  - gfz.json -> Geschossflächenzahl (floor-area ratio) detected by the agent, plus the result from the fuzzy keyword search.
  - grz.json -> Grundflächenzahl (site-coverage ratio) detected by the agent, plus the result from the fuzzy keyword search.
  - max_h.json -> Maximale Gebäudehöhe (maximum building height) detected by the agent, plus the result from the fuzzy keyword search.
  - min_h.json -> Minimale Gebäudehöhe (minimum building height) detected by the agent, plus the result from the fuzzy keyword search.
  - th.json -> Traufhöhe (eaves height) detected by the agent, plus the result from the fuzzy keyword search.
### Data Fields
- **baunvo_keywords.csv:**
  - filename: name of the PDF file that was extracted.
  - columns baunvo-XX and 13b: names of the categories searched for, with the keywords found matching each category.
- **hochwasser_keywords.csv:**
  - filename: name of the PDF file that was extracted.
  - contextualised_keyword: the paragraph context in which the exact keyword appears.
  - actual_keyword: the exact keyword searched for.
  - category: category of the Hochwasser keyword (hq100, hqhaufig, hqextrem).
- All the files in **knowledge_extraction_agent** are .json files with the following structure:
  - id: id of the extracted document.
  - keyword_input: fuzzy-keyword input for the value extraction (context paragraph).
  - keyword_agent_response: the agent's response.
  - keyword_extracted_value: the value extracted by the agent.
  - validation: validation of the result.
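A minimal parsing sketch for these files, assuming the column and key names listed above; the local file paths and the top-level JSON structure (a list of records) are illustrative assumptions:
```python
import json

import pandas as pd

# Exact-search results: one row per document (fields as listed above).
baunvo = pd.read_csv("exact_search/baunvo_keywords.csv")
print(baunvo["filename"].head())

# Agent results: records with the keys listed above; the top-level JSON
# structure is assumed to be a list of records.
with open("knowledge_extraction_agent/fh.json", encoding="utf-8") as f:
    fh_records = json.load(f)
print(fh_records[0]["keyword_extracted_value"], fh_records[0]["validation"])
```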
## Dataset Creation
#### Initial Data Collection and Normalization
These are the results of keyword extraction from the document_texts.csv file. The exact keyword extraction was done by selecting a set of relevant keywords and searching for them verbatim in the text. The agent extraction, in contrast, uses fuzzy search to locate certain keywords and their surrounding context, then extracts the relevant values with GPT.
## Considerations for Using the Data
### Discussion of Biases
The keyword and agent extraction results were NOT validated manually. This is why we provide the contextual paragraph for each value: the information should be double-checked by professionals.
|
DSSGxMunich/bplan_keyword_extraction
|
[
"license:mit",
"region:us"
] |
2023-09-25T07:36:36+00:00
|
{"license": "mit"}
|
2023-10-06T09:34:57+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for Keyword Extraction
## Dataset Description
Homepage: DSSGx Munich organization page.
Repository: GitHub.
### Dataset Summary
This folder contains the exact keyword extraction and agent information extraction datasets.
## Dataset Structure
### Folder structure
- exact_search
- baunvo_keywords.csv -> appearance of BauNVO keywords in each document.
- hochwasser_keywords.csv -> appearance of hochwasser-related keywords in each document.
- knowledge_extraction_agent
- URL -> length of firsthöhe detected by agent and result from fuzzy keyword search.
- URL -> Geschossflächenzahl detected by agent and result from fuzzy keyword search.
- URL -> Grundflächenzahl detected by agent and result from fuzzy keyword search.
- max_h.json -> Maximale gebäudehöhe detected by agent and result from fuzzy keyword search.
- min_h.json -> Minimale gebäudehöhe detected by agent and result from fuzzy keyword search.
- URL -> Traufhöhe detected by agent and result from fuzzy keyword search.
### Data Fields
- baunvo_keywords.csv:
- filename: name of PDF file that was extracted.
- columns baunvo-XX and 13b: names of the categories that were searched for, and keywords that appeared matching that category.
- hochwasser_keywords.csv:
- filename: name of PDF file that was extracted.
- contextualised_keyword: paragraph context in which the exact keyword appears.
- actual_keyword: actual keyword searched for.
- category: category of hochwasser keyword(hq100, hqhaufig, hqextrem)
- All the files in knowledge_extraction_agent are .json files which contain the following structure:
- id: id of document extracted.
- keyword_input: fuzzy keyword input for the value extraction (context paragraph).
- keyword_agent_response: result of the agent.
- keyword_extracted_value: extracted value from agent.
- validation: validation of result.
## Dataset Creation
#### Initial Data Collection and Normalization
This is the result of the keyword extraction from the document_texts.csv file. The exact keyword extraction was done by selecting a set of relevant keywords and searching for them in the text. Meanwhile, the agent keyword extraction is the result of searching for certain keywords using fuzzy search to get the context surrounding them, and extracting relevant values with GPT.
## Considerations for Using the Data
### Discussion of Biases
The results of this keyword and agent results were NOT validated manually. Therefore, this is why we provide the contextual paragraph of the values: the information should be double-checked by professionals.
|
[
"# Dataset Card for Keyword Extraction",
"## Dataset Description\n\n Homepage: DSSGx Munich organization page.\n\n \n Repository: GitHub.",
"### Dataset Summary\n\nThis folder contains the exact keyword extraction and agent information extraction datasets.",
"## Dataset Structure",
"### Folder structure\n\n- exact_search\n - baunvo_keywords.csv -> appearance of BauNVO keywords in each document. \n - hochwasser_keywords.csv -> appearance of hochwasser-related keywords in each document. \n \n- knowledge_extraction_agent\n - URL -> length of firsthöhe detected by agent and result from fuzzy keyword search. \n - URL -> Geschossflächenzahl detected by agent and result from fuzzy keyword search. \n - URL -> Grundflächenzahl detected by agent and result from fuzzy keyword search. \n - max_h.json -> Maximale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - min_h.json -> Minimale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - URL -> Traufhöhe detected by agent and result from fuzzy keyword search.",
"### Data Fields\n\n- baunvo_keywords.csv:\n - filename: name of PDF file that was extracted. \n - columns baunvo-XX and 13b: names of the categories that were searched for, and keywords that appeared matching that category.\n \n- hochwasser_keywords.csv:\n - filename: name of PDF file that was extracted. \n - contextualised_keyword: paragraph context in which the exact keyword appears. \n - actual_keyword: actual keyword searched for.\n - category: category of hochwasser keyword(hq100, hqhaufig, hqextrem)\n\n- All the files in knowledge_extraction_agent are .json files which contain the following structure:\n - id: id of document extracted.\n - keyword_input: fuzzy keyword input for the value extraction (context paragraph).\n - keyword_agent_response: result of the agent.\n - keyword_extracted_value: extracted value from agent.\n - validation: validation of result.",
"## Dataset Creation",
"#### Initial Data Collection and Normalization\n\nThis is the result of the keyword extraction from the document_texts.csv file. The exact keyword extraction was done by selecting a set of relevant keywords and searching for them in the text. Meanwhile, the agent keyword extraction is the result of searching for certain keywords using fuzzy search to get the context surrounding them, and extracting relevant values with GPT.",
"## Considerations for Using the Data",
"### Discussion of Biases\n\nThe results of this keyword and agent results were NOT validated manually. Therefore, this is why we provide the contextual paragraph of the values: the information should be double-checked by professionals."
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for Keyword Extraction",
"## Dataset Description\n\n Homepage: DSSGx Munich organization page.\n\n \n Repository: GitHub.",
"### Dataset Summary\n\nThis folder contains the exact keyword extraction and agent information extraction datasets.",
"## Dataset Structure",
"### Folder structure\n\n- exact_search\n - baunvo_keywords.csv -> appearance of BauNVO keywords in each document. \n - hochwasser_keywords.csv -> appearance of hochwasser-related keywords in each document. \n \n- knowledge_extraction_agent\n - URL -> length of firsthöhe detected by agent and result from fuzzy keyword search. \n - URL -> Geschossflächenzahl detected by agent and result from fuzzy keyword search. \n - URL -> Grundflächenzahl detected by agent and result from fuzzy keyword search. \n - max_h.json -> Maximale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - min_h.json -> Minimale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - URL -> Traufhöhe detected by agent and result from fuzzy keyword search.",
"### Data Fields\n\n- baunvo_keywords.csv:\n - filename: name of PDF file that was extracted. \n - columns baunvo-XX and 13b: names of the categories that were searched for, and keywords that appeared matching that category.\n \n- hochwasser_keywords.csv:\n - filename: name of PDF file that was extracted. \n - contextualised_keyword: paragraph context in which the exact keyword appears. \n - actual_keyword: actual keyword searched for.\n - category: category of hochwasser keyword(hq100, hqhaufig, hqextrem)\n\n- All the files in knowledge_extraction_agent are .json files which contain the following structure:\n - id: id of document extracted.\n - keyword_input: fuzzy keyword input for the value extraction (context paragraph).\n - keyword_agent_response: result of the agent.\n - keyword_extracted_value: extracted value from agent.\n - validation: validation of result.",
"## Dataset Creation",
"#### Initial Data Collection and Normalization\n\nThis is the result of the keyword extraction from the document_texts.csv file. The exact keyword extraction was done by selecting a set of relevant keywords and searching for them in the text. Meanwhile, the agent keyword extraction is the result of searching for certain keywords using fuzzy search to get the context surrounding them, and extracting relevant values with GPT.",
"## Considerations for Using the Data",
"### Discussion of Biases\n\nThe results of this keyword and agent results were NOT validated manually. Therefore, this is why we provide the contextual paragraph of the values: the information should be double-checked by professionals."
] |
[
11,
8,
21,
24,
6,
189,
228,
5,
92,
8,
49
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for Keyword Extraction## Dataset Description\n\n Homepage: DSSGx Munich organization page.\n\n \n Repository: GitHub.### Dataset Summary\n\nThis folder contains the exact keyword extraction and agent information extraction datasets.## Dataset Structure### Folder structure\n\n- exact_search\n - baunvo_keywords.csv -> appearance of BauNVO keywords in each document. \n - hochwasser_keywords.csv -> appearance of hochwasser-related keywords in each document. \n \n- knowledge_extraction_agent\n - URL -> length of firsthöhe detected by agent and result from fuzzy keyword search. \n - URL -> Geschossflächenzahl detected by agent and result from fuzzy keyword search. \n - URL -> Grundflächenzahl detected by agent and result from fuzzy keyword search. \n - max_h.json -> Maximale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - min_h.json -> Minimale gebäudehöhe detected by agent and result from fuzzy keyword search. \n - URL -> Traufhöhe detected by agent and result from fuzzy keyword search.### Data Fields\n\n- baunvo_keywords.csv:\n - filename: name of PDF file that was extracted. \n - columns baunvo-XX and 13b: names of the categories that were searched for, and keywords that appeared matching that category.\n \n- hochwasser_keywords.csv:\n - filename: name of PDF file that was extracted. \n - contextualised_keyword: paragraph context in which the exact keyword appears. \n - actual_keyword: actual keyword searched for.\n - category: category of hochwasser keyword(hq100, hqhaufig, hqextrem)\n\n- All the files in knowledge_extraction_agent are .json files which contain the following structure:\n - id: id of document extracted.\n - keyword_input: fuzzy keyword input for the value extraction (context paragraph).\n - keyword_agent_response: result of the agent.\n - keyword_extracted_value: extracted value from agent.\n - validation: validation of result.## Dataset Creation"
] |
b0373ec49ea59440c5488ac8ce02b105fe8e2c71
|
# Dataset Card for "thbud-doc-ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
napatswift/thbud-doc-ocr
|
[
"region:us"
] |
2023-09-25T07:44:29+00:00
|
{"dataset_info": {"features": [{"name": "words", "sequence": "string"}, {"name": "norm_bboxes", "sequence": {"sequence": "float64"}}, {"name": "ner_tags", "sequence": "null"}, {"name": "class", "dtype": {"class_label": {"names": {"0": "toc", "1": "entry", "2": "other"}}}}], "splits": [{"name": "train", "num_bytes": 6887148, "num_examples": 1078}], "download_size": 2658905, "dataset_size": 6887148}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T07:44:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "thbud-doc-ocr"
More Information needed
|
[
"# Dataset Card for \"thbud-doc-ocr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"thbud-doc-ocr\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"thbud-doc-ocr\"\n\nMore Information needed"
] |
75a4fdca6beb921bd46124266c60cb0b6abe8708
|
# Dataset of tang_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!)
This is the dataset of tang_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are `short_hair, bangs, blue_eyes, grey_hair, ribbon, neck_ribbon, red_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 736.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 354.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1273 | 821.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 621.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1273 | 1.28 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tang_keke_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
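If you only need a particular outfit rather than the full set, the same `LocalSource` iteration can be filtered by tag. This is a minimal sketch continuing from the snippet above; it assumes `item.meta['tags']` supports membership tests on tag names (consistent with the print statement in the loading example), and the tag name is taken from cluster 0 in the tables below.

```python
# minimal sketch: keep only items from one outfit cluster
# (tag name taken from cluster 0 in the tables below; adjust as needed)
source = LocalSource(dataset_dir)
uniform_items = [
    item for item in source
    if 'yuigaoka_school_uniform' in item.meta['tags']
]
print(f'{len(uniform_items)} images tagged with the school uniform')
```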
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, long_sleeves, smile, solo, white_shirt, yuigaoka_school_uniform, collared_shirt, looking_at_viewer, open_jacket, pinafore_dress, white_background, simple_background, blush, open_mouth, breasts, multicolored_hair |
| 1 | 5 |  |  |  |  |  | 1girl, black_socks, blue_jacket, brown_footwear, grey_dress, light_brown_hair, loafers, long_sleeves, looking_at_viewer, open_jacket, pinafore_dress, shiny_hair, solo, white_background, yuigaoka_school_uniform, collared_shirt, full_body, kneehighs, smile, white_shirt, simple_background, blush, medium_breasts, multicolored_hair, open_mouth, sitting |
| 2 | 26 |  |  |  |  |  | 1girl, smile, solo, white_gloves, looking_at_viewer, elbow_gloves, hair_bow, open_mouth, blush, hairband, white_dress, brown_hair, pink_dress, pink_bow, puffy_short_sleeves |
| 3 | 24 |  |  |  |  |  | 1girl, solo, collarbone, looking_at_viewer, outdoors, smile, navel, blush, day, bracelet, cloud, blue_sky, ocean, sun_hat, hair_ornament, bikini_skirt, flower, blue_bikini, bow, choker, frilled_bikini, medium_breasts, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | grey_dress | long_sleeves | smile | solo | white_shirt | yuigaoka_school_uniform | collared_shirt | looking_at_viewer | open_jacket | pinafore_dress | white_background | simple_background | blush | open_mouth | breasts | multicolored_hair | black_socks | brown_footwear | light_brown_hair | loafers | shiny_hair | full_body | kneehighs | medium_breasts | sitting | white_gloves | elbow_gloves | hair_bow | hairband | white_dress | brown_hair | pink_dress | pink_bow | puffy_short_sleeves | collarbone | outdoors | navel | day | bracelet | cloud | blue_sky | ocean | sun_hat | hair_ornament | bikini_skirt | flower | blue_bikini | bow | choker | frilled_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:---------------|:--------|:-------|:--------------|:--------------------------|:-----------------|:--------------------|:--------------|:-----------------|:-------------------|:--------------------|:--------|:-------------|:----------|:--------------------|:--------------|:-----------------|:-------------------|:----------|:-------------|:------------|:------------|:-----------------|:----------|:---------------|:---------------|:-----------|:-----------|:--------------|:-------------|:-------------|:-----------|:----------------------|:-------------|:-----------|:--------|:------|:-----------|:--------|:-----------|:--------|:----------|:----------------|:---------------|:---------|:--------------|:------|:---------|:-----------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 26 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | X | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | X | X | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/tang_keke_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T08:02:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:43:11+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tang\_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!)
======================================================
This is the dataset of tang\_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are 'short\_hair, bangs, blue\_eyes, grey\_hair, ribbon, neck\_ribbon, red\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
49c58ef99492dda5489e7f47438609e155a8e4a7
|
# Dataset of yuuki_setsuna/優木せつ菜/유키세츠나 (Love Live! School Idol Festival ALL STARS)
This is the dataset of yuuki_setsuna/優木せつ菜/유키세츠나 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `long_hair, black_hair, bangs, grey_eyes, breasts, one_side_up, sidelocks, hair_ornament, black_eyes, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 898.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 416.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1323 | 960.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 746.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1323 | 1.51 GiB | [Download](https://huggingface.co/datasets/CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
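Continuing from the snippet above, here is a rough sketch for surveying the tag distribution before picking a cluster from the tables below. It assumes iterating `item.meta['tags']` yields tag names, as suggested by the loading example; `list(...)` is used so each tag is counted once per image whether the tags are stored as a list or a mapping.

```python
from collections import Counter

# rough sketch: tally how often each tag occurs across the dataset
tag_counts = Counter()
for item in LocalSource(dataset_dir):
    # list(...) yields tag names for both list- and dict-shaped tag metadata
    tag_counts.update(list(item.meta['tags']))
print(tag_counts.most_common(20))
```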
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, smile, solo, blush, short_sleeves, summer_uniform, upper_body, simple_background, white_background, white_shirt, black_vest, collared_shirt, ribbon, skirt |
| 1 | 7 |  |  |  |  |  | 1girl, blush, looking_at_viewer, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, short_sleeves, simple_background, solo, white_background, white_shirt, collared_shirt, neck_ribbon, smile, summer_uniform, blue_vest, dress_shirt, open_mouth, black_vest, pink_ribbon |
| 2 | 6 |  |  |  |  |  | 1girl, blush, cleavage, collarbone, looking_at_viewer, solo, upper_body, simple_background, smile, white_background, bra, off_shoulder |
| 3 | 5 |  |  |  |  |  | 1girl, blush, cleavage, looking_at_viewer, paw_gloves, solo, open_mouth, simple_background, smile, upper_body, bear_ears, dress, fake_animal_ears, large_breasts, red_bowtie, short_sleeves, red_background, white_background |
| 4 | 5 |  |  |  |  |  | 1girl, feather_hair_ornament, hair_flower, looking_at_viewer, solo, white_gloves, blush, smile, thighhighs, asymmetrical_legwear, happy_birthday, upper_body |
| 5 | 5 |  |  |  |  |  | 1girl, feather_hair_ornament, hair_flower, looking_at_viewer, red_bowtie, solo, white_gloves, white_shirt, blush, red_skirt, smile, collared_shirt, frilled_skirt, center_frills, simple_background, sitting, white_background, yellow_jacket |
| 6 | 8 |  |  |  |  |  | blush, cropped_jacket, feather_hair_ornament, hair_flower, looking_at_viewer, red_bowtie, red_skirt, white_shirt, 1girl, :d, frilled_skirt, open_mouth, solo, white_gloves, center_frills, frilled_shirt, mismatched_legwear, yellow_jacket, blue_rose, blue_thighhighs, idol_clothes, outstretched_arm, upper_teeth_only, double-breasted, half_gloves, short_sleeves, yellow_rose, black_footwear, full_body, knee_boots, simple_background, white_background |
| 7 | 7 |  |  |  |  |  | 1girl, earrings, hat, looking_at_viewer, necktie, fingerless_gloves, red_gloves, solo, fire, blush, smile |
| 8 | 25 |  |  |  |  |  | 1girl, fingerless_gloves, looking_at_viewer, red_gloves, red_headwear, solo, smile, collared_shirt, mini_hat, white_shirt, short_sleeves, blush, open_mouth, skirt, earrings, red_vest, flower, purple_necktie, frilled_shirt |
| 9 | 14 |  |  |  |  |  | 1girl, cleavage, collarbone, braid, double_bun, looking_at_viewer, red_bikini, solo, blush, navel, suspender_shorts, white_background, simple_background, striped_bikini |
| 10 | 9 |  |  |  |  |  | 1girl, bikini, double_bun, looking_at_viewer, solo, braid, cleavage, collarbone, hair_flower, navel, tattoo, earrings, cloud, smile, suspender_shorts, blue_sky, blush, heart |
| 11 | 6 |  |  |  |  |  | 1girl, midriff, navel, red_sleeves, single_glove, single_sleeve, solo, belt, black_shorts, collarbone, fire, star_earrings, asymmetrical_sleeves, epaulettes, jacket, looking_at_viewer, open_mouth, see-through, asymmetrical_gloves |
| 12 | 7 |  |  |  |  |  | 1girl, solo, black_pantyhose, blush, hairclip, red_hoodie, legwear_under_shorts, looking_at_viewer, smile, collarbone, long_sleeves, open_mouth, shoulder_bag, handbag |
| 13 | 7 |  |  |  |  |  | 1girl, cheerleader, midriff, navel, pom_pom_(cheerleading), solo, hair_flower, headphones, looking_at_viewer, red_skirt, smile, blush, crop_top, headset, miniskirt, sleeveless_shirt, arm_up, happy_birthday, holding, pleated_skirt, socks |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | nijigasaki_academy_school_uniform | smile | solo | blush | short_sleeves | summer_uniform | upper_body | simple_background | white_background | white_shirt | black_vest | collared_shirt | ribbon | skirt | plaid_skirt | pleated_skirt | neck_ribbon | blue_vest | dress_shirt | open_mouth | pink_ribbon | cleavage | collarbone | bra | off_shoulder | paw_gloves | bear_ears | dress | fake_animal_ears | large_breasts | red_bowtie | red_background | feather_hair_ornament | hair_flower | white_gloves | thighhighs | asymmetrical_legwear | happy_birthday | red_skirt | frilled_skirt | center_frills | sitting | yellow_jacket | cropped_jacket | :d | frilled_shirt | mismatched_legwear | blue_rose | blue_thighhighs | idol_clothes | outstretched_arm | upper_teeth_only | double-breasted | half_gloves | yellow_rose | black_footwear | full_body | knee_boots | earrings | hat | necktie | fingerless_gloves | red_gloves | fire | red_headwear | mini_hat | red_vest | flower | purple_necktie | braid | double_bun | red_bikini | navel | suspender_shorts | striped_bikini | bikini | tattoo | cloud | blue_sky | heart | midriff | red_sleeves | single_glove | single_sleeve | belt | black_shorts | star_earrings | asymmetrical_sleeves | epaulettes | jacket | see-through | asymmetrical_gloves | black_pantyhose | hairclip | red_hoodie | legwear_under_shorts | long_sleeves | shoulder_bag | handbag | cheerleader | pom_pom_(cheerleading) | headphones | crop_top | headset | miniskirt | sleeveless_shirt | arm_up | holding | socks |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------------|:------------------------------------|:--------|:-------|:--------|:----------------|:-----------------|:-------------|:--------------------|:-------------------|:--------------|:-------------|:-----------------|:---------|:--------|:--------------|:----------------|:--------------|:------------|:--------------|:-------------|:--------------|:-----------|:-------------|:------|:---------------|:-------------|:------------|:--------|:-------------------|:----------------|:-------------|:-----------------|:------------------------|:--------------|:---------------|:-------------|:-----------------------|:-----------------|:------------|:----------------|:----------------|:----------|:----------------|:-----------------|:-----|:----------------|:---------------------|:------------|:------------------|:---------------|:-------------------|:-------------------|:------------------|:--------------|:--------------|:-----------------|:------------|:-------------|:-----------|:------|:----------|:--------------------|:-------------|:-------|:---------------|:-----------|:-----------|:---------|:-----------------|:--------|:-------------|:-------------|:--------|:-------------------|:-----------------|:---------|:---------|:--------|:-----------|:--------|:----------|:--------------|:---------------|:----------------|:-------|:---------------|:----------------|:-----------------------|:-------------|:---------|:--------------|:----------------------|:------------------|:-----------|:-------------|:-----------------------|:---------------|:---------------|:----------|:--------------|:-------------------------|:-------------|:-----------|:----------|:------------|:-------------------|:---------|:----------|:--------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | X | X | X | | | X | X | X | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | X | X | X | X | | X | X | X | | | | | | | | | | | X | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | X | X | | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | X | | X | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 8 |  |  |  |  |  | X | X | | | X | X | X | | | X | X | X | | | | | | | | | | X | | | | | | | | | | | X | | X | X | X | | | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 25 |  |  |  |  |  | X | X | | X | X | X | X | | | | | X | | X | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 14 |  |  |  |  |  | X | X | | | X | X | | | | X | X | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 9 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | X | | | X | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 12 | 7 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | |
| 13 | 7 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/yuuki_setsuna_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T08:20:57+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T03:55:23+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yuuki\_setsuna/優木せつ菜/유키세츠나 (Love Live! School Idol Festival ALL STARS)
=================================================================================
This is the dataset of yuuki\_setsuna/優木せつ菜/유키세츠나 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'long\_hair, black\_hair, bangs, grey\_eyes, breasts, one\_side\_up, sidelocks, hair\_ornament, black\_eyes, medium\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
936d8c416244aec4c07907b29e4efd0cef62c10e
|
# Dataset Card for "llama-2-nuv-intent-big"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-big
|
[
"region:us"
] |
2023-09-25T08:33:00+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 850629, "num_examples": 1563}], "download_size": 131113, "dataset_size": 850629}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T08:33:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-big"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-big\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-big\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-big\"\n\nMore Information needed"
] |
1e2c1c0b3b97e61d456daae1733ad70f82bc5e71
|
# Dataset Card for "abusive-calls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
abhinav-jha/abusive-calls
|
[
"region:us"
] |
2023-09-25T09:14:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52348881.0, "num_examples": 948}, {"name": "test", "num_bytes": 52348880.0, "num_examples": 948}], "download_size": 95446094, "dataset_size": 104697761.0}}
|
2023-09-25T09:15:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "abusive-calls"
More Information needed
|
[
"# Dataset Card for \"abusive-calls\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"abusive-calls\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"abusive-calls\"\n\nMore Information needed"
] |
dfebd67dd399c047f8a925e666b2c66bad572cb7
|
# Dataset Card for "LargerImagesLabelled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JCAI2000/LargerImagesLabelled
|
[
"region:us"
] |
2023-09-25T09:17:29+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 513933217.0, "num_examples": 42}], "download_size": 182096737, "dataset_size": 513933217.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T09:18:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "LargerImagesLabelled"
More Information needed
|
[
"# Dataset Card for \"LargerImagesLabelled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"LargerImagesLabelled\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"LargerImagesLabelled\"\n\nMore Information needed"
] |
125d32454f598278ef86327e311e66c091bca512
|
# Dataset Card for "Network_"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Imran1/Network_
|
[
"region:us"
] |
2023-09-25T09:24:03+00:00
|
{"dataset_info": {"features": [{"name": "resp_pkts", "dtype": "int64"}, {"name": "service", "dtype": "string"}, {"name": "orig_ip_bytes", "dtype": "int64"}, {"name": "local_resp", "dtype": "bool"}, {"name": "missed_bytes", "dtype": "int64"}, {"name": "protocol", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "conn_state", "dtype": "string"}, {"name": "dest_ip", "dtype": "string"}, {"name": "orig_pkts", "dtype": "int64"}, {"name": "community_id", "dtype": "string"}, {"name": "resp_ip_bytes", "dtype": "int64"}, {"name": "dest_port", "dtype": "int64"}, {"name": "orig_bytes", "dtype": "float64"}, {"name": "local_orig", "dtype": "bool"}, {"name": "datetime", "dtype": "string"}, {"name": "history", "dtype": "string"}, {"name": "resp_bytes", "dtype": "float64"}, {"name": "uid", "dtype": "string"}, {"name": "src_port", "dtype": "int64"}, {"name": "ts", "dtype": "float64"}, {"name": "src_ip", "dtype": "string"}, {"name": "mitre_attack_tactics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 966212295, "num_examples": 4068587}], "download_size": 231047526, "dataset_size": 966212295}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T09:24:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Network_"
More Information needed
|
[
"# Dataset Card for \"Network_\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Network_\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Network_\"\n\nMore Information needed"
] |
42b9ef9d20437f0181cf02037e20eb82231ecca3
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
abhiraj8083/ck
|
[
"region:us"
] |
2023-09-25T09:25:37+00:00
|
{}
|
2023-09-25T09:27:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f4a71d56d8f0871fdb9f0951750cdc5efb5f9241
|
# Dataset of tennouji_rina/天王寺璃奈/텐노지리나 (Love Live! School Idol Festival ALL STARS)
This is the dataset of tennouji_rina/天王寺璃奈/텐노지리나 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `pink_hair, bangs, ahoge, blunt_bangs, yellow_eyes, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 735.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 360.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1240 | 805.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 622.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1240 | 1.25 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
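The pre-processed IMG+TXT packages listed in the table above can be fetched the same way and used without waifuc, since per the package descriptions they contain plain images with matching tag text files. Below is a sketch using the `dataset-800.zip` filename from the package table:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# fetch and unpack the 800px IMG+TXT package instead of the raw archive
zip_file = hf_hub_download(
    repo_id='CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```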
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blue_jacket, long_sleeves, looking_at_viewer, nijigasaki_academy_school_uniform, solo, blush, plaid_skirt, white_background, white_shirt, blue_hoodie, blunt_ends, holding, pleated_skirt, simple_background, sketchbook, black_thighhighs, sleeves_past_fingers, hooded_jacket, yellow_ribbon, zettai_ryouiki, blue_skirt |
| 1 | 16 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_background, blush, long_sleeves, holding, nijigasaki_academy_school_uniform, sketchbook, sleeves_past_fingers, upper_body, white_shirt, simple_background, drawing, yellow_ribbon, blue_jacket, blue_hoodie, blunt_ends, medium_hair, smile |
| 2 | 8 |  |  |  |  |  | 1girl, blunt_ends, blush, looking_at_viewer, sidelocks, solo, smile, upper_body, birthday, long_sleeves, medium_hair, holding, wings |
| 3 | 9 |  |  |  |  |  | 1girl, solo, cat_ear_headphones, long_sleeves, wings, jacket, looking_at_viewer, blunt_ends, blush, screen, sidelocks, upper_body |
| 4 | 6 |  |  |  |  |  | 1girl, cat_ear_headphones, long_sleeves, screen, solo, blunt_ends, cat_print, grey_shorts, cat_ears, >_<, wings |
| 5 | 10 |  |  |  |  |  | 1girl, skirt, solo, blunt_ends, cat_ear_headphones, fingerless_gloves, blush, detached_sleeves, looking_at_viewer, black_thighhighs, black_gloves, screen, shirt, breasts, smile |
| 6 | 10 |  |  |  |  |  | 1girl, solo, cat_ear_headphones, sailor_collar, striped_thighhighs, arm_warmers, cat_ears, collarbone, midriff, plaid, white_skirt, >_<, cat_tail, detached_sleeves, navel, pink_neckerchief, screen, white_shirt, belt, feet_out_of_frame, puffy_short_sleeves, white_background, blunt_ends, frills, necktie, simple_background, smile |
| 7 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, white_background, blush, collarbone, upper_body, blunt_ends, medium_breasts, white_shirt, cleavage, small_breasts |
| 8 | 10 |  |  |  |  |  | 1girl, double_bun, fingerless_gloves, solo, elbow_gloves, looking_at_viewer, blush, dress, sidelocks, sleeveless, bare_shoulders, birthday, black_gloves, nail_polish, smile, blue_gloves, heart_hands, thighhighs, upper_body |
| 9 | 10 |  |  |  |  |  | 1girl, solo, twintails, gradient_hair, looking_at_viewer, skirt, blue_hair, choker, official_alternate_costume, polka_dot_legwear, purple_thighhighs, sleeves_past_fingers, blush, frills, hair_bobbles, long_sleeves, collarbone, goggles_around_neck, open_clothes, sitting, smile, white_background, midriff, navel, star_(symbol), white_jacket |
| 10 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blunt_ends, blush, collarbone, navel, outdoors, blue_sky, day, ocean, small_breasts, beach, bikini_skirt, cloud, frilled_bikini, sidelocks, water, jewelry, medium_hair, sand, white_bikini |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | long_sleeves | looking_at_viewer | nijigasaki_academy_school_uniform | solo | blush | plaid_skirt | white_background | white_shirt | blue_hoodie | blunt_ends | holding | pleated_skirt | simple_background | sketchbook | black_thighhighs | sleeves_past_fingers | hooded_jacket | yellow_ribbon | zettai_ryouiki | blue_skirt | upper_body | drawing | medium_hair | smile | sidelocks | birthday | wings | cat_ear_headphones | jacket | screen | cat_print | grey_shorts | cat_ears | >_< | skirt | fingerless_gloves | detached_sleeves | black_gloves | shirt | breasts | sailor_collar | striped_thighhighs | arm_warmers | collarbone | midriff | plaid | white_skirt | cat_tail | navel | pink_neckerchief | belt | feet_out_of_frame | puffy_short_sleeves | frills | necktie | medium_breasts | cleavage | small_breasts | double_bun | elbow_gloves | dress | sleeveless | bare_shoulders | nail_polish | blue_gloves | heart_hands | thighhighs | twintails | gradient_hair | blue_hair | choker | official_alternate_costume | polka_dot_legwear | purple_thighhighs | hair_bobbles | goggles_around_neck | open_clothes | sitting | star_(symbol) | white_jacket | outdoors | blue_sky | day | ocean | beach | bikini_skirt | cloud | frilled_bikini | water | jewelry | sand | white_bikini |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:---------------|:--------------------|:------------------------------------|:-------|:--------|:--------------|:-------------------|:--------------|:--------------|:-------------|:----------|:----------------|:--------------------|:-------------|:-------------------|:-----------------------|:----------------|:----------------|:-----------------|:-------------|:-------------|:----------|:--------------|:--------|:------------|:-----------|:--------|:---------------------|:---------|:---------|:------------|:--------------|:-----------|:------|:--------|:--------------------|:-------------------|:---------------|:--------|:----------|:----------------|:---------------------|:--------------|:-------------|:----------|:--------|:--------------|:-----------|:--------|:-------------------|:-------|:--------------------|:----------------------|:---------|:----------|:-----------------|:-----------|:----------------|:-------------|:---------------|:--------|:-------------|:-----------------|:--------------|:--------------|:--------------|:-------------|:------------|:----------------|:------------|:---------|:-----------------------------|:--------------------|:--------------------|:---------------|:----------------------|:---------------|:----------|:----------------|:---------------|:-----------|:-----------|:------|:--------|:--------|:---------------|:--------|:-----------------|:--------|:----------|:-------|:---------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | X | | X | X | | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | X | | X | X | | | | | X | X | | | | | | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | X | X | | X | X | | | | | X | | | | | | | | | | | X | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | | | X | | X | X | | | | | X | | | | | X | | | | | | | | | X | | | | X | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | | | | X | | | X | X | | X | | | X | | | | | | | | | | | X | | | | X | | X | | | X | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | X | | X | X | | X | X | | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 10 |  |  |  |  |  | X | | | X | | X | X | | | | | | | | | | | | | | | | X | | | X | X | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | | X | X | | X | X | | X | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | X | X | | | | X | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 10 | 8 |  |  |  |  |  | X | | | X | | X | X | | | | | X | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/tennouji_rina_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T09:50:07+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T03:40:41+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tennouji\_rina/天王寺璃奈/텐노지리나 (Love Live! School Idol Festival ALL STARS)
=================================================================================
This is the dataset of tennouji\_rina/天王寺璃奈/텐노지리나 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'pink\_hair, bangs, ahoge, blunt\_bangs, yellow\_eyes, short\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a1d92ada88a68fa580977449aceb340c2faa6d6a
|
# Dataset Card for "Mixed-Arabic-Dataset"
## Mixed Arabic Datasets (MAD)
The Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.
### MAD Repo vs. MAD Main
#### MAD Repo
- **Versatility**: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their interests or requirements.
- **Independent Access**: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.
#### MAD Main or simply MAD
- **Unified Dataframe**: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.
- **Holistic Perspective**: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.
### Why MAD Main?
- **Efficiency**: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.
- **Interoperability**: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.
- **Meta-Analysis**: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.
### Getting Started
- To access individual datasets in their original form, refer to the MAD Repository ([Link to MAD Repo](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo)).
- For a unified view of all datasets, conveniently organized in a dataframe, you are in the right place.
```python
from datasets import load_dataset
dataset = load_dataset("M-A-D/Mixed-Arabic-Dataset-Main")
```
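Once loaded, the unified dataframe can be sliced per source dataset. Below is a minimal sketch, assuming the `DatasetName` and `Text` columns listed in this card's dataset features; `'Some Source Name'` is a hypothetical placeholder for whichever source you are interested in.

```python
train = dataset["train"]

# minimal sketch: keep only rows from a single source dataset
# ('Some Source Name' is a hypothetical placeholder value)
subset = train.filter(lambda row: row["DatasetName"] == "Some Source Name")
print(subset.num_rows)
if subset.num_rows:
    print(subset[0]["Text"][:200])
```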
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1w7_7lL6w7nM9DcDmTZe1Vfiwkio6SA-w?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{mad_2023,
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
```
|
M-A-D/Mixed-Arabic-Dataset-Main
|
[
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:summarization",
"language:ar",
"region:us"
] |
2023-09-25T09:52:11+00:00
|
{"language": ["ar"], "task_categories": ["conversational", "text-generation", "text2text-generation", "translation", "summarization"], "pretty_name": "MAD", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "GenId", "dtype": "int64"}, {"name": "SubId", "dtype": "int64"}, {"name": "DatasetName", "dtype": "string"}, {"name": "DatasetLink", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "MetaData", "struct": [{"name": "AboutAuthor", "dtype": "string"}, {"name": "AboutBook", "dtype": "string"}, {"name": "Author", "dtype": "string"}, {"name": "AuthorName", "dtype": "string"}, {"name": "BookLink", "dtype": "string"}, {"name": "BookName", "dtype": "string"}, {"name": "ChapterLink", "dtype": "string"}, {"name": "ChapterName", "dtype": "string"}, {"name": "Tags", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "float64"}, {"name": "created_date", "dtype": "string"}, {"name": "deleted", "dtype": "bool"}, {"name": "detoxify", "dtype": "null"}, {"name": "emojis", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "labels", "struct": [{"name": "count", "sequence": "int32"}, {"name": "name", "sequence": "string"}, {"name": "value", "sequence": "float64"}]}, {"name": "lang", "dtype": "string"}, {"name": "message_id", "dtype": "string"}, {"name": "message_tree_id", "dtype": "string"}, {"name": "model_name", "dtype": "null"}, {"name": "parent_id", "dtype": "string"}, {"name": "query_id", "dtype": "string"}, {"name": "rank", "dtype": "float64"}, {"name": "review_count", "dtype": "float64"}, {"name": "review_result", "dtype": "bool"}, {"name": "role", "dtype": "string"}, {"name": "synthetic", "dtype": "bool"}, {"name": "title", "dtype": "string"}, {"name": "tree_state", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "user_id", "dtype": "string"}]}, {"name": "ConcatenatedText", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1990497610, "num_examples": 131393}], "download_size": 790648134, "dataset_size": 1990497610}}
|
2023-10-06T16:56:33+00:00
|
[] |
[
"ar"
] |
TAGS
#task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #task_categories-translation #task_categories-summarization #language-Arabic #region-us
|
# Dataset Card for "Mixed-Arabic-Dataset"
## Mixed Arabic Datasets (MAD)
The Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.
### MAD Repo vs. MAD Main
#### MAD Repo
- Versatility: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.
- Independent Access: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.
#### MAD Main or simply MAD
- Unified Dataframe: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.
- Holistic Perspective: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.
### Why MAD Main?
- Efficiency: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.
- Interoperability: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.
- Meta-Analysis: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.
### Getting Started
- To access individual datasets in their original form, refer to the MAD Repository (Link to MAD Repo).
- For a unified view of all datasets, conveniently organized in a dataframe, you are in the right place.
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord!
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: Contribution Guide.
Note: If you'd like to test a contribution before submitting it, feel free to do so on the MAD Test Dataset.
## Citation
|
[
"# Dataset Card for \"Mixed-Arabic-Dataset\"\n\n## Mixed Arabic Datasets (MAD)\n\nThe Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.",
"### MAD Repo vs. MAD Main",
"#### MAD Repo\n- Versatility: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.\n- Independent Access: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.",
"#### MAD Main or simply MAD\n- Unified Dataframe: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.\n- Holistic Perspective: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.",
"### Why MAD Main?\n- Efficiency: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.\n- Interoperability: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.\n- Meta-Analysis: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.",
"### Getting Started\n- To access individual datasets in their original form, refer to the MAD Repository (Link to MAD Repo).\n- For a unified view of all datasets, conveniently organized in a dataframe, you are here in the right place.",
"### Join Us on Discord\n\nFor discussions, contributions, and community interactions, join us on Discord! \n\nThe Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.",
"### MAD Repo vs. MAD Main",
"#### MAD Repo\n- Versatility: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.\n- Independent Access: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.",
"#### MAD Main or simply MAD\n- Unified Dataframe: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.\n- Holistic Perspective: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.",
"### Why MAD Main?\n- Efficiency: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.\n- Interoperability: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.\n- Meta-Analysis: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.",
"### Getting Started\n- To access individual datasets in their original form, refer to the MAD Repository (Link to MAD Repo).\n- For a unified view of all datasets, conveniently organized in a dataframe, you are here in the right place.",
"### Join Us on Discord\n\nFor discussions, contributions, and community interactions, join us on Discord! \n\nThe Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.### MAD Repo vs. MAD Main#### MAD Repo\n- Versatility: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.\n- Independent Access: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.#### MAD Main or simply MAD\n- Unified Dataframe: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.\n- Holistic Perspective: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources."
] |
630bd284b3440b6dbcc14056557341c380bc5fd4
|
# Dataset Card for "balance_network"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Imran1/balance_network
|
[
"region:us"
] |
2023-09-25T10:02:51+00:00
|
{"dataset_info": {"features": [{"name": "resp_pkts", "dtype": "int64"}, {"name": "service", "dtype": "string"}, {"name": "orig_ip_bytes", "dtype": "int64"}, {"name": "local_resp", "dtype": "bool"}, {"name": "missed_bytes", "dtype": "int64"}, {"name": "protocol", "dtype": "string"}, {"name": "duration", "dtype": "float64"}, {"name": "conn_state", "dtype": "string"}, {"name": "dest_ip", "dtype": "string"}, {"name": "orig_pkts", "dtype": "int64"}, {"name": "community_id", "dtype": "string"}, {"name": "resp_ip_bytes", "dtype": "int64"}, {"name": "dest_port", "dtype": "int64"}, {"name": "orig_bytes", "dtype": "float64"}, {"name": "local_orig", "dtype": "bool"}, {"name": "datetime", "dtype": "string"}, {"name": "history", "dtype": "string"}, {"name": "resp_bytes", "dtype": "float64"}, {"name": "uid", "dtype": "string"}, {"name": "src_port", "dtype": "int64"}, {"name": "ts", "dtype": "float64"}, {"name": "src_ip", "dtype": "string"}, {"name": "mitre_attack_tactics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 484491622, "num_examples": 2018296}], "download_size": 100944771, "dataset_size": 484491622}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T10:03:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "balance_network"
More Information needed
|
[
"# Dataset Card for \"balance_network\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"balance_network\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"balance_network\"\n\nMore Information needed"
] |
92b8d336829e879ef2bfe96350f9f856e17a46bb
|
# Dataset of hazuki_ren/葉月恋 (Love Live! Superstar!!)
This is the dataset of hazuki_ren/葉月恋 (Love Live! Superstar!!), containing 474 images and their tags.
The core tags of this character are `black_hair, long_hair, yellow_eyes, bangs, ponytail, high_ponytail, bow, breasts, ribbon, hair_bow, shiny_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 474 | 638.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hazuki_ren_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 474 | 331.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hazuki_ren_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1112 | 718.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hazuki_ren_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 474 | 546.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hazuki_ren_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1112 | 1.08 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hazuki_ren_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hazuki_ren_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, long_sleeves, looking_at_viewer, neck_ribbon, open_jacket, red_ribbon, smile, solo, yuigaoka_school_uniform, birthday, blush, pinafore_dress, medium_breasts, upper_body |
| 1 | 14 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, looking_at_viewer, neck_ribbon, open_jacket, pinafore_dress, red_ribbon, solo, yuigaoka_school_uniform, blush, collared_shirt, long_sleeves, smile, simple_background, white_background, closed_mouth, cowboy_shot, white_shirt |
| 2 | 5 |  |  |  |  |  | 1girl, blue_jacket, brown_footwear, closed_mouth, full_body, grey_dress, loafers, long_sleeves, looking_at_viewer, neck_ribbon, open_jacket, pinafore_dress, red_ribbon, smile, solo, standing, white_background, white_shirt, white_socks, yuigaoka_school_uniform, collared_shirt, simple_background, white_bow, arms_behind_back, blush, kneehighs, leaning_forward |
| 3 | 6 |  |  |  |  |  | 1girl, birthday, looking_at_viewer, smile, solo, upper_body, blush, shiny |
| 4 | 10 |  |  |  |  |  | 1girl, birthday, looking_at_viewer, smile, solo, white_gloves, blush, medium_breasts, shiny, upper_body, sleeveless, white_dress, bubble, signature |
| 5 | 5 |  |  |  |  |  | red_bowtie, school_uniform, 1girl, collared_shirt, solo, upper_body, blush, closed_mouth, looking_at_viewer, short_sleeves, medium_breasts, skirt, white_shirt |
| 6 | 6 |  |  |  |  |  | 1girl, open_jacket, solo, full_body, looking_at_viewer, thigh_strap, white_footwear, white_jacket, dress, frills, skirt, detached_collar, simple_background, smile, white_background |
| 7 | 7 |  |  |  |  |  | 1girl, blush, cleavage, collarbone, looking_at_viewer, navel, solo, medium_breasts, white_background, white_bikini, cowboy_shot, parted_lips, simple_background, smile, stomach, bare_shoulders, blue_bikini, halterneck, large_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | grey_dress | long_sleeves | looking_at_viewer | neck_ribbon | open_jacket | red_ribbon | smile | solo | yuigaoka_school_uniform | birthday | blush | pinafore_dress | medium_breasts | upper_body | collared_shirt | simple_background | white_background | closed_mouth | cowboy_shot | white_shirt | brown_footwear | full_body | loafers | standing | white_socks | white_bow | arms_behind_back | kneehighs | leaning_forward | shiny | white_gloves | sleeveless | white_dress | bubble | signature | red_bowtie | school_uniform | short_sleeves | skirt | thigh_strap | white_footwear | white_jacket | dress | frills | detached_collar | cleavage | collarbone | navel | white_bikini | parted_lips | stomach | bare_shoulders | blue_bikini | halterneck | large_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:---------------|:--------------------|:--------------|:--------------|:-------------|:--------|:-------|:--------------------------|:-----------|:--------|:-----------------|:-----------------|:-------------|:-----------------|:--------------------|:-------------------|:---------------|:--------------|:--------------|:-----------------|:------------|:----------|:-----------|:--------------|:------------|:-------------------|:------------|:------------------|:--------|:---------------|:-------------|:--------------|:---------|:------------|:-------------|:-----------------|:----------------|:--------|:--------------|:-----------------|:---------------|:--------|:---------|:------------------|:-----------|:-------------|:--------|:---------------|:--------------|:----------|:-----------------|:--------------|:-------------|:----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | X | X | | | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | X | | | | X | X | | X | X | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | | | X | | | | X | X | | X | X | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | X | | | | | X | | | X | | X | X | X | | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | X | | X | | X | X | | | | | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | | X | | | | X | X | | | X | | X | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/hazuki_ren_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T10:04:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:51:47+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hazuki\_ren/葉月恋 (Love Live! Superstar!!)
===================================================
This is the dataset of hazuki\_ren/葉月恋 (Love Live! Superstar!!), containing 474 images and their tags.
The core tags of this character are 'black\_hair, long\_hair, yellow\_eyes, bangs, ponytail, high\_ponytail, bow, breasts, ribbon, hair\_bow, shiny\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
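The snippet below reproduces the waifuc loader from the full card above (same `repo_id`), so the instruction can be followed directly:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/hazuki_ren_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc and iterate over the tagged images
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```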
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
611204e5a32dcbbbb6389979a4e0d6b5fbd2323f
|
# Dataset Card for "email_dataset_vb_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
786Vaibhav786/email_dataset_vb_1
|
[
"region:us"
] |
2023-09-25T10:18:03+00:00
|
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19568, "num_examples": 10}], "download_size": 25225, "dataset_size": 19568}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T10:18:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "email_dataset_vb_1"
More Information needed
|
[
"# Dataset Card for \"email_dataset_vb_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"email_dataset_vb_1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"email_dataset_vb_1\"\n\nMore Information needed"
] |
f125b6f58110a1aa7d7d4ad06f3ecee46f95239d
|
# Dataset Card for "pickapic_v2_no_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yuvalkirstain/pickapic_v2_no_images
|
[
"region:us"
] |
2023-09-25T10:34:27+00:00
|
{"dataset_info": {"features": [{"name": "are_different", "dtype": "bool"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "caption", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "has_label", "dtype": "bool"}, {"name": "image_0_uid", "dtype": "string"}, {"name": "image_0_url", "dtype": "string"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_1_url", "dtype": "string"}, {"name": "label_0", "dtype": "float64"}, {"name": "label_1", "dtype": "float64"}, {"name": "model_0", "dtype": "string"}, {"name": "model_1", "dtype": "string"}, {"name": "ranking_id", "dtype": "int64"}, {"name": "user_id", "dtype": "int64"}, {"name": "num_example_per_prompt", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 565913782, "num_examples": 959040}, {"name": "validation", "num_bytes": 11465384, "num_examples": 20596}, {"name": "test", "num_bytes": 12098794, "num_examples": 20716}, {"name": "validation_unique", "num_bytes": 280879, "num_examples": 500}, {"name": "test_unique", "num_bytes": 277834, "num_examples": 500}], "download_size": 291928467, "dataset_size": 590036673}}
|
2023-09-25T10:34:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pickapic_v2_no_images"
More Information needed
|
[
"# Dataset Card for \"pickapic_v2_no_images\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pickapic_v2_no_images\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pickapic_v2_no_images\"\n\nMore Information needed"
] |
d15e0b2ea19ec1f215df113b73b97d3ab3200b02
|
# Dataset Card for "top10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ricardosantoss/top10
|
[
"region:us"
] |
2023-09-25T10:43:13+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "ICD9_CODE", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 295026309, "num_examples": 31478}, {"name": "test", "num_bytes": 37572145, "num_examples": 4000}, {"name": "validation", "num_bytes": 37192991, "num_examples": 4000}], "download_size": 206008521, "dataset_size": 369791445}}
|
2023-09-25T10:43:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "top10"
More Information needed
|
[
"# Dataset Card for \"top10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"top10\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"top10\"\n\nMore Information needed"
] |
abcdb7870177995f527e568fca24858349f0bb2a
|
# Reviews on Messengers Dataset 🤳 ⭐️
The Reviews on Messengers Dataset is a comprehensive collection of the **200** most recent customer reviews of **6** messengers, obtained from the popular app store, **Google Play**. See the list of the apps below.
This dataset encompasses reviews written in **5** different languages: English, French, German, Italian, Japanese.
The dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.
The dataset can be highly valuable in training and fine-tuning machine learning models to automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews.
The data was scraped with the `google-play-scraper` Python library by [TrainingData Team](https://trainingdata.pro/data-market?utm_source=kaggle&utm_medium=cpc&utm_campaign=6000-messengers-reviews-google-play). A minimal usage sketch appears after the app list below.
### Apps in the dataset and their IDs:
- Telegram: `'org.telegram.messenger'`,
- Facebook Messenger: `'com.facebook.orca'`,
- WhatsApp: `'com.whatsapp'`,
- Viber: `'com.viber.voip'`,
- Snapchat: `'com.snapchat.android'`,
- WeChat: `'com.tencent.mm'`.
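A minimal sketch of the collection step with `google-play-scraper`, using one of the app IDs above. This is not the exact production script; the locale, country, and count below are illustrative:

```python
# Hedged sketch: pulls recent reviews for one app with google-play-scraper.
# The app ID comes from the list above; lang/country/count are illustrative.
from google_play_scraper import Sort, reviews

result, _continuation_token = reviews(
    'org.telegram.messenger',  # Telegram
    lang='en',                 # one of the dataset's languages (EN/FR/DE/IT/JP)
    country='us',
    sort=Sort.NEWEST,          # the dataset keeps the most recent reviews
    count=200,
)

for r in result[:3]:
    # the dict keys mirror the fields listed in the Content section below
    print(r['reviewId'], r['userName'], r['score'], r['content'][:60])
```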
### Languages in the dataset:
- English: `EN`,
- French: `FR`,
- German: `DE`,
- Italian: `IT`,
- Japanese: `JP`
# Content
For each item, we extracted the following fields (a loading sketch follows the list):
- **reviewId**: ID of the review,
- **userName**: name of the reviewer,
- **userImage**: profile image of the reviewer,
- **content**: text of the review,
- **score**: number of stars given to the review,
- **thumbsUpCount**: number of likes on the review,
- **at**: date of the review,
- **replyContent**: text of the developer's comment,
- **repliedAt**: date of the developer's comment,
- **appVersion**: version of the app,
- **userLang**: language of the review,
- **app_id**: ID of the app
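A hedged loading sketch via the `datasets` library, assuming the Hub repo's files can be read with the default configuration and a `train` split:

```python
from datasets import load_dataset

# Assumption: the repo loads with its default config and exposes a "train" split.
ds = load_dataset("TrainingDataPro/messengers-reviews-google-play", split="train")

# Inspect the fields described above for the first review.
row = ds[0]
print(row["app_id"], row["userLang"], row["score"], row["content"][:60])
```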
# Try to find the messenger with the most attentive support 😉
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=messengers-reviews-google-play)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/messengers-reviews-google-play
|
[
"task_categories:text-classification",
"language:en",
"language:fr",
"language:ja",
"language:it",
"language:de",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] |
2023-09-25T10:57:44+00:00
|
{"language": ["en", "fr", "ja", "it", "de"], "license": "cc-by-nc-nd-4.0", "task_categories": ["text-classification"], "tags": ["code", "finance"]}
|
2023-09-25T11:01:08+00:00
|
[] |
[
"en",
"fr",
"ja",
"it",
"de"
] |
TAGS
#task_categories-text-classification #language-English #language-French #language-Japanese #language-Italian #language-German #license-cc-by-nc-nd-4.0 #code #finance #region-us
|
# Reviews on Messengers Dataset ⭐️
The Reviews on Messengers Dataset is a comprehensive collection of the 200 most recent customer reviews of 6 messengers, obtained from the popular app store, Google Play. See the list of the apps below.
This dataset encompasses reviews written in 5 different languages: English, French, German, Italian, Japanese.
The dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.
The dataset can be highly valuable in training and fine-tuning machine learning models to automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews.
The data was scraped with the 'google-play-scraper' Python library by TrainingData Team.
### Apps in the dataset and their IDs:
- Telegram: ''org.telegram.messenger'',
- Facebook Messenger: ''URL'',
- WhatsApp: ''com.whatsapp'',
- Viber: ''URL'',
- Snapchat: ''com.snapchat.android'',
- WeChat: ''URL''.
### Languages in the dataset:
- English: 'EN',
- French: 'FR',
- German: 'DE',
- Italian: 'IT',
- Japanese: 'JP'
# Content
For each item, we extracted:
- reviewId: ID of the review,
- userName: name of the reviewer,
- userImage: profile image of the reviewer,
- content: text of the review,
- score: number of stars given to the review,
- thumbsUpCount: number of likes on the review,
- at: date of the review,
- replyContent: text of the developer's comment,
- repliedAt: date of the developer's comment,
- appVersion: version of the app,
- userLang: language of the review,
- app_id: ID of the app
# Try to find the messenger with the most attentive support
## TrainingData provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: URL
TrainingData's GitHub: URL
|
[
"# Reviews on Messengers Dataset ⭐️\n\nThe Reviews on Messengers Dataset is a comprehensive collection of 200 the most recent customer reviews on 6 messengers obtained from the popular app store, Google Play. See the list of the apps below.\nThis dataset encompasses reviews written in 5 different languages: English, French, German, Italian, Japanese.\n\nThe dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.\n\nThe dataset can be highly valuable in training and fine-tuning machine learning models to automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews. \n\nThe data was scraped with 'google-play-scraper' python lib by TrainingData Team.",
"### Apps in the dataset and their IDs: \n- Telegram: ''org.telegram.messenger'',\n- Facebook Messenger: ''URL'',\n- Whats App: ''com.whatsapp'',\n- Viber: ''URL'',\n- Snapchat: ''com.snapchat.android'',\n- We Chat: ''URL''.",
"### Languages in the dataset:\n- English: 'EN',\n- French: 'FR',\n- German: 'DE',\n- Italian : 'IT',\n- Japanese: 'JP'",
"# Content\nFor each item, we extracted:\n- reviewId: ID of the review,\n- userName: name of the reviewer,\n- userImage: profile image of the reviewer,\n- content: text of the review,\n- score: number of stars given to the review,\n- thumbsUpCount: number of likes on the review,\n- at: date of the review,\n- replyContent: text of the developer's comment,\n- repliedAt: date of the developer's comment,\n- appVersion: version of the app,\n- userLang: language of the review,\n- app_id: ID of the app",
"# Try to find the messenger with the most attentive support",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
"TAGS\n#task_categories-text-classification #language-English #language-French #language-Japanese #language-Italian #language-German #license-cc-by-nc-nd-4.0 #code #finance #region-us \n",
"# Reviews on Messengers Dataset ⭐️\n\nThe Reviews on Messengers Dataset is a comprehensive collection of 200 the most recent customer reviews on 6 messengers obtained from the popular app store, Google Play. See the list of the apps below.\nThis dataset encompasses reviews written in 5 different languages: English, French, German, Italian, Japanese.\n\nThe dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.\n\nThe dataset can be highly valuable in training and fine-tuning machine learning models to automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews. \n\nThe data was scraped with 'google-play-scraper' python lib by TrainingData Team.",
"### Apps in the dataset and their IDs: \n- Telegram: ''org.telegram.messenger'',\n- Facebook Messenger: ''URL'',\n- Whats App: ''com.whatsapp'',\n- Viber: ''URL'',\n- Snapchat: ''com.snapchat.android'',\n- We Chat: ''URL''.",
"### Languages in the dataset:\n- English: 'EN',\n- French: 'FR',\n- German: 'DE',\n- Italian : 'IT',\n- Japanese: 'JP'",
"# Content\nFor each item, we extracted:\n- reviewId: ID of the review,\n- userName: name of the reviewer,\n- userImage: profile image of the reviewer,\n- content: text of the review,\n- score: number of stars given to the review,\n- thumbsUpCount: number of likes on the review,\n- at: date of the review,\n- replyContent: text of the developer's comment,\n- repliedAt: date of the developer's comment,\n- appVersion: version of the app,\n- userLang: language of the review,\n- app_id: ID of the app",
"# Try to find the messenger with the most attentive support",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL"
] |
[
60,
177,
76,
43,
137,
12,
39
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #language-French #language-Japanese #language-Italian #language-German #license-cc-by-nc-nd-4.0 #code #finance #region-us \n# Reviews on Messengers Dataset ⭐️\n\nThe Reviews on Messengers Dataset is a comprehensive collection of 200 the most recent customer reviews on 6 messengers obtained from the popular app store, Google Play. See the list of the apps below.\nThis dataset encompasses reviews written in 5 different languages: English, French, German, Italian, Japanese.\n\nThe dataset's multilingual nature makes it useful for natural language processing tasks, sentiment analysis algorithms, and other machine learning applications that require diverse language data for training and evaluation.\n\nThe dataset can be highly valuable in training and fine-tuning machine learning models to automatically classify sentiments, predict customer satisfaction, or extract key information from customer reviews. \n\nThe data was scraped with 'google-play-scraper' python lib by TrainingData Team.### Apps in the dataset and their IDs: \n- Telegram: ''org.telegram.messenger'',\n- Facebook Messenger: ''URL'',\n- Whats App: ''com.whatsapp'',\n- Viber: ''URL'',\n- Snapchat: ''com.snapchat.android'',\n- We Chat: ''URL''.### Languages in the dataset:\n- English: 'EN',\n- French: 'FR',\n- German: 'DE',\n- Italian : 'IT',\n- Japanese: 'JP'# Content\nFor each item, we extracted:\n- reviewId: ID of the review,\n- userName: name of the reviewer,\n- userImage: profile image of the reviewer,\n- content: text of the review,\n- score: number of stars given to the review,\n- thumbsUpCount: number of likes on the review,\n- at: date of the review,\n- replyContent: text of the developer's comment,\n- repliedAt: date of the developer's comment,\n- appVersion: version of the app,\n- userLang: language of the review,\n- app_id: ID of the app# Try to find the messenger with the most attentive support"
] |
5fc121cea47c07bd98e59d8dad9ab0e7407d7acf
|
# Dataset Card for "what_where_when_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/what_where_when_50k
|
[
"region:us"
] |
2023-09-25T11:07:12+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "uuid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42224521.044228844, "num_examples": 50000}], "download_size": 24272957, "dataset_size": 42224521.044228844}}
|
2023-09-25T11:07:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "what_where_when_50k"
More Information needed
|
[
"# Dataset Card for \"what_where_when_50k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"what_where_when_50k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"what_where_when_50k\"\n\nMore Information needed"
] |
1fc5a1a4b382faf8de65d4ed3d35a128008b0b5f
|
# Dataset Card for "competition_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/competition_math
|
[
"region:us"
] |
2023-09-25T11:10:37+00:00
|
{"dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5984772, "num_examples": 7500}], "download_size": 2992145, "dataset_size": 5984772}}
|
2023-09-25T11:10:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "competition_math"
More Information needed
|
[
"# Dataset Card for \"competition_math\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"competition_math\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"competition_math\"\n\nMore Information needed"
] |
a3f6d8d9686226b8449a292f7ce2ebde3ae227b2
|
# Dataset of heanna_sumire/平安名すみれ/헤안나스미레 (Love Live! Superstar!!)
This is the dataset of heanna_sumire/平安名すみれ/헤안나스미레 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are `blonde_hair, bangs, green_eyes, long_hair, blunt_bangs, hairband, breasts, ribbon, red_hairband, red_ribbon, neck_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 714.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/heanna_sumire_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 353.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/heanna_sumire_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1209 | 782.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/heanna_sumire_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 605.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/heanna_sumire_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1209 | 1.19 GiB | [Download](https://huggingface.co/datasets/CyberHarem/heanna_sumire_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/heanna_sumire_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 25 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, looking_at_viewer, solo, white_shirt, yuigaoka_school_uniform, open_jacket, pinafore_dress, simple_background, collared_shirt, white_background, closed_mouth, smile, long_sleeves, blush, upper_body, orange_hairband |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, pinafore_dress, short_sleeves, solo, white_background, white_shirt, yuigaoka_school_uniform, blush, closed_mouth, collared_shirt, simple_background, smile, grey_dress, hand_on_hip, upper_body |
| 2 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, skirt, smile, solo, birthday, open_mouth, white_thighhighs, zettai_ryouiki, jacket, medium_breasts, one_eye_closed |
| 3 | 44 |  |  |  |  |  | 1girl, solo, looking_at_viewer, drill_hair, elbow_gloves, tiara, smile, purple_dress, white_gloves, puffy_short_sleeves, blush, upper_body, pearl_necklace, collarbone, purple_gloves |
| 4 | 16 |  |  |  |  |  | 1girl, crop_top, midriff, solo, eyewear_on_headwear, sunglasses, baseball_cap, looking_at_viewer, navel, green_shirt, collarbone, red_headwear, shorts, white_thighhighs, blush, medium_breasts, teeth, white_background, grin, hand_on_hip, one_eye_closed, short_sleeves, simple_background |
| 5 | 18 |  |  |  |  |  | 1girl, solo, looking_at_viewer, miko, red_hakama, skirt, holding, wide_sleeves, broom, smile, blush, white_kimono |
| 6 | 6 |  |  |  |  |  | 1girl, blush, cleavage, looking_at_viewer, simple_background, solo, white_background, collarbone, large_breasts, navel, :o, cowboy_shot, thighs, white_bikini |
| 7 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, white_background, white_thighhighs, ass, medium_breasts, white_panties, blush, shiny_skin, thighs, anus, from_behind, lingerie, looking_back, lying, nipples, thong, white_bra |
| 8 | 6 |  |  |  |  |  | green_bikini, hair_ornament, looking_at_viewer, necklace, star_(symbol), blush, cleavage, navel, one_eye_closed, smile, 1girl, bare_shoulders, collarbone, large_breasts, medium_breasts, outdoors, side_ponytail, blue_sky, cloud, day, frills, single_hair_bun, solo_focus, water |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | grey_dress | looking_at_viewer | solo | white_shirt | yuigaoka_school_uniform | open_jacket | pinafore_dress | simple_background | collared_shirt | white_background | closed_mouth | smile | long_sleeves | blush | upper_body | orange_hairband | short_sleeves | hand_on_hip | skirt | birthday | open_mouth | white_thighhighs | zettai_ryouiki | jacket | medium_breasts | one_eye_closed | drill_hair | elbow_gloves | tiara | purple_dress | white_gloves | puffy_short_sleeves | pearl_necklace | collarbone | purple_gloves | crop_top | midriff | eyewear_on_headwear | sunglasses | baseball_cap | navel | green_shirt | red_headwear | shorts | teeth | grin | miko | red_hakama | holding | wide_sleeves | broom | white_kimono | cleavage | large_breasts | :o | cowboy_shot | thighs | white_bikini | ass | white_panties | shiny_skin | anus | from_behind | lingerie | looking_back | lying | nipples | thong | white_bra | green_bikini | hair_ornament | necklace | star_(symbol) | bare_shoulders | outdoors | side_ponytail | blue_sky | cloud | day | frills | single_hair_bun | solo_focus | water |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:--------------------|:-------|:--------------|:--------------------------|:--------------|:-----------------|:--------------------|:-----------------|:-------------------|:---------------|:--------|:---------------|:--------|:-------------|:------------------|:----------------|:--------------|:--------|:-----------|:-------------|:-------------------|:-----------------|:---------|:-----------------|:-----------------|:-------------|:---------------|:--------|:---------------|:---------------|:----------------------|:-----------------|:-------------|:----------------|:-----------|:----------|:----------------------|:-------------|:---------------|:--------|:--------------|:---------------|:---------|:--------|:-------|:-------|:-------------|:----------|:---------------|:--------|:---------------|:-----------|:----------------|:-----|:--------------|:---------|:---------------|:------|:----------------|:-------------|:-------|:--------------|:-----------|:---------------|:--------|:----------|:--------|:------------|:---------------|:----------------|:-----------|:----------------|:-----------------|:-----------|:----------------|:-----------|:--------|:------|:---------|:------------------|:-------------|:--------|
| 0 | 25 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | X | X | X | | X | X | X | X | X | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | X | X | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 44 |  |  |  |  |  | X | | | X | X | | | | | | | | | X | | X | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 16 |  |  |  |  |  | X | | | X | X | | | | | X | | X | | | | X | | | X | X | | | | X | | | X | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 18 |  |  |  |  |  | X | | | X | X | | | | | | | | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | X | X | | | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | X | | | | | X | | X | | | | X | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | | X | | | | | | | | | | X | | X | | | | | | | | | | | X | X | | | | | | | | X | | | | | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/heanna_sumire_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T11:12:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T06:52:49+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of heanna\_sumire/平安名すみれ/헤안나스미레 (Love Live! Superstar!!)
================================================================
This is the dataset of heanna\_sumire/平安名すみれ/헤안나스미레 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, bangs, green\_eyes, long\_hair, blunt\_bangs, hairband, breasts, ribbon, red\_hairband, red\_ribbon, neck\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
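The snippet below reproduces the waifuc loader from the full card above (same `repo_id`), so the instruction can be followed directly:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/heanna_sumire_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc and iterate over the tagged images
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```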
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
bd559a76a5564f0fbb24b990855f0f9695651e00
|
# Dataset Card for "common_languages_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
barto17/common_languages_preprocessed
|
[
"region:us"
] |
2023-09-25T11:18:31+00:00
|
{"dataset_info": {"features": [{"name": "labels", "dtype": {"class_label": {"names": {"0": "Arabic", "1": "Basque", "2": "Breton", "3": "Catalan", "4": "Chinese_China", "5": "Chinese_Hongkong", "6": "Chinese_Taiwan", "7": "Chuvash", "8": "Czech", "9": "Dhivehi", "10": "Dutch", "11": "English", "12": "Esperanto", "13": "Estonian", "14": "French", "15": "Frisian", "16": "Georgian", "17": "German", "18": "Greek", "19": "Hakha_Chin", "20": "Indonesian", "21": "Interlingua", "22": "Italian", "23": "Japanese", "24": "Kabyle", "25": "Kinyarwanda", "26": "Kyrgyz", "27": "Latvian", "28": "Maltese", "29": "Mangolian", "30": "Persian", "31": "Polish", "32": "Portuguese", "33": "Romanian", "34": "Romansh_Sursilvan", "35": "Russian", "36": "Sakha", "37": "Slovenian", "38": "Spanish", "39": "Swedish", "40": "Tamil", "41": "Tatar", "42": "Turkish", "43": "Ukranian", "44": "Welsh"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2076244, "num_examples": 22194}, {"name": "test", "num_bytes": 559808, "num_examples": 5963}], "download_size": 1604084, "dataset_size": 2636052}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-09-25T11:19:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "common_languages_preprocessed"
More Information needed
|
[
"# Dataset Card for \"common_languages_preprocessed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"common_languages_preprocessed\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"common_languages_preprocessed\"\n\nMore Information needed"
] |
c265bed939e07defa8388298a98e1547a182b673
|
# Dataset of sakurakouji_kinako/桜小路きな子/사쿠라코지키나코 (Love Live! Superstar!!)
This is the dataset of sakurakouji_kinako/桜小路きな子/사쿠라코지키나코 (Love Live! Superstar!!), containing 179 images and their tags.
The core tags of this character are `bangs, brown_hair, long_hair, green_eyes, twintails, low_twintails, braid, blunt_bangs, ribbon, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 179 | 238.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurakouji_kinako_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 179 | 117.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurakouji_kinako_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 415 | 258.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurakouji_kinako_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 179 | 201.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurakouji_kinako_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 415 | 400.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakurakouji_kinako_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakurakouji_kinako_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, long_sleeves, looking_at_viewer, neck_ribbon, solo, yuigaoka_school_uniform, smile, black_pantyhose, open_mouth, red_ribbon, blush, pinafore_dress, brown_footwear, full_body, loafers, collared_shirt, white_background |
| 1 | 7 |  |  |  |  |  | 1girl, blue_jacket, blush, grey_dress, long_sleeves, looking_at_viewer, open_jacket, solo, yuigaoka_school_uniform, neck_ribbon, pinafore_dress, red_ribbon, white_background, black_pantyhose, petals, smile, white_shirt, closed_mouth, collared_shirt, french_braid, hair_ribbon, simple_background, upper_body |
| 2 | 7 |  |  |  |  |  | 1girl, beret, looking_at_viewer, solo, blue_headwear, short_sleeves, smile, birthday, dress, jacket, blush, collarbone, open_mouth, pink_gloves, white_background |
| 3 | 11 |  |  |  |  |  | 1girl, solo, fingerless_gloves, looking_at_viewer, smile, white_gloves, sleeveless, blush, open_mouth, arm_up, armpits, bow, clothes_around_waist, skirt, confetti, medium_breasts, green_ribbon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | grey_dress | long_sleeves | looking_at_viewer | neck_ribbon | solo | yuigaoka_school_uniform | smile | black_pantyhose | open_mouth | red_ribbon | blush | pinafore_dress | brown_footwear | full_body | loafers | collared_shirt | white_background | open_jacket | petals | white_shirt | closed_mouth | french_braid | hair_ribbon | simple_background | upper_body | beret | blue_headwear | short_sleeves | birthday | dress | jacket | collarbone | pink_gloves | fingerless_gloves | white_gloves | sleeveless | arm_up | armpits | bow | clothes_around_waist | skirt | confetti | medium_breasts | green_ribbon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:---------------|:--------------------|:--------------|:-------|:--------------------------|:--------|:------------------|:-------------|:-------------|:--------|:-----------------|:-----------------|:------------|:----------|:-----------------|:-------------------|:--------------|:---------|:--------------|:---------------|:---------------|:--------------|:--------------------|:-------------|:--------|:----------------|:----------------|:-----------|:--------|:---------|:-------------|:--------------|:--------------------|:---------------|:-------------|:---------|:----------|:------|:-----------------------|:--------|:-----------|:-----------------|:---------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | | X | | X | | X | | X | | X | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | | | X | | X | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
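As a hedged illustration of such mining (not part of the original card), the `LocalSource` from the loading snippet above can be filtered by a tag combination taken from cluster #0 of the tables; the exact structure of `item.meta['tags']` is an assumption here:

```python
from waifuc.source import LocalSource

dataset_dir = 'dataset_dir'  # directory extracted in the loading snippet above
source = LocalSource(dataset_dir)
wanted = {'yuigaoka_school_uniform', 'grey_dress', 'blue_jacket'}  # tags from cluster #0
for item in source:
    tags = set(item.meta['tags'])  # yields tag names whether tags is a list or a tag->score dict
    if wanted <= tags:
        # keep images matching the full tag combination
        item.image.save('cluster0_' + item.meta['filename'])
```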
|
CyberHarem/sakurakouji_kinako_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T11:23:34+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T05:36:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of sakurakouji\_kinako/桜小路きな子/사쿠라코지키나코 (Love Live! Superstar!!)
=======================================================================
This is the dataset of sakurakouji\_kinako/桜小路きな子/사쿠라코지키나코 (Love Live! Superstar!!), containing 179 images and their tags.
The core tags of this character are 'bangs, brown\_hair, long\_hair, green\_eyes, twintails, low\_twintails, braid, blunt\_bangs, ribbon, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
59abb07c4007221955a8bee8ec3b951d5140a166
|
# Dataset Card for "train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jay401521/train
|
[
"region:us"
] |
2023-09-25T11:24:38+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "domain", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "POS", "1": "NEG", "2": "NEU"}}}}, {"name": "rank", "dtype": "string"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 2334490, "num_examples": 27057}, {"name": "train", "num_bytes": 16412771, "num_examples": 189431}, {"name": "temp", "num_bytes": 9034358, "num_examples": 105891}, {"name": "twolabels", "num_bytes": 6014247.333333333, "num_examples": 70594}, {"name": "fewshot", "num_bytes": 2910, "num_examples": 33}, {"name": "test", "num_bytes": 2558224, "num_examples": 30021}], "download_size": 19277882, "dataset_size": 36370585.333333336}}
|
2023-10-20T08:51:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train"
More Information needed
|
[
"# Dataset Card for \"train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train\"\n\nMore Information needed"
] |
0aaff3ae03d8bfcbb49885347145f7e710cc7793
|
# Dataset Card for "spoofing_detection_data_proccessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
bvallegc/spoofing_detection_data_proccessed
|
[
"region:us"
] |
2023-09-25T11:25:21+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "speaker_id", "dtype": "string"}, {"name": "system_id", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bonafide", "1": "spoof"}}}}, {"name": "input_values", "sequence": "float32"}, {"name": "attention_mask", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 10001392270, "num_examples": 22842}, {"name": "test", "num_bytes": 1128734898, "num_examples": 2538}], "download_size": 4762954824, "dataset_size": 11130127168}}
|
2023-09-25T11:29:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spoofing_detection_data_proccessed"
More Information needed
|
[
"# Dataset Card for \"spoofing_detection_data_proccessed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spoofing_detection_data_proccessed\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spoofing_detection_data_proccessed\"\n\nMore Information needed"
] |
b1f3f4693a1803230dd234ef5238c0ce871ea627
|
# Dataset of miyashita_ai/宮下愛/미야시타아이 (Love Live! School Idol Festival ALL STARS)
This is the dataset of miyashita_ai/宮下愛/미야시타아이 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `blonde_hair, bangs, yellow_eyes, breasts, sidelocks, orange_eyes, medium_hair, braid, hair_ornament, ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 804.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 382.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1322 | 903.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 678.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1322 | 1.39 GiB | [Download](https://huggingface.co/datasets/CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, off-shoulder_shirt, solo, cleavage, necklace, smile, x_hair_ornament, medium_breasts, midriff, orange_shirt, blush, bracelet, collarbone, gyaru, looking_at_viewer, navel, one_eye_closed, star_(symbol), cellphone, denim, holding_phone, upper_body |
| 1 | 14 |  |  |  |  |  | 1girl, solo, blush, jacket_around_waist, looking_at_viewer, midriff, navel, short_sleeves, smile, white_shirt, cleavage, collarbone, medium_breasts, crop_top, pants, wristband, open_mouth, simple_background, white_background |
| 2 | 35 |  |  |  |  |  | 1girl, nijigasaki_academy_school_uniform, solo, looking_at_viewer, collared_shirt, brown_cardigan, plaid_skirt, short_sleeves, gyaru, smile, summer_uniform, neck_ribbon, pleated_skirt, jacket_around_waist, simple_background, sweater_around_waist, white_shirt, blue_skirt, short_ponytail, white_background, blush, medium_breasts, flower |
| 3 | 7 |  |  |  |  |  | 1girl, brown_cardigan, school_uniform, solo, upper_body, blush, collared_shirt, smile, white_background, white_shirt, looking_at_viewer, red_ribbon, simple_background, closed_mouth, large_breasts, neck_ribbon |
| 4 | 26 |  |  |  |  |  | 1girl, solo, hair_flower, cleavage, smile, looking_at_viewer, navel, midriff, medium_breasts, short_shorts, star_(symbol), bracelet, collarbone, one_eye_closed, black_shorts, boots, side_ponytail, blush, gyaru, orange_nails, black_footwear, heart_tattoo, necklace, ring, underwear |
| 5 | 8 |  |  |  |  |  | 1girl, solo, english_text, looking_at_viewer, smile, character_name, happy_birthday, blush, dress, dated, hat, jewelry, side_braid, white_gloves, medium_breasts, side_ponytail |
| 6 | 5 |  |  |  |  |  | 1girl, beanie, blue_headwear, long_sleeves, looking_at_viewer, solo, blush, shirt, long_hair, off_shoulder, white_background, collarbone, green_pants, grin, simple_background, sitting, sneakers, tank_top |
| 7 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, dress, blush, white_gloves, heart, open_mouth |
| 8 | 61 |  |  |  |  |  | 1girl, solo, tank_top, midriff, cheerleader, orange_skirt, french_braid, side_ponytail, wristband, collarbone, gyaru, miniskirt, pom_pom_(cheerleading), heart_necklace, navel, hairclip, black_belt, thighhighs, smile, hair_tie, off_shoulder, crop_top, pendant, asymmetrical_legwear |
| 9 | 7 |  |  |  |  |  | 1girl, black_gloves, fingerless_gloves, looking_at_viewer, solo, x_hair_ornament, black_choker, cropped_jacket, red_nails, crop_top, heart_necklace, midriff, navel, one_eye_closed, short_ponytail, belt, character_name, collarbone, french_braid, hairclip, miniskirt, nail_polish, ribbon, black_skirt, blush, bracelet, earrings, fishnet_top, grey_tank_top, grin, hair_bow, layered_skirt, side_braid |
| 10 | 15 |  |  |  |  |  | 1girl, solo, jacket, bracelet, looking_at_viewer, skirt, twintails, demon_horns, demon_tail, nail_polish, fake_horns, fishnet_pantyhose, necklace, smile, black_choker, black_nails, black_necktie, boots, gyaru, holding_weapon, one_eye_closed, white_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | off-shoulder_shirt | solo | cleavage | necklace | smile | x_hair_ornament | medium_breasts | midriff | orange_shirt | blush | bracelet | collarbone | gyaru | looking_at_viewer | navel | one_eye_closed | star_(symbol) | cellphone | denim | holding_phone | upper_body | jacket_around_waist | short_sleeves | white_shirt | crop_top | pants | wristband | open_mouth | simple_background | white_background | nijigasaki_academy_school_uniform | collared_shirt | brown_cardigan | plaid_skirt | summer_uniform | neck_ribbon | pleated_skirt | sweater_around_waist | blue_skirt | short_ponytail | flower | school_uniform | red_ribbon | closed_mouth | large_breasts | hair_flower | short_shorts | black_shorts | boots | side_ponytail | orange_nails | black_footwear | heart_tattoo | ring | underwear | english_text | character_name | happy_birthday | dress | dated | hat | jewelry | side_braid | white_gloves | beanie | blue_headwear | long_sleeves | shirt | long_hair | off_shoulder | green_pants | grin | sitting | sneakers | tank_top | heart | cheerleader | orange_skirt | french_braid | miniskirt | pom_pom_(cheerleading) | heart_necklace | hairclip | black_belt | thighhighs | hair_tie | pendant | asymmetrical_legwear | black_gloves | fingerless_gloves | black_choker | cropped_jacket | red_nails | belt | nail_polish | ribbon | black_skirt | earrings | fishnet_top | grey_tank_top | hair_bow | layered_skirt | jacket | skirt | twintails | demon_horns | demon_tail | fake_horns | fishnet_pantyhose | black_nails | black_necktie | holding_weapon |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------------|:-------|:-----------|:-----------|:--------|:------------------|:-----------------|:----------|:---------------|:--------|:-----------|:-------------|:--------|:--------------------|:--------|:-----------------|:----------------|:------------|:--------|:----------------|:-------------|:----------------------|:----------------|:--------------|:-----------|:--------|:------------|:-------------|:--------------------|:-------------------|:------------------------------------|:-----------------|:-----------------|:--------------|:-----------------|:--------------|:----------------|:-----------------------|:-------------|:-----------------|:---------|:-----------------|:-------------|:---------------|:----------------|:--------------|:---------------|:---------------|:--------|:----------------|:---------------|:-----------------|:---------------|:-------|:------------|:---------------|:-----------------|:-----------------|:--------|:--------|:------|:----------|:-------------|:---------------|:---------|:----------------|:---------------|:--------|:------------|:---------------|:--------------|:-------|:----------|:-----------|:-----------|:--------|:--------------|:---------------|:---------------|:------------|:-------------------------|:-----------------|:-----------|:-------------|:-------------|:-----------|:----------|:-----------------------|:---------------|:--------------------|:---------------|:-----------------|:------------|:-------|:--------------|:---------|:--------------|:-----------|:--------------|:----------------|:-----------|:----------------|:---------|:--------|:------------|:--------------|:-------------|:-------------|:--------------------|:--------------|:----------------|:-----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | | X | X | | X | | X | X | | X | | X | | X | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 35 |  |  |  |  |  | X | | X | | | X | | X | | | X | | | X | X | | | | | | | | X | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | | | X | | | | | X | | | | X | | | | | | | X | | | X | | | | | X | X | | X | X | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 26 |  |  |  |  |  | X | | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | X | | | X | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | X | | | | | | | | X | | X | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | X | | | X | | | | | X | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 61 |  |  |  |  |  | X | | X | | | X | | | X | | | | X | X | | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 7 |  |  |  |  |  | X | | X | | | | X | | X | | X | X | X | | X | X | X | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | X | | | | | | | X | X | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 10 | 15 |  |  |  |  |  | X | | X | | X | X | | | | | | X | | X | X | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/miyashita_ai_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T11:36:32+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T04:44:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of miyashita\_ai/宮下愛/미야시타아이 (Love Live! School Idol Festival ALL STARS)
===============================================================================
This is the dataset of miyashita\_ai/宮下愛/미야시타아이 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, bangs, yellow\_eyes, breasts, sidelocks, orange\_eyes, medium\_hair, braid, hair\_ornament, ponytail', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2389abea041d2196bcc91076acebcf111b0ec322
|
# Dataset Card for "processed-old-with-embeddings"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Chunks of about 256 words (split by whitespace) and their embeddings, computed with the pretrained spacy model ["de_dep_news_trf"](https://github.com/explosion/spacy-models/releases/tag/de_dep_news_trf-3.6.1).
The splits are created with respect to sentence boundaries parsed with the same model; sentences are concatenated if the result does not exceed max_words = 256, and therefore the chunk length varies.
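A minimal sketch of the chunking procedure described above (an illustration, not the authors' original script; the greedy sentence-packing strategy is an assumption consistent with the summary):

```python
import spacy

# Requires: pip install spacy && python -m spacy download de_dep_news_trf
nlp = spacy.load("de_dep_news_trf")

def chunk_text(text: str, max_words: int = 256) -> list:
    """Greedily pack whole sentences into chunks of at most max_words words."""
    doc = nlp(text)
    chunks, current, n_words = [], [], 0
    for sent in doc.sents:
        sent_words = len(sent.text.split())  # "words" = whitespace-split tokens, as in the card
        if current and n_words + sent_words > max_words:
            chunks.append(" ".join(current))
            current, n_words = [], 0
        current.append(sent.text)
        n_words += sent_words  # a single over-long sentence still becomes its own chunk
    if current:
        chunks.append(" ".join(current))
    return chunks
```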
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset contains texts from the legal domain in the German language (German court decisions).
## Dataset Structure
[More Information Needed]
### Data Instances
{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg',
'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\n\n Der Kläger trägt (...)',
'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]}
### Data Fields
{
'slug': data['slug'],
'text_chunk': text,
'embedding': embedding
}
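A hedged loading sketch for the lighter `small` configuration declared in this repo's metadata (the configuration and field names are taken from that metadata, not from additional documentation):

```python
from datasets import load_dataset

# "small" is listed alongside "default" in this repo's configs
ds = load_dataset("pmpc/processed-old-with-embeddings", "small", split="train")
print(ds[0]["slug"], len(ds[0]["embedding"]))
```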
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
This dataset contains texts from the legal domain in the German language (German court decisions).
### Citation Information
@inproceedings{10.1145/3383583.3398616,
author = {Ostendorff, Malte and Blume, Till and Ostendorff, Saskia},
title = {Towards an Open Platform for Legal Information},
year = {2020},
isbn = {9781450375856},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3383583.3398616},
doi = {10.1145/3383583.3398616},
booktitle = {Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020},
pages = {385–388},
numpages = {4},
keywords = {open data, open source, legal information system, legal data},
location = {Virtual Event, China},
series = {JCDL '20}
}
|
pmpc/processed-old-with-embeddings
|
[
"region:us"
] |
2023-09-25T11:47:30+00:00
|
{"dataset_info": [{"config_name": "default", "features": [{"name": "slug", "dtype": "string"}, {"name": "text_chunk", "dtype": "string"}, {"name": "embedding", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 17448677826, "num_examples": 3655376}], "download_size": 14805980593, "dataset_size": 17448677826}, {"config_name": "small", "features": [{"name": "slug", "dtype": "string"}, {"name": "text_chunk", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 475656222.6698008, "num_examples": 99531}, {"name": "test", "num_bytes": 23459991.330199156, "num_examples": 4909}], "download_size": 488406448, "dataset_size": 499116214.0}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "small", "data_files": [{"split": "train", "path": "small/train-*"}, {"split": "test", "path": "small/test-*"}]}]}
|
2023-09-26T09:56:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed-old-with-embeddings"
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Chunks of about 256 words (split by whitespace) and their embeddings, computed with the pretrained spacy model ["de_dep_news_trf"] (URL
The splits are created with respect to sentence boundaries parsed with the same model; sentences are concatenated if the result does not exceed max_words = 256, and therefore the chunk length varies.
### Supported Tasks and Leaderboards
### Languages
This dataset contains texts from the legal domain in the German language (German court decisions)
## Dataset Structure
### Data Instances
{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg',
'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\n\n Der Kläger trägt (...)',
'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]}
### Data Fields
{
'slug': data['slug'],
'text_chunk': text,
'embedding': embedding
}
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
This dataset contains texts from the legal domain in the German language (German court decisions)
@inproceedings{10.1145/3383583.3398616,
author = {Ostendorff, Malte and Blume, Till and Ostendorff, Saskia},
title = {Towards an Open Platform for Legal Information},
year = {2020},
isbn = {9781450375856},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {URL
doi = {10.1145/3383583.3398616},
booktitle = {Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020},
pages = {385–388},
numpages = {4},
keywords = {open data, open source, legal information system, legal data},
location = {Virtual Event, China},
series = {JCDL '20}
}
|
[
"# Dataset Card for \"processed-old-with-embeddings\"",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nChunks of about 256 words split by whitespace and their embeddings computed with the pretrained spacy model [\"de_dep_news_trf\"] (URL\nThe splits are created with respect to sentence boundaries parsed with the same model, sentences are concatenated if the result does not exceed max_words = 256, therefore the chunk length varies.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThis dataset contains texts from the legal domain in German language. (German court decisions)",
"## Dataset Structure",
"### Data Instances\n\n{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg', \n'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\\n\\n Der Kläger trägt (...)',\n'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]]}",
"### Data Fields\n{\n 'slug': data['slug'],\n 'text_chunk': text,\n 'embedding': embedding\n}",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nThis dataset contains texts from the legal domain in German language. (German court decisions)\n\n\n\n@inproceedings{10.1145/3383583.3398616,\nauthor = {Ostendorff, Malte and Blume, Till and Ostendorff, Saskia},\ntitle = {Towards an Open Platform for Legal Information},\nyear = {2020},\nisbn = {9781450375856},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {URL\ndoi = {10.1145/3383583.3398616},\nbooktitle = {Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020},\npages = {385–388},\nnumpages = {4},\nkeywords = {open data, open source, legal information system, legal data},\nlocation = {Virtual Event, China},\nseries = {JCDL '20}\n}"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed-old-with-embeddings\"",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nChunks of about 256 words split by whitespace and their embeddings computed with the pretrained spacy model [\"de_dep_news_trf\"] (URL\nThe splits are created with respect to sentence boundaries parsed with the same model, sentences are concatenated if the result does not exceed max_words = 256, therefore the chunk length varies.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThis dataset contains texts from the legal domain in German language. (German court decisions)",
"## Dataset Structure",
"### Data Instances\n\n{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg', \n'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\\n\\n Der Kläger trägt (...)',\n'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]]}",
"### Data Fields\n{\n 'slug': data['slug'],\n 'text_chunk': text,\n 'embedding': embedding\n}",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nThis dataset contains texts from the legal domain in German language. (German court decisions)\n\n\n\n@inproceedings{10.1145/3383583.3398616,\nauthor = {Ostendorff, Malte and Blume, Till and Ostendorff, Saskia},\ntitle = {Towards an Open Platform for Legal Information},\nyear = {2020},\nisbn = {9781450375856},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {URL\ndoi = {10.1145/3383583.3398616},\nbooktitle = {Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020},\npages = {385–388},\nnumpages = {4},\nkeywords = {open data, open source, legal information system, legal data},\nlocation = {Virtual Event, China},\nseries = {JCDL '20}\n}"
] |
[
6,
17,
24,
95,
10,
24,
6,
151,
37,
5,
5,
7,
4,
10,
230
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed-old-with-embeddings\"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nChunks of about 256 words split by whitespace and their embeddings computed with the pretrained spacy model [\"de_dep_news_trf\"] (URL\nThe splits are created with respect to sentence boundaries parsed with the same model, sentences are concatenated if the result does not exceed max_words = 256, therefore the chunk length varies.### Supported Tasks and Leaderboards### Languages\n\nThis dataset contains texts from the legal domain in German language. (German court decisions)## Dataset Structure### Data Instances\n\n{'slug': 'ag-pinneberg-2003-12-19-68-ii-9302-weg', \n'text_chunk': 'Die Berufung des Klägers gegen das am 23. April 2002 verkündete Urteil der 1. Zivilkammer des Landgerichts Wuppertal wird zurückgewiesen.\\n\\n Der Kläger trägt (...)',\n'embedding': [-0.055155396461486816, -0.3904547095298767, -0.0033536632545292377, 0.8048776984214783, 0.30156993865966797, 0.5924882888793945, (...)]]}### Data Fields\n{\n 'slug': data['slug'],\n 'text_chunk': text,\n 'embedding': embedding\n}### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization"
] |
40fa8431a5bfb76f3c6f14d480bb9110dd3c789d
|
# Dataset Card for "beans_all_preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fmagot01/beans_all_preprocessed
|
[
"region:us"
] |
2023-09-25T11:52:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image_file_path", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "angular_leaf_spot", "1": "bean_rust", "2": "healthy"}}}}], "splits": [{"name": "train", "num_bytes": 143754816.662, "num_examples": 1034}, {"name": "validation", "num_bytes": 18514596.0, "num_examples": 133}, {"name": "test", "num_bytes": 17719412.0, "num_examples": 128}], "download_size": 179978089, "dataset_size": 179988824.662}}
|
2023-09-25T11:52:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "beans_all_preprocessed"
More Information needed
|
[
"# Dataset Card for \"beans_all_preprocessed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"beans_all_preprocessed\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"beans_all_preprocessed\"\n\nMore Information needed"
] |
647944181a71baa54a3001c2033ef0a153ebcdfb
|
(Placeholder)
# DNSMOS-TTS
DNSMOS-TTS contains DNSMOS Scores for common TTS datasets
This repo uses Lhotse to manage datasets.
For example, to load LJ-Speech:
```py
from lhotse import CutSet
for cut in CutSet.from_webdataset("pipe:curl -s -L https://huggingface.co/datasets/Gatozu35/DNSMOS-TTS/resolve/main/ljspeech_mos.tar"):
wav = cut.load_audio()
mos = cut.supervisions[0].custom["mos"]
...
```
If you don't want to use lhotse, I have also uploaded a CSV of the scores for each id.
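For the CSV route, a hedged sketch (the card does not name the CSV file, so the filename below is a guess and may need adjusting):

```py
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="Gatozu35/DNSMOS-TTS",
    repo_type="dataset",
    filename="ljspeech_mos.csv",  # hypothetical filename; check the repo's file list
)
scores = pd.read_csv(csv_path)
print(scores.head())
```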
|
Gatozu35/DNSMOS-TTS
|
[
"region:us"
] |
2023-09-25T11:59:02+00:00
|
{"pretty_name": "DNSMOS Score for common TTS datasets"}
|
2024-01-31T19:28:40+00:00
|
[] |
[] |
TAGS
#region-us
|
(Placeholder)
# DNSMOS-TTS
DNSMOS-TTS contains DNSMOS Scores for common TTS datasets
This repo uses Lhotse to manage datasets.
For example, to load LJ-Speech:
If you don't want to use lhotse, I have also uploaded a CSV of the scores for each id.
|
[
"# DNSMOS-TTS\n\nDNSMOS-TTS contains DNSMOS Scores for common TTS datasets\n\nThis repo uses Lhotse to manage datasets.\n\nFor example, to load LJ-Speech:\n\n\nIf you don't want to use lhotse, I have also uploaded a csv of the scores for each id."
] |
[
"TAGS\n#region-us \n",
"# DNSMOS-TTS\n\nDNSMOS-TTS contains DNSMOS Scores for common TTS datasets\n\nThis repo uses Lhotse to manage datasets.\n\nFor example, to load LJ-Speech:\n\n\nIf you don't want to use lhotse, I have also uploaded a csv of the scores for each id."
] |
[
6,
77
] |
[
"passage: TAGS\n#region-us \n# DNSMOS-TTS\n\nDNSMOS-TTS contains DNSMOS Scores for common TTS datasets\n\nThis repo uses Lhotse to manage datasets.\n\nFor example, to load LJ-Speech:\n\n\nIf you don't want to use lhotse, I have also uploaded a csv of the scores for each id."
] |
f654e083a8febd230bd56d170148786a7f759759
|
# Dataset Card for Dataset Name
## Dataset Summary
The Genre-6 dataset is an English dataset based on Kindletrends (UK & US). It contains more than 20k books and their associated categories, with ready-made binary classification and multilabel classification labels.
## Dataset Structure
### Data Instances
`` {"text": "...", "categories": "Engineering & Transportation;Science & Math", "fiction": "non-fiction", "split1": ['Science & Math'], "split2" : ['Engineering & Transportation', 'Science & Math'], "split3": ['Science & Math']} ``
### Data Fields
- text: Kindletrends text
- categories: Kindletrends categories (1 to 2 categories per book)
- fiction: binary label for fiction and non-fiction books
- splits 1,2,3: multilabel for different subsets of the categories
### Data Splits
The dataset contains train (80%), validation (10%) and test (10%) splits.
The splits for multilabels are as follows:
- split1: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences','Philosophy','Science & Math'.
- split2: 'Biology & Nature & Biological Sciences','Computer Science', 'Engineering & Transportation','Fantasy','Medicine & Health Sciences','Science & Math'.
- split3: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences', 'Poetry', 'Politics & Social Sciences', 'Science & Math'.
More splits can be generated from the field "categories".
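Since the `categories` field uses `;` as a separator (see the instance above), a custom split can be derived along these lines (a hedged sketch; the kept category set and the new field name are illustrative, not part of the dataset):

```python
from datasets import load_dataset

ds = load_dataset("TurkuNLP/genre-6", split="train")
keep = {"Science & Math", "Computer Science"}  # illustrative subset of the categories

def to_labels(example):
    cats = example["categories"].split(";")
    example["my_split"] = [c for c in cats if c in keep]  # "my_split" is a made-up field name
    return example

ds = ds.map(to_labels)
```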
### Source Data
[Kindletrends](https://kindletrends.com/categories/)
|
TurkuNLP/genre-6
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] |
2023-09-25T12:08:04+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"]}
|
2023-09-26T05:42:00+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us
|
# Dataset Card for Dataset Name
## Dataset Summary
The Genre-6 dataset is an English dataset based on Kindletrends (UK & US). It contains more than 20k books and their associated categories, with ready-made binary classification and multilabel classification labels.
## Dataset Structure
### Data Instances
'' {"text": "...", "categories": "Engineering & Transportation;Science & Math", "fiction": "non-fiction", "split1": ['Science & Math'], "split2" : ['Engineering & Transportation', 'Science & Math'], "split3": ['Science & Math']} ''
### Data Fields
- text: Kindletrends text
- categories: Kindletrends categories (1 to 2 categories per book)
- fiction: binary label for fiction and non-fiction books
- splits 1,2,3: multilabel for different subsets of the categories
### Data Splits
The dataset contains train (80%), validation (10%) and test (10%) splits.
The splits for multilabels are as follows:
- split1: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences','Philosophy','Science & Math'.
- split2: 'Biology & Nature & Biological Sciences','Computer Science', 'Engineering & Transportation','Fantasy','Medicine & Health Sciences','Science & Math'.
- split3: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences', 'Poetry', 'Politics & Social Sciences', 'Science & Math'.
More splits can be generated from the field "categories".
### Source Data
Kindletrends
|
[
"# Dataset Card for Dataset Name",
"## Dataset Summary\n\nGenre-6 dataset is an English dataset based on Kindletrends (UK & US). It contains more than 20k books and associated categories with ready-made binary classification and multilabel classification labels.",
"## Dataset Structure",
"### Data Instances\n\n'' {\"text\": \"...\", \"categories\": \"Engineering & Transportation;Science & Math\", \"fiction\": \"non-fiction\", \"split1\": ['Science & Math'], \"split2\" : ['Engineering & Transportation', 'Science & Math'], \"split3\": ['Science & Math']} ''",
"### Data Fields\n\n- text: Kindletrends text\n- categories: Kidletrends categories (1 to 2 categories per book)\n- fiction: binary label for fiction and non-fiction books\n- splits 1,2,3: multilabel for different subsets of the categories",
"### Data Splits\n\nThe dataset contains train (80%), validation (10%) and test (10%) splits.\n\nThe splits for multilabels are following:\n- split1: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences','Philosophy','Science & Math'.\n- split2: 'Biology & Nature & Biological Sciences','Computer Science', 'Engineering & Transportation','Fantasy','Medicine & Health Sciences','Science & Math'.\n- split3: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences', 'Poetry', 'Politics & Social Sciences', 'Science & Math'.\n\nMore splits can be generated from the field \"categories\".",
"### Source Data\n\nKindletrends"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Summary\n\nGenre-6 dataset is an English dataset based on Kindletrends (UK & US). It contains more than 20k books and associated categories with ready-made binary classification and multilabel classification labels.",
"## Dataset Structure",
"### Data Instances\n\n'' {\"text\": \"...\", \"categories\": \"Engineering & Transportation;Science & Math\", \"fiction\": \"non-fiction\", \"split1\": ['Science & Math'], \"split2\" : ['Engineering & Transportation', 'Science & Math'], \"split3\": ['Science & Math']} ''",
"### Data Fields\n\n- text: Kindletrends text\n- categories: Kidletrends categories (1 to 2 categories per book)\n- fiction: binary label for fiction and non-fiction books\n- splits 1,2,3: multilabel for different subsets of the categories",
"### Data Splits\n\nThe dataset contains train (80%), validation (10%) and test (10%) splits.\n\nThe splits for multilabels are following:\n- split1: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences','Philosophy','Science & Math'.\n- split2: 'Biology & Nature & Biological Sciences','Computer Science', 'Engineering & Transportation','Fantasy','Medicine & Health Sciences','Science & Math'.\n- split3: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences', 'Poetry', 'Politics & Social Sciences', 'Science & Math'.\n\nMore splits can be generated from the field \"categories\".",
"### Source Data\n\nKindletrends"
] |
[
33,
8,
53,
6,
95,
62,
206,
7
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n# Dataset Card for Dataset Name## Dataset Summary\n\nGenre-6 dataset is an English dataset based on Kindletrends (UK & US). It contains more than 20k books and associated categories with ready-made binary classification and multilabel classification labels.## Dataset Structure### Data Instances\n\n'' {\"text\": \"...\", \"categories\": \"Engineering & Transportation;Science & Math\", \"fiction\": \"non-fiction\", \"split1\": ['Science & Math'], \"split2\" : ['Engineering & Transportation', 'Science & Math'], \"split3\": ['Science & Math']} ''### Data Fields\n\n- text: Kindletrends text\n- categories: Kidletrends categories (1 to 2 categories per book)\n- fiction: binary label for fiction and non-fiction books\n- splits 1,2,3: multilabel for different subsets of the categories### Data Splits\n\nThe dataset contains train (80%), validation (10%) and test (10%) splits.\n\nThe splits for multilabels are following:\n- split1: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences','Philosophy','Science & Math'.\n- split2: 'Biology & Nature & Biological Sciences','Computer Science', 'Engineering & Transportation','Fantasy','Medicine & Health Sciences','Science & Math'.\n- split3: 'Biology & Nature & Biological Sciences','Computer Science', 'Fantasy','Medicine & Health Sciences', 'Poetry', 'Politics & Social Sciences', 'Science & Math'.\n\nMore splits can be generated from the field \"categories\".### Source Data\n\nKindletrends"
] |
756f73c8a52e2b0fbdc8f42569e1e8c6f3865a49
|
# Pirá: A Bilingual Portuguese-English Dataset for Question-Answering about the Ocean, the Brazilian coast, and climate change
Pirá is a crowdsourced reading comprehension dataset on the ocean, the Brazilian coast, and climate change.
QA sets are presented in both Portuguese and English, together with their corresponding textual context.
The dataset also contains human and automatic paraphrases for questions and answers, as well as a number of qualitative assessments.
The original paper was published at CIKM'21 and can be found [here](https://dl.acm.org/doi/pdf/10.1145/3459637.3482012).
As a subsequent project, we have produced a curated version of the dataset, which we refer to as Pirá 2.0.
In this step, we have also defined a number of benchmarks and reported the corresponding baselines.
This is the version that we make available at HuggingFace.
Pirá 2.0's preprint is available on [arXiv](https://arxiv.org/abs/2309.10945).
Pirá is, to the best of our knowledge, the first QA dataset with supporting texts in Portuguese, and, perhaps more importantly,
the first bilingual QA dataset that includes Portuguese as one of its languages.
Pirá is also the first QA dataset in Portuguese with unanswerable questions so as to allow the study of answer triggering.
Finally, it is the first QA dataset that tackles scientific knowledge about the ocean, climate change, and marine biodiversity.
More information on the methodology, dataset versions, and benchmarks can be found on the project's [Github page](https://github.com/C4AI/Pira/).
There you can also find the multiple-choice version of Pirá.
# Dataset
The dataset is split into train, validation, and test sets.
| Split | Size | #QAs |
|---|---|---|
| Training | 80% | 1806 |
| Validation | 10% | 225 |
| Test | 10% | 227 |
| Full dataset | 100% | 2258 |
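For reference, the default configuration with these splits can be loaded in the same way as the other configurations shown further below:

```
from datasets import load_dataset

pira = load_dataset("paulopirozelli/pira")  # default config: train, validation, test
```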
Below is an example of a question-answer set from Pirá:
```
{
'id_qa': 'B2142',
'corpus": 2,
'question_en_origin': 'What are the proportion of men and women employed in the fishery sector worlwide?',
'question_pt_origin': 'Qual é a proporção de homens e mulheres empregados no setor pesqueiro em todo o mundo?',
'question_en_paraphase': 'Which share of the fishery sector workers of the world are women?',
'question_pt_paraphase': 'Qual parcela dos trabalhadores do setor da pesca no mundo são mulheres?',
'answer_en_origin': '85 per cent men and 15 per cent women.',
'answer_pt_origin': '85 por cento homens e 15 por cento mulheres.',
'answer_en_validate': 'It is estimated that more than fifteen per cent of the fishing sector workers are women.',
'answer_pt_validate': 'Estima-se que mais de quinze por cento dos trabalhadores do setor da pesca são mulheres.',
'eid_article_scopus': '',
'text_excerpts_un_reports': 'Distribution of ocean benefits and disbenefits Developments in employment and income from fisheries and aquaculture The global harvest of marine capture fisheries has expanded rapidly since the early 1950s and is currently estimated to be about 80 million tons a year. That harvest is estimated to have a first (gross) value on the order of 113 billion dollars. Although it is difficult to produce accurate employment statistics, estimates using a fairly narrow definition of employment have put the figure of those employed in fisheries and aquaculture at 58.3 million people (4.4 per cent of the estimated total of economically active people), of which 84 per cent are in Asia and 10 per cent in Africa. Women are estimated to account for more than 15 per cent of people employed in the fishery sector. Other estimates, probably taking into account a wider definition of employment, suggest that capture fisheries provide direct and indirect employment for at least 120 million persons worldwide. Small-scale fisheries employ more than 90 per cent of the world’s capture fishermen and fish workers, about half of whom are women. When all dependants of those taking full- or part-time employment in the full value chain and support industries (boatbuilding, gear construction, etc.) of fisheries and aquaculture are included, one estimate concludes that between 660 and 820 million persons have some economic or livelihood dependence on fish capture and culture and the subsequent direct value chain. No sound information appears to be available on the levels of death and injury of those engaged in capture fishing or aquaculture, but capture fishing is commonly characterized as a dangerous occupation. Over time, a striking shift has occurred in the operation and location of capture fisheries. In the 1950s, capture fisheries were largely undertaken by developed fishing States. Since then, developing countries have increased their share. As a broad illustration, in the 1950s, the southern hemisphere accounted for no more than 8 per cent of landed values. By the last decade, the southern hemisphere’s share had risen to 20 per cent. In 2012, international trade represented 37 per cent of the total fish production in value, with a total export value of 129 billion dollars, of which 70 billion dollars (58 per cent) was exports by developing countries. Aquaculture is responsible for the bulk of the production of seaweeds. Worldwide, reports show that 24.9 million tons was produced in 2012, valued at about 6 billion dollars. In addition, about 1 million tons of wild seaweed were harvested. Few data were found on international trade in seaweeds, but their culture is concentrated in countries where consumption of seaweeds is high.',
'question_generic': false,
'answer_in_text': true,
'answer_difficulty': 1,
'question_meaningful': 5,
'answer_equivalent': 5,
'question_type': 'None of the above'
}
```
# Automatic Paraphrases
As we have only generated automatic paraphrases for questions and answers in the train set, they had to be saved in a separate dataset configuration.
To download the automatic paraphrases, just run:
```
from datasets import load_dataset

paraphrases = load_dataset("paulopirozelli/pira", "paraphrases")
```
# Multiple Choice Question Answering
We have also developed a multiple-choice question answering version of Pirá 2.0.
To download the multiple-choice version, just run:
```
from datasets import load_dataset

mcqa = load_dataset("paulopirozelli/pira", "mcqa")
```
Below is an example of a question-answer set from Pirá:
```
{
'id_qa': 'A1582',
'corpus': 1,
'question_en_origin': 'In the estuary, with marine influence, what was associated to deep areas with sandy sediment?',
'question_pt_origin': 'No estuário, com influência marinha, o que foi associado a áreas profundas com sedimento arenoso?',
'question_en_paraphase': 'What was discovered in estuary under deep areas with sand sediment and marine influence?',
'question_pt_paraphase': 'O que foi descoberto no estuário sob áreas profundas com sedimento arenoso e influência marítima?',
'answer_en_origin': 'The Laryngosigma lactea and Pyrgo oblonga foraminifera species.',
'answer_pt_origin': 'As espécies Laryngosigma lactea e Pyrgo oblonga de foraminíferos.',
'answer_en_validate': 'The species Laryngosigma lactea and Pyrgo oblonga.',
'answer_pt_validate': 'A espécie Laryngosigma lactea e Pyrgo oblonga.',
'eid_article_scopus': '2-s2.0-85092100205',
'text_excerpts_un_reports': None,
'question_generic': False,
'answer_in_text': True,
'answer_difficulty': 4.0,
'question_meaningful': 5.0,
'answer_equivalent': 4.0,
'question_type': 'Who'
}
```
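If you want to inspect records programmatically, here is a minimal sketch (field names follow the schema shown above; which record you get depends on the split ordering):
```
from datasets import load_dataset

# load the default configuration and print one QA pair from the train split
pira = load_dataset("paulopirozelli/pira")
sample = pira["train"][0]
print(sample["question_en_origin"])
print(sample["answer_en_origin"])
```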
# Pirá 1.0
You can also access the original Pirá dataset. Just run:
```
from datasets import load_dataset

pira1 = load_dataset("paulopirozelli/pira", "pira_version1")
```
|
paulopirozelli/pira
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:pt",
"language:en",
"license:cc-by-4.0",
"climate",
"arxiv:2309.10945",
"region:us"
] |
2023-09-25T12:14:54+00:00
|
{"language": ["pt", "en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "mcqa", "data_files": [{"split": "train", "path": "mcqa/train-*"}, {"split": "validation", "path": "mcqa/validation-*"}, {"split": "test", "path": "mcqa/test-*"}]}, {"config_name": "paraphrases", "data_files": [{"split": "train", "path": "paraphrases/train-*"}]}, {"config_name": "pira_version1", "data_files": [{"split": "train", "path": "pira_version1/train-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "id_qa", "dtype": "string"}, {"name": "corpus", "dtype": "int64"}, {"name": "question_en_origin", "dtype": "string"}, {"name": "question_pt_origin", "dtype": "string"}, {"name": "question_en_paraphase", "dtype": "string"}, {"name": "question_pt_paraphase", "dtype": "string"}, {"name": "answer_en_origin", "dtype": "string"}, {"name": "answer_pt_origin", "dtype": "string"}, {"name": "answer_en_validate", "dtype": "string"}, {"name": "answer_pt_validate", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "eid_article_scopus", "dtype": "string"}, {"name": "question_generic", "dtype": "float64"}, {"name": "answer_in_text", "dtype": "float64"}, {"name": "answer_difficulty", "dtype": "float64"}, {"name": "question_meaningful", "dtype": "float64"}, {"name": "answer_equivalent", "dtype": "float64"}, {"name": "question_type", "dtype": "string"}, {"name": "abstract_translated_pt", "dtype": "string"}, {"name": "pt_question_translated_to_en", "dtype": "string"}, {"name": "at_labels", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 8002269, "num_examples": 1806}, {"name": "validation", "num_bytes": 994524, "num_examples": 225}, {"name": "test", "num_bytes": 940555, "num_examples": 227}], "download_size": 3976683, "dataset_size": 9937348}, {"config_name": "mcqa", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "A", "dtype": "string"}, {"name": "B", "dtype": "string"}, {"name": "C", "dtype": "string"}, {"name": "D", "dtype": "string"}, {"name": "E", "dtype": "string"}, {"name": "correct", "dtype": "string"}, {"name": "alternative", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4327619, "num_examples": 1798}, {"name": "validation", "num_bytes": 582526, "num_examples": 225}, {"name": "test", "num_bytes": 551723, "num_examples": 227}], "download_size": 2148096, "dataset_size": 5461868}, {"config_name": "paraphrases", "features": [{"name": "question_AUT_EN_1", "dtype": "string"}, {"name": "question_AUT_EN_2", "dtype": "string"}, {"name": "answer_AUT_EN_1", "dtype": "string"}, {"name": "answer_AUT_EN_2", "dtype": "string"}, {"name": "question_AUT_PT_1", "dtype": "string"}, {"name": "question_AUT_PT_2", "dtype": "string"}, {"name": "answer_AUT_PT_1", "dtype": "string"}, {"name": "answer_AUT_PT_2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1175020, "num_examples": 1806}], "download_size": 720519, "dataset_size": 1175020}, {"config_name": "pira_version1", "features": [{"name": "id_qa", "dtype": "string"}, {"name": "corpus", "dtype": "int64"}, {"name": "question_en_origin", "dtype": "string"}, {"name": "question_pt_origin", "dtype": "string"}, {"name": "question_en_paraphase", "dtype": "string"}, 
{"name": "question_pt_paraphase", "dtype": "string"}, {"name": "answer_en_origin", "dtype": "string"}, {"name": "answer_pt_origin", "dtype": "string"}, {"name": "answer_en_validate", "dtype": "string"}, {"name": "answer_pt_validate", "dtype": "string"}, {"name": "eid_article_scopus", "dtype": "string"}, {"name": "text_excerpts_un_reports", "dtype": "string"}, {"name": "question_generic", "dtype": "bool"}, {"name": "answer_in_text", "dtype": "bool"}, {"name": "answer_difficulty", "dtype": "float64"}, {"name": "question_meaningful", "dtype": "float64"}, {"name": "answer_equivalent", "dtype": "float64"}, {"name": "question_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3096316, "num_examples": 2271}], "download_size": 1342133, "dataset_size": 3096316}], "tags": ["climate"]}
|
2023-10-04T12:52:11+00:00
|
[
"2309.10945"
] |
[
"pt",
"en"
] |
TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #language-English #license-cc-by-4.0 #climate #arxiv-2309.10945 #region-us
|
Pirá: A Bilingual Portuguese-English Dataset for Question-Answering about the Ocean, the Brazilian coast, and climate change
============================================================================================================================
Pirá is a crowdsourced reading comprehension dataset on the ocean, the Brazilian coast, and climate change.
QA sets are presented in both Portuguese and English, together with their corresponding textual context.
The dataset also contains human and automatic paraphrases for questions and answers, as well as a number of qualitative assessments.
The original paper was published at CIKM'21 and can be found here.
As a subsequent project, we have produced a curated version of the dataset, which we refer to as Pirá 2.0.
In this step, we have also defined a number of benchmarks and reported the corresponding baselines.
This is the version that we make available on Hugging Face.
Pirá 2.0's preprint is available on arXiv.
Pirá is, to the best of our knowledge, the first QA dataset with supporting texts in Portuguese, and, perhaps more importantly,
the first bilingual QA dataset that includes Portuguese as one of its languages.
Pirá is also the first QA dataset in Portuguese with unanswerable questions so as to allow the study of answer triggering.
Finally, it is the first QA dataset that tackles scientific knowledge about the ocean, climate change, and marine biodiversity.
More information on the methodology, dataset versions, and benchmarks can be found on the project's GitHub page.
You can also find there the Multiple-Choice version of Pirá.
Dataset
=======
The dataset is split into train, validation, and test sets.
Split: Training, Size: 80%, #QAs: 1806
Split: Validation, Size: 10%, #QAs: 225
Split: Test, Size: 10%, #QAs: 227
Split: Full dataset, Size: 100%, #QAs: 2258
Below is an example of a question-answer set from Pirá:
Automatic Paraphrases
=====================
Because automatic paraphrases were generated only for the questions and answers in the train set, they are stored in a separate dataset configuration.
To download the automatic paraphrases, just run:
Multiple Choice Question Answering
==================================
We have also developed a multiple choice question answering version of Pirá 2.0.
To download the multiple choice dataset, just run:
Below is an example of a question-answer set from Pirá:
Pirá 1.0
========
You can also access the original Pirá dataset. Just run:
|
[] |
[
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #language-English #license-cc-by-4.0 #climate #arxiv-2309.10945 #region-us \n"
] |
[
61
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #language-English #license-cc-by-4.0 #climate #arxiv-2309.10945 #region-us \n"
] |
192005149b997fbb69619e36a6cb0a64615993b3
|
# Dataset Card for "sharegpt_short_en_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/sharegpt_short_en_30k
|
[
"region:us"
] |
2023-09-25T12:15:28+00:00
|
{"dataset_info": {"features": [{"name": "conversation", "sequence": "string"}, {"name": "hash", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88612458, "num_examples": 29597}], "download_size": 44347819, "dataset_size": 88612458}}
|
2023-09-25T12:16:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sharegpt_short_en_30k"
More Information needed
|
[
"# Dataset Card for \"sharegpt_short_en_30k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sharegpt_short_en_30k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sharegpt_short_en_30k\"\n\nMore Information needed"
] |
c617c3fe5dcdc4629054c30b030895222bb3b927
|
# Dataset Card for "ru_turbo_alpaca_evol_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/ru_turbo_alpaca_evol_instruct
|
[
"region:us"
] |
2023-09-25T12:19:36+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "iteration", "dtype": "uint32"}], "splits": [{"name": "train", "num_bytes": 105428021, "num_examples": 47793}], "download_size": 50796845, "dataset_size": 105428021}}
|
2023-09-25T12:19:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ru_turbo_alpaca_evol_instruct"
More Information needed
|
[
"# Dataset Card for \"ru_turbo_alpaca_evol_instruct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ru_turbo_alpaca_evol_instruct\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ru_turbo_alpaca_evol_instruct\"\n\nMore Information needed"
] |
951944dbc46d5c490dd23d21979ac89cdf691fb6
|
# Dataset Card for "authors_merged_model_prs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/authors_merged_model_prs
|
[
"region:us"
] |
2023-09-25T12:23:16+00:00
|
{"dataset_info": {"features": [{"name": "authors", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1, "num_examples": 1}], "download_size": 705, "dataset_size": 1}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T12:23:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "authors_merged_model_prs"
More Information needed
|
[
"# Dataset Card for \"authors_merged_model_prs\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"authors_merged_model_prs\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"authors_merged_model_prs\"\n\nMore Information needed"
] |
b32077d514eaabce84cedd586513eebefb1f373b
|
# Dataset Card for "ru_turbo_saiga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/ru_turbo_saiga
|
[
"region:us"
] |
2023-09-25T12:23:33+00:00
|
{"dataset_info": {"features": [{"name": "messages", "sequence": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}, {"name": "seed", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "model_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87316730, "num_examples": 37731}], "download_size": 39768554, "dataset_size": 87316730}}
|
2023-09-25T12:24:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ru_turbo_saiga"
More Information needed
|
[
"# Dataset Card for \"ru_turbo_saiga\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ru_turbo_saiga\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ru_turbo_saiga\"\n\nMore Information needed"
] |
f1cf1b573b4e2f731124bd1d9457973297e4a744
|
# Dataset Card for "authors_merged_dataset_prs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
davanstrien/authors_merged_dataset_prs
|
[
"region:us"
] |
2023-09-25T12:24:29+00:00
|
{"dataset_info": {"features": [{"name": "authors", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6818, "num_examples": 539}], "download_size": 7290, "dataset_size": 6818}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T12:24:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "authors_merged_dataset_prs"
More Information needed
|
[
"# Dataset Card for \"authors_merged_dataset_prs\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"authors_merged_dataset_prs\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"authors_merged_dataset_prs\"\n\nMore Information needed"
] |
093f7dbee9bd79f7ef6da66b197ec6dda574ac86
|
# Dataset Card for "librispeech_asr_dummy_noise-noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sanchit-gandhi/librispeech_asr_dummy_noise-noise
|
[
"region:us"
] |
2023-09-25T12:30:09+00:00
|
{"dataset_info": [{"config_name": "validation-pub-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "35", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "30", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "25", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "20", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "15", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "10", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "5", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "0", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "minus5", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "minus10", "num_bytes": 3708657.0, "num_examples": 6}], "download_size": 23320628, "dataset_size": 40795227.0}, {"config_name": "validation-white-noise", "features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "40", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "35", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "30", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "25", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "20", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "15", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "10", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "5", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "0", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "minus5", "num_bytes": 3708657.0, "num_examples": 6}, {"name": "minus10", "num_bytes": 3708657.0, "num_examples": 6}], "download_size": 23568938, "dataset_size": 40795227.0}], "configs": [{"config_name": "validation-pub-noise", "data_files": [{"split": "40", "path": "validation-pub-noise/40-*"}, {"split": "35", "path": "validation-pub-noise/35-*"}, {"split": "30", "path": "validation-pub-noise/30-*"}, {"split": "25", "path": "validation-pub-noise/25-*"}, {"split": "20", "path": "validation-pub-noise/20-*"}, {"split": "15", "path": "validation-pub-noise/15-*"}, {"split": "10", "path": "validation-pub-noise/10-*"}, {"split": "5", "path": "validation-pub-noise/5-*"}, {"split": "0", "path": "validation-pub-noise/0-*"}, {"split": "minus5", "path": "validation-pub-noise/minus5-*"}, {"split": "minus10", "path": "validation-pub-noise/minus10-*"}]}, {"config_name": "validation-white-noise", "data_files": [{"split": "40", "path": "validation-white-noise/40-*"}, {"split": "35", "path": "validation-white-noise/35-*"}, {"split": "30", "path": "validation-white-noise/30-*"}, {"split": "25", "path": "validation-white-noise/25-*"}, {"split": "20", "path": "validation-white-noise/20-*"}, {"split": "15", "path": "validation-white-noise/15-*"}, {"split": "10", "path": "validation-white-noise/10-*"}, {"split": "5", "path": "validation-white-noise/5-*"}, {"split": "0", "path": "validation-white-noise/0-*"}, {"split": "minus5", "path": "validation-white-noise/minus5-*"}, {"split": "minus10", "path": "validation-white-noise/minus10-*"}]}]}
|
2023-09-25T12:56:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "librispeech_asr_dummy_noise-noise"
More Information needed
|
[
"# Dataset Card for \"librispeech_asr_dummy_noise-noise\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"librispeech_asr_dummy_noise-noise\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"librispeech_asr_dummy_noise-noise\"\n\nMore Information needed"
] |
687c065425a3ad4d5c7c4cf04fbf59383bdb9e05
|
# Dataset Card for "raw_bugurts_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/raw_bugurts_8k
|
[
"region:us"
] |
2023-09-25T12:32:41+00:00
|
{"dataset_info": {"features": [{"name": "bugurt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8339763, "num_examples": 8360}], "download_size": 4343568, "dataset_size": 8339763}}
|
2023-09-25T12:32:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "raw_bugurts_8k"
More Information needed
|
[
"# Dataset Card for \"raw_bugurts_8k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"raw_bugurts_8k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"raw_bugurts_8k\"\n\nMore Information needed"
] |
52fa870e55929758036c9e6117c91e8f7489e5ab
|
# Dataset Card for "tldr_17_50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/tldr_17_50k
|
[
"region:us"
] |
2023-09-25T12:45:30+00:00
|
{"dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "normalizedBody", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "subreddit_id", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 246031411.71625096, "num_examples": 50000}], "download_size": 156564697, "dataset_size": 246031411.71625096}}
|
2023-09-25T12:49:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tldr_17_50k"
More Information needed
|
[
"# Dataset Card for \"tldr_17_50k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tldr_17_50k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tldr_17_50k\"\n\nMore Information needed"
] |
4090366877854fd6b8b4ea1f53cca9adec6f6074
|
# Dataset Card for "grade_school_math_instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/grade_school_math_instructions
|
[
"region:us"
] |
2023-09-25T12:50:04+00:00
|
{"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}, {"name": "SOURCE", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4804916, "num_examples": 8792}], "download_size": 2555411, "dataset_size": 4804916}}
|
2023-09-25T12:50:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grade_school_math_instructions"
More Information needed
|
[
"# Dataset Card for \"grade_school_math_instructions\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grade_school_math_instructions\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grade_school_math_instructions\"\n\nMore Information needed"
] |
40398dc4a5478cf015ce48946d0804fba11572cf
|
# Dataset Card for "tldr_news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/tldr_news
|
[
"region:us"
] |
2023-09-25T12:51:55+00:00
|
{"dataset_info": {"features": [{"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Sponsor", "1": "Big Tech & Startups", "2": "Science and Futuristic Technology", "3": "Programming, Design & Data Science", "4": "Miscellaneous"}}}}], "splits": [{"name": "train", "num_bytes": 4000442, "num_examples": 7138}], "download_size": 2554140, "dataset_size": 4000442}}
|
2023-09-25T12:52:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tldr_news"
More Information needed
|
[
"# Dataset Card for \"tldr_news\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tldr_news\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tldr_news\"\n\nMore Information needed"
] |
be06be92c5d257d0c387429947255ded16b4aef0
|
# Dataset of wakana_shiki/若菜四季/와카나시키 (Love Live! Superstar!!)
This is the dataset of wakana_shiki/若菜四季/와카나시키 (Love Live! Superstar!!), containing 239 images and their tags.
The core tags of this character are `blue_hair, short_hair, bangs, hair_between_eyes, earrings, ribbon, breasts, red_ribbon, neck_ribbon, orange_eyes, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 239 | 310.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_shiki_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 239 | 157.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_shiki_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 567 | 353.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_shiki_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 239 | 266.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_shiki_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 567 | 544.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakana_shiki_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/wakana_shiki_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
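To fetch one of the other packages instead, swap the `filename` argument, e.g. `filename='dataset-800.zip'` for the 800px package listed above.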
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 31 |  |  |  |  |  | 1girl, solo, yuigaoka_school_uniform, looking_at_viewer, white_shirt, pinafore_dress, blue_jacket, grey_dress, collared_shirt, long_sleeves, white_background, open_jacket, blush, simple_background, upper_body, jewelry, parted_lips |
| 1 | 6 |  |  |  |  |  | 2girls, solo_focus, looking_at_viewer, smile, jacket, jewelry, upper_body, birthday, blonde_hair, blush, red_hair, yuigaoka_school_uniform |
| 2 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, navel, solo, blush, cleavage, collarbone, large_breasts, hair_flower, necklace, open_mouth, brown_eyes, side-tie_bikini_bottom, sitting, smile, thighs |
| 3 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, midriff, navel, tied_shirt, blue_shirt, blush, medium_breasts, short_sleeves, shorts, solo, arms_up, open_mouth, parted_lips, simple_background, stud_earrings, sweat, white_background, white_pants |
| 4 | 7 |  |  |  |  |  | 1boy, 1girl, blush, hetero, penis, solo_focus, jewelry, navel, nipples, pov, pussy, sex, spread_legs, sweat, vaginal, completely_nude, crossed_bangs, looking_at_viewer, mosaic_censoring, open_mouth, dark-skinned_male, on_back, collarbone, large_breasts, missionary, motion_lines, stomach, upper_teeth_only |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | yuigaoka_school_uniform | looking_at_viewer | white_shirt | pinafore_dress | blue_jacket | grey_dress | collared_shirt | long_sleeves | white_background | open_jacket | blush | simple_background | upper_body | jewelry | parted_lips | 2girls | solo_focus | smile | jacket | birthday | blonde_hair | red_hair | navel | cleavage | collarbone | large_breasts | hair_flower | necklace | open_mouth | brown_eyes | side-tie_bikini_bottom | sitting | thighs | midriff | tied_shirt | blue_shirt | medium_breasts | short_sleeves | shorts | arms_up | stud_earrings | sweat | white_pants | 1boy | hetero | penis | nipples | pov | pussy | sex | spread_legs | vaginal | completely_nude | crossed_bangs | mosaic_censoring | dark-skinned_male | on_back | missionary | motion_lines | stomach | upper_teeth_only |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------------|:--------------------|:--------------|:-----------------|:--------------|:-------------|:-----------------|:---------------|:-------------------|:--------------|:--------|:--------------------|:-------------|:----------|:--------------|:---------|:-------------|:--------|:---------|:-----------|:--------------|:-----------|:--------|:-----------|:-------------|:----------------|:--------------|:-----------|:-------------|:-------------|:-------------------------|:----------|:---------|:----------|:-------------|:-------------|:-----------------|:----------------|:---------|:----------|:----------------|:--------|:--------------|:-------|:---------|:--------|:----------|:------|:--------|:------|:--------------|:----------|:------------------|:----------------|:-------------------|:--------------------|:----------|:-------------|:---------------|:----------|:-------------------|
| 0 | 31 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | | | X | X | | | | | | | | | X | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | X | | | | | | | | | X | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | X | | | | | | | X | | X | X | | | X | | | | | | | | X | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | | X | | | | | | | | | X | | | X | | | X | | | | | | X | | X | X | | | X | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/wakana_shiki_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T12:53:21+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T05:48:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of wakana\_shiki/若菜四季/와카나시키 (Love Live! Superstar!!)
============================================================
This is the dataset of wakana\_shiki/若菜四季/와카나시키 (Love Live! Superstar!!), containing 239 images and their tags.
The core tags of this character are 'blue\_hair, short\_hair, bangs, hair\_between\_eyes, earrings, ribbon, breasts, red\_ribbon, neck\_ribbon, orange\_eyes, red\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
1ce21520f51bf4dac78def2a89236b76a3b77dc5
|
# Dataset Card for "grade_school_math_instructions_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/grade_school_math_instructions_ru
|
[
"region:us"
] |
2023-09-25T12:56:36+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6815618, "num_examples": 7473}], "download_size": 3284007, "dataset_size": 6815618}}
|
2023-09-25T12:56:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "grade_school_math_instructions_ru"
More Information needed
|
[
"# Dataset Card for \"grade_school_math_instructions_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"grade_school_math_instructions_ru\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"grade_school_math_instructions_ru\"\n\nMore Information needed"
] |
c9d47df0ce5b4955b8f8862d98b6499d748a4715
|
# Dataset Card for "dialogsum_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/dialogsum_ru
|
[
"region:us"
] |
2023-09-25T12:59:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19115158, "num_examples": 12460}], "download_size": 9286024, "dataset_size": 19115158}}
|
2023-09-25T12:59:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dialogsum_ru"
More Information needed
|
[
"# Dataset Card for \"dialogsum_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dialogsum_ru\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dialogsum_ru\"\n\nMore Information needed"
] |
450be4f1882429e5f02883830f40ef00e5ab3333
|
# A 10K docs sample from MS MARCO
This is a sample dataset of 10K random rows from the [MS MARCO](https://microsoft.github.io/msmarco/) dataset. It is used in the Nixiesearch [quickstart guide](https://www.nixiesearch.ai/quickstart/) to save the time it would take to index the full MS MARCO corpus of 8M documents.
## Schema
This is a JSONL-formatted dataset with only two fields: `id` for the document identifier and `text` for the actual text snippet.
```json
{
"id": "0",
"text": "The presence of communication amid scientific minds was equally important to the success of the Manhattan Project as scientific intellect was. The only cloud hanging over the impressive achievement of the atomic researchers and engineers is what their success truly meant; hundreds of thousands of innocent lives obliterated."
}
```
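A minimal loading sketch (assuming the `datasets` auto-loader resolves the JSONL files in this repo; the `train` split name is an assumption for single-file repos):
```python
from datasets import load_dataset

# the auto-loader is assumed to expose the JSONL rows under a "train" split
docs = load_dataset("nixiesearch/msmarco-10k", split="train")
print(docs[0]["id"], docs[0]["text"][:80])
```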
## License
Apache 2.0
|
nixiesearch/msmarco-10k
|
[
"language:en",
"license:apache-2.0",
"msmarco",
"nlp",
"search",
"region:us"
] |
2023-09-25T13:13:49+00:00
|
{"language": ["en"], "license": "apache-2.0", "tags": ["msmarco", "nlp", "search"]}
|
2023-09-26T10:21:45+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-apache-2.0 #msmarco #nlp #search #region-us
|
# A 10K docs sample from MS MARCO
This is a sample dataset of 10K random rows from the MS MARCO dataset. It is used in the Nixiesearch quickstart guide to save the time it would take to index the full MS MARCO corpus of 8M documents.
## Schema
This is a JSONL-formatted dataset with only two fields inside: 'id' for document identifier and 'text' for the actual text snippet.
## License
Apache 2.0
|
[
"# A 10K docs sample from MS MARCO\n\nThis is a sample dataset of random 10K rows from the MS MARCO dataset. This is used in Nixiesearch quickstart guide to save some time indexing a full MSMARCO with 8M documents.",
"## Schema\n\nThis is a JSONL-formatted dataset with only two fields inside: 'id' for document identifier and 'text' for the actual text snippet.",
"## License \n\nApache 2.0"
] |
[
"TAGS\n#language-English #license-apache-2.0 #msmarco #nlp #search #region-us \n",
"# A 10K docs sample from MS MARCO\n\nThis is a sample dataset of random 10K rows from the MS MARCO dataset. This is used in Nixiesearch quickstart guide to save some time indexing a full MSMARCO with 8M documents.",
"## Schema\n\nThis is a JSONL-formatted dataset with only two fields inside: 'id' for document identifier and 'text' for the actual text snippet.",
"## License \n\nApache 2.0"
] |
[
27,
58,
41,
5
] |
[
"passage: TAGS\n#language-English #license-apache-2.0 #msmarco #nlp #search #region-us \n# A 10K docs sample from MS MARCO\n\nThis is a sample dataset of random 10K rows from the MS MARCO dataset. This is used in Nixiesearch quickstart guide to save some time indexing a full MSMARCO with 8M documents.## Schema\n\nThis is a JSONL-formatted dataset with only two fields inside: 'id' for document identifier and 'text' for the actual text snippet.## License \n\nApache 2.0"
] |
3078bc0072749e6c9c6187057ae76389cbf5f3a2
|
# Dataset Card for "spamming-email-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
legacy107/spamming-email-classification
|
[
"region:us"
] |
2023-09-25T13:22:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Text", "dtype": "string"}, {"name": "Spam", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7196065, "num_examples": 4556}, {"name": "val", "num_bytes": 819608, "num_examples": 569}, {"name": "test", "num_bytes": 925859, "num_examples": 570}], "download_size": 4959617, "dataset_size": 8941532}}
|
2023-10-02T08:39:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "spamming-email-classification"
More Information needed
|
[
"# Dataset Card for \"spamming-email-classification\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"spamming-email-classification\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"spamming-email-classification\"\n\nMore Information needed"
] |
cc4ba9ff7038e9f22ac42c52679f552af1a4b1a6
|
# Dataset Card for "hyperpartisannewsdetection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pietrolesci/hyperpartisan_news_detection
|
[
"region:us"
] |
2023-09-25T13:24:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "validation", "path": "embedding_all-mpnet-base-v2/validation-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "news_text", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "hyperpartisan", "dtype": "bool"}, {"name": "url", "dtype": "string"}, {"name": "published_at", "dtype": "string"}, {"name": "bias", "dtype": {"class_label": {"names": {"0": "right", "1": "right-center", "2": "least", "3": "left-center", "4": "left"}}}}, {"name": "text", "dtype": "string"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5549889491, "num_examples": 600000}, {"name": "validation", "num_bytes": 1906305570, "num_examples": 150000}], "download_size": 4230482849, "dataset_size": 7456195061}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1850400000, "num_examples": 600000}, {"name": "validation", "num_bytes": 462600000, "num_examples": 150000}], "download_size": 2776673253, "dataset_size": 2313000000}]}
|
2023-09-25T13:32:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hyperpartisannewsdetection"
More Information needed
|
[
"# Dataset Card for \"hyperpartisannewsdetection\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hyperpartisannewsdetection\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hyperpartisannewsdetection\"\n\nMore Information needed"
] |
5456dae9601b4908570ba8eebcbd90276d137344
|
# Dataset of takasaki_yuu/高咲侑/타카사키유우 (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of takasaki_yuu/高咲侑/타카사키유우 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags.
The core tags of this character are `black_hair, green_hair, multicolored_hair, gradient_hair, bangs, two-tone_hair, twintails, medium_hair, green_eyes, hair_between_eyes, ribbon, neck_ribbon, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 692.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 333.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1219 | 753.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 582.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1219 | 1.17 GiB | [Download](https://huggingface.co/datasets/CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
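To fetch one of the other packages instead, swap the `filename` argument, e.g. `filename='dataset-800.zip'` for the 800px package listed above.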
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_jacket, closed_mouth, collared_shirt, looking_at_viewer, nijigasaki_academy_school_uniform, red_ribbon, solo, upper_body, white_shirt, smile, white_background, winter_uniform, blazer, simple_background, blush, long_sleeves, two-tone_background |
| 1 | 8 |  |  |  |  |  | 1girl, collared_shirt, looking_at_viewer, nijigasaki_academy_school_uniform, short_sleeves, solo, summer_uniform, upper_body, white_shirt, blush, closed_mouth, dress_shirt, pink_ribbon, white_background, hand_up, simple_background, smile, twitter_username |
| 2 | 22 |  |  |  |  |  | 1girl, collared_shirt, nijigasaki_academy_school_uniform, plaid_skirt, short_sleeves, solo, summer_uniform, looking_at_viewer, pleated_skirt, blue_skirt, closed_mouth, white_background, white_shirt, simple_background, black_thighhighs, pink_ribbon, smile, zettai_ryouiki, blush, dress_shirt, cowboy_shot |
| 3 | 13 |  |  |  |  |  | 1girl, collared_shirt, long_sleeves, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, red_ribbon, solo, white_shirt, white_skirt, black_jacket, black_thighhighs, blazer, looking_at_viewer, smile, simple_background, white_background, zettai_ryouiki, blush, winter_uniform, open_jacket, sweater_vest, closed_mouth, miniskirt, buttons, cowboy_shot |
| 4 | 7 |  |  |  |  |  | 1girl, black_jacket, blazer, collared_shirt, long_sleeves, looking_at_viewer, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, red_ribbon, solo, white_shirt, open_mouth, rainbow, white_skirt, black_thighhighs, blue_sky, winter_uniform, cloud, sweater_vest, :d, open_jacket, upper_teeth_only, zettai_ryouiki |
| 5 | 6 |  |  |  |  |  | 1girl, black_jacket, black_necktie, collared_shirt, formal, looking_at_viewer, solo, suit, long_sleeves, white_shirt, black_pants, closed_mouth, simple_background, upper_body, white_background |
| 6 | 5 |  |  |  |  |  | 2girls, blush, smile, yuri, open_mouth, shirt |
| 7 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, simple_background, solo, white_background, bare_shoulders, collarbone, medium_breasts, open_mouth, black_bikini, cleavage, navel, large_breasts, smile |
| 8 | 9 |  |  |  |  |  | looking_at_viewer, 1girl, smile, solo, kimono, floral_print, obi, hair_flower, streaked_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_jacket | closed_mouth | collared_shirt | looking_at_viewer | nijigasaki_academy_school_uniform | red_ribbon | solo | upper_body | white_shirt | smile | white_background | winter_uniform | blazer | simple_background | blush | long_sleeves | two-tone_background | short_sleeves | summer_uniform | dress_shirt | pink_ribbon | hand_up | twitter_username | plaid_skirt | pleated_skirt | blue_skirt | black_thighhighs | zettai_ryouiki | cowboy_shot | white_skirt | open_jacket | sweater_vest | miniskirt | buttons | open_mouth | rainbow | blue_sky | cloud | :d | upper_teeth_only | black_necktie | formal | suit | black_pants | 2girls | yuri | shirt | bare_shoulders | collarbone | medium_breasts | black_bikini | cleavage | navel | large_breasts | kimono | floral_print | obi | hair_flower | streaked_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:---------------|:-----------------|:--------------------|:------------------------------------|:-------------|:-------|:-------------|:--------------|:--------|:-------------------|:-----------------|:---------|:--------------------|:--------|:---------------|:----------------------|:----------------|:-----------------|:--------------|:--------------|:----------|:-------------------|:--------------|:----------------|:-------------|:-------------------|:-----------------|:--------------|:--------------|:--------------|:---------------|:------------|:----------|:-------------|:----------|:-----------|:--------|:-----|:-------------------|:----------------|:---------|:-------|:--------------|:---------|:-------|:--------|:-----------------|:-------------|:-----------------|:---------------|:-----------|:--------|:----------------|:---------|:---------------|:------|:--------------|:----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | X | X | X | | X | X | X | X | X | | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 22 |  |  |  |  |  | X | | X | X | X | X | | X | | X | X | X | | | X | X | | | X | X | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | X | | X | X | X | X | X | | X | | | X | X | | | X | | | | | | | | X | X | | X | X | | X | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | X | X | | | X | X | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | X | | | X | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | |
| 8 | 9 |  |  |  |  |  | X | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X |
|
CyberHarem/takasaki_yuu_lovelivenijigasakihighschoolidolclub
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T13:25:50+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T02:42:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of takasaki\_yuu/高咲侑/타카사키유우 (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
========================================================================================
This is the dataset of takasaki\_yuu/高咲侑/타카사키유우 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags.
The core tags of this character are 'black\_hair, green\_hair, multicolored\_hair, gradient\_hair, bangs, two-tone\_hair, twintails, medium\_hair, green\_eyes, hair\_between\_eyes, ribbon, neck\_ribbon, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
89247b148a92ee18aa3f17113db5cbc924326b89
|
# Dataset Card for "Soldering-Data-pix2pix-0925"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ouvic215/Soldering-Data-pix2pix-0925
|
[
"region:us"
] |
2023-09-25T13:27:24+00:00
|
{"dataset_info": {"features": [{"name": "mask_image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 108631363.5, "num_examples": 1338}], "download_size": 108561754, "dataset_size": 108631363.5}}
|
2023-09-25T13:29:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Soldering-Data-pix2pix-0925"
More Information needed
|
[
"# Dataset Card for \"Soldering-Data-pix2pix-0925\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Soldering-Data-pix2pix-0925\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-pix2pix-0925\"\n\nMore Information needed"
] |
5e794eacaf99fc122f84ac5958bebd3c76daa0d3
|
# BALO (Bulletin of mandatory legal notices)
Announcements published in the [BALO](https://www.data.gouv.fr/en/datasets/balo/) (Bulletin des annonces légales obligatoires).
The BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings.
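A minimal loading sketch (the repository exposes a single default configuration with a `train` split and `id`/`text` features, per the dataset metadata):
```python
from datasets import load_dataset

balo = load_dataset("Nicolas-BZRD/BALO_opendata", split="train")
print(balo[0]["id"])
print(balo[0]["text"][:200])  # announcements can be long
```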
|
Nicolas-BZRD/BALO_opendata
|
[
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"finance",
"legal",
"region:us"
] |
2023-09-25T13:28:35+00:00
|
{"language": ["fr"], "license": "odc-by", "size_categories": ["100K<n<1M"], "pretty_name": "Bulletin of mandatory legal notices", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1106418284, "num_examples": 135575}], "download_size": 439587100, "dataset_size": 1106418284}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["finance", "legal"]}
|
2023-09-28T18:03:01+00:00
|
[] |
[
"fr"
] |
TAGS
#size_categories-100K<n<1M #language-French #license-odc-by #finance #legal #region-us
|
# BALO (Bulletin of mandatory legal notices)
Announcements published in the BALO (Bulletin des annonces légales obligatoires).
The BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings.
|
[
"# BALO (Bulletin of mandatory legal notices)\n\n\nAnnouncements published in the BALO (Bulletin des annonces légales obligatoires).\n\nThe BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings."
] |
[
"TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #finance #legal #region-us \n",
"# BALO (Bulletin of mandatory legal notices)\n\n\nAnnouncements published in the BALO (Bulletin des annonces légales obligatoires).\n\nThe BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings."
] |
[
37,
86
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #finance #legal #region-us \n# BALO (Bulletin of mandatory legal notices)\n\n\nAnnouncements published in the BALO (Bulletin des annonces légales obligatoires).\n\nThe BALO publishes compulsory notices for companies making public offerings and for banking and credit institutions. The announcements relate to all financial transactions, accounting documents and notices of shareholders' general meetings."
] |
afeca3ea1862facd5a65e4008b9d51060bdbbee8
|
https://politico-tech.simplecast.com/episodes/the-hugging-face-case-for-open-ai
|
lunarflu/the-hugging-face-case-for-open-AI
|
[
"region:us"
] |
2023-09-25T13:29:56+00:00
|
{}
|
2023-09-25T13:30:07+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
54ee438c56b8090005e7525faed06879ef3f2c51
|
# Dataset Card for "c_arm64_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
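The metadata declares paired `input`/`output` string columns (the repository name suggests C source to ARM64 assembly pairs, though that is an assumption); a minimal loading sketch:

```python
from datasets import load_dataset

# load the train split; "input"/"output" columns come from the repo metadata
ds = load_dataset("zhangshuoming/c_arm64_small", split="train")

sample = ds[0]
print(sample["input"])   # presumably C source
print(sample["output"])  # presumably the corresponding ARM64 assembly
```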
|
zhangshuoming/c_arm64_small
|
[
"region:us"
] |
2023-09-25T13:30:54+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87599526, "num_examples": 19949}], "download_size": 23472860, "dataset_size": 87599526}}
|
2023-09-27T07:24:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c_arm64_small"
More Information needed
|
[
"# Dataset Card for \"c_arm64_small\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c_arm64_small\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c_arm64_small\"\n\nMore Information needed"
] |
4945786acc9ee07e877122b8b940ff3576523c10
|
# Dataset of ousaka_shizuku/桜坂しずく/오사카시즈쿠 (Love Live! School Idol Festival ALL STARS)
This is the dataset of ousaka_shizuku/桜坂しずく/오사카시즈쿠 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `brown_hair, long_hair, blue_eyes, bangs, bow, sidelocks, hair_between_eyes, hair_bow, half_updo, red_bow, ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 830.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 394.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1334 | 925.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 696.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1334 | 1.42 GiB | [Download](https://huggingface.co/datasets/CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 41 |  |  |  |  |  | 1girl, solo, looking_at_viewer, white_dress, collarbone, sleeveless_dress, smile, blush, flower, frills, lace-trimmed_dress, white_bow, pearl_necklace, earrings, star_hair_ornament, arm_garter, cross-laced_clothes, open_mouth, pearl_bracelet, ribbon, breasts |
| 1 | 15 |  |  |  |  |  | 1girl, black_dress, black_gloves, buttons, choker, collarbone, evening_gown, looking_at_viewer, multicolored_clothes, necklace, off-shoulder_dress, solo, two-tone_dress, bare_shoulders, sleeveless_dress, white_dress, earrings, lace_gloves, black_bow, huge_bow, blush, open_mouth, rain, black_belt, water_drop, white_bow, smile |
| 2 | 7 |  |  |  |  |  | 1girl, bare_shoulders, black_dress, black_gloves, buttons, evening_gown, huge_bow, lace_gloves, necklace, off-shoulder_dress, sleeveless_dress, solo, two-tone_dress, white_dress, choker, earrings, multicolored_clothes, black_bow, collarbone, white_bow, black_belt, looking_at_viewer, pantyhose |
| 3 | 13 |  |  |  |  |  | 1girl, solo, short_sleeves, blush, looking_at_viewer, smile, blue_dress, collared_dress, open_mouth, white_background |
| 4 | 48 |  |  |  |  |  | 1girl, nijigasaki_academy_school_uniform, solo, white_shirt, short_sleeves, collared_shirt, neck_ribbon, yellow_ribbon, summer_uniform, looking_at_viewer, blush, plaid_skirt, pleated_skirt, blue_vest, smile, open_mouth, white_background, blue_skirt, simple_background |
| 5 | 30 |  |  |  |  |  | nijigasaki_academy_school_uniform, 1girl, collared_shirt, solo, white_shirt, blush, long_sleeves, neck_ribbon, looking_at_viewer, black_jacket, winter_uniform, smile, yellow_ribbon, blazer, pleated_skirt, white_background, white_skirt, plaid_skirt, simple_background, upper_body |
| 6 | 16 |  |  |  |  |  | 1girl, solo, smile, blue_bikini, blush, looking_at_viewer, medium_breasts, cleavage, navel, collarbone, bikini_skirt, frilled_bikini, bracelet, ocean, open_mouth, wrist_scrunchie |
| 7 | 7 |  |  |  |  |  | 1girl, holding_umbrella, looking_at_viewer, red_coat, solo, upper_body, black_bow, blush, buttons, long_sleeves, closed_mouth, striped |
| 8 | 5 |  |  |  |  |  | 1girl, blue_kimono, blush, looking_at_viewer, obi, smile, solo, floral_print, upper_body |
| 9 | 8 |  |  |  |  |  | 1girl, katana, looking_at_viewer, solo, holding_sword, petals, blush, cherry_blossoms, white_background, wide_sleeves, blue_kimono, hakama_skirt, smile, unsheathing |
| 10 | 8 |  |  |  |  |  | 1girl, bear_ears, paw_gloves, solo, blush, looking_at_viewer, open_mouth, animal_hood, shorts, flower, :d, dress, polka_dot |
| 11 | 5 |  |  |  |  |  | detached_collar, looking_at_viewer, medium_breasts, playboy_bunny, rabbit_ears, rabbit_tail, strapless_leotard, 1girl, black_pantyhose, cleavage, solo, wrist_cuffs, blue_leotard, fake_animal_ears, sitting, black_bowtie, black_leotard, brown_pantyhose, earrings, high_heels, simple_background, table, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | white_dress | collarbone | sleeveless_dress | smile | blush | flower | frills | lace-trimmed_dress | white_bow | pearl_necklace | earrings | star_hair_ornament | arm_garter | cross-laced_clothes | open_mouth | pearl_bracelet | ribbon | breasts | black_dress | black_gloves | buttons | choker | evening_gown | multicolored_clothes | necklace | off-shoulder_dress | two-tone_dress | bare_shoulders | lace_gloves | black_bow | huge_bow | rain | black_belt | water_drop | pantyhose | short_sleeves | blue_dress | collared_dress | white_background | nijigasaki_academy_school_uniform | white_shirt | collared_shirt | neck_ribbon | yellow_ribbon | summer_uniform | plaid_skirt | pleated_skirt | blue_vest | blue_skirt | simple_background | long_sleeves | black_jacket | winter_uniform | blazer | white_skirt | upper_body | blue_bikini | medium_breasts | cleavage | navel | bikini_skirt | frilled_bikini | bracelet | ocean | wrist_scrunchie | holding_umbrella | red_coat | closed_mouth | striped | blue_kimono | obi | floral_print | katana | holding_sword | petals | cherry_blossoms | wide_sleeves | hakama_skirt | unsheathing | bear_ears | paw_gloves | animal_hood | shorts | :d | dress | polka_dot | detached_collar | playboy_bunny | rabbit_ears | rabbit_tail | strapless_leotard | black_pantyhose | wrist_cuffs | blue_leotard | fake_animal_ears | sitting | black_bowtie | black_leotard | brown_pantyhose | high_heels | table |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-------|:--------------------|:--------------|:-------------|:-------------------|:--------|:--------|:---------|:---------|:---------------------|:------------|:-----------------|:-----------|:---------------------|:-------------|:----------------------|:-------------|:-----------------|:---------|:----------|:--------------|:---------------|:----------|:---------|:---------------|:-----------------------|:-----------|:---------------------|:-----------------|:-----------------|:--------------|:------------|:-----------|:-------|:-------------|:-------------|:------------|:----------------|:-------------|:-----------------|:-------------------|:------------------------------------|:--------------|:-----------------|:--------------|:----------------|:-----------------|:--------------|:----------------|:------------|:-------------|:--------------------|:---------------|:---------------|:-----------------|:---------|:--------------|:-------------|:--------------|:-----------------|:-----------|:--------|:---------------|:-----------------|:-----------|:--------|:------------------|:-------------------|:-----------|:---------------|:----------|:--------------|:------|:---------------|:---------|:----------------|:---------|:------------------|:---------------|:---------------|:--------------|:------------|:-------------|:--------------|:---------|:-----|:--------|:------------|:------------------|:----------------|:--------------|:--------------|:--------------------|:------------------|:--------------|:---------------|:-------------------|:----------|:---------------|:----------------|:------------------|:-------------|:--------|
| 0 | 41 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | X | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | X | | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | X | X | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 48 |  |  |  |  |  | X | X | X | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 30 |  |  |  |  |  | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 16 |  |  |  |  |  | X | X | X | | X | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | X | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 8 |  |  |  |  |  | X | X | X | | | | | X | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/ousaka_shizuku_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T13:32:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T03:51:41+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of ousaka\_shizuku/桜坂しずく/오사카시즈쿠 (Love Live! School Idol Festival ALL STARS)
===================================================================================
This is the dataset of ousaka\_shizuku/桜坂しずく/오사카시즈쿠 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, blue\_eyes, bangs, bow, sidelocks, hair\_between\_eyes, hair\_bow, half\_updo, red\_bow, ponytail', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
095b49cc3d2e0eb87fa99927d8dc6b686fe89306
|
# Dataset Card for "dialogsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/dialogsum
|
[
"region:us"
] |
2023-09-25T13:43:50+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "dialogue", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11439628, "num_examples": 12460}], "download_size": 6516766, "dataset_size": 11439628}}
|
2023-09-25T13:43:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dialogsum"
More Information needed
|
[
"# Dataset Card for \"dialogsum\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dialogsum\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dialogsum\"\n\nMore Information needed"
] |
57ee0d3d1b04c86d9afc21c85e7cf25e41c127ea
|
# Dataset Card for dummy
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: [info]**
- **Repository: [info]**
- **Paper: [info]**
- **Leaderboard: [info]**
- **Point of Contact: [info]**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
Zaid/dummy
|
[
"region:us"
] |
2023-09-25T13:45:25+00:00
|
{}
|
2023-11-06T14:51:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for dummy
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: [info]
- Repository: [info]
- Paper: [info]
- Leaderboard: [info]
- Point of Contact: [info]
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
|
[
"# Dataset Card for dummy",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: [info]\n- Repository: [info]\n- Paper: [info]\n- Leaderboard: [info]\n- Point of Contact: [info]",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for dummy",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: [info]\n- Repository: [info]\n- Paper: [info]\n- Leaderboard: [info]\n- Point of Contact: [info]",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
[
6,
7,
125,
39,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for dummy## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [info]\n- Repository: [info]\n- Paper: [info]\n- Leaderboard: [info]\n- Point of Contact: [info]### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @github-username for adding this dataset."
] |
134508e63c22d50784700d7a022df52b3c26a401
|
# Dataset Card for "HC3_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/HC3_ru
|
[
"region:us"
] |
2023-09-25T13:50:00+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "human_answers", "sequence": "string"}, {"name": "chatgpt_answers", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 135406074, "num_examples": 24322}], "download_size": 62378894, "dataset_size": 135406074}}
|
2023-09-25T13:51:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "HC3_ru"
More Information needed
|
[
"# Dataset Card for \"HC3_ru\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"HC3_ru\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"HC3_ru\"\n\nMore Information needed"
] |
3b21b36d17521ed7e3ffca4d3232be3aaeab143b
|
https://www.youtube.com/watch?v=CV6UagCYo4c
|
lunarflu/open-source-generative-AI-at-hugging-face
|
[
"region:us"
] |
2023-09-25T13:56:34+00:00
|
{}
|
2023-09-25T13:56:49+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
2e0316ac9960c76c0355d8cc2238d14855c2262a
|
# Dataset of yoneme_mei/米女メイ/요네메메이 (Love Live! Superstar!!)
This is the dataset of yoneme_mei/米女メイ/요네메메이 (Love Live! Superstar!!), containing 200 images and their tags.
The core tags of this character are `red_hair, blue_eyes, bangs, hair_bun, long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 200 | 288.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoneme_mei_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 200 | 144.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoneme_mei_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 444 | 302.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoneme_mei_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 200 | 247.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoneme_mei_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 444 | 482.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoneme_mei_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yoneme_mei_lovelivesuperstar',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, collarbone, short_sleeves, sidelocks, single_side_bun, smile, upper_body, birthday, blush, shiny_hair, single_hair_bun, dress, necktie |
| 1 | 17 |  |  |  |  |  | 1girl, solo, yuigaoka_school_uniform, blue_jacket, grey_dress, looking_at_viewer, collared_shirt, white_shirt, long_sleeves, white_background, blush, simple_background, hair_between_eyes, open_jacket, pinafore_dress, closed_mouth, brown_footwear, loafers, medium_hair, smile |
| 2 | 8 |  |  |  |  |  | blush, yuigaoka_school_uniform, 2girls, shiny_hair, upper_body, birthday, double_bun, sidelocks, solo_focus, collared_shirt, jacket, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | collarbone | short_sleeves | sidelocks | single_side_bun | smile | upper_body | birthday | blush | shiny_hair | single_hair_bun | dress | necktie | yuigaoka_school_uniform | blue_jacket | grey_dress | collared_shirt | white_shirt | long_sleeves | white_background | simple_background | hair_between_eyes | open_jacket | pinafore_dress | closed_mouth | brown_footwear | loafers | medium_hair | 2girls | double_bun | solo_focus | jacket | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-------------|:----------------|:------------|:------------------|:--------|:-------------|:-----------|:--------|:-------------|:------------------|:--------|:----------|:--------------------------|:--------------|:-------------|:-----------------|:--------------|:---------------|:-------------------|:--------------------|:--------------------|:--------------|:-----------------|:---------------|:-----------------|:----------|:--------------|:---------|:-------------|:-------------|:---------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 17 |  |  |  |  |  | X | X | X | | | | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 2 | 8 |  |  |  |  |  | | | | | | X | | | X | X | X | X | | | | X | | | X | | | | | | | | | | | | X | X | X | X | X |
|
CyberHarem/yoneme_mei_lovelivesuperstar
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T14:02:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T05:43:34+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yoneme\_mei/米女メイ/요네메메이 (Love Live! Superstar!!)
==========================================================
This is the dataset of yoneme\_mei/米女メイ/요네메메이 (Love Live! Superstar!!), containing 200 images and their tags.
The core tags of this character are 'red\_hair, blue\_eyes, bangs, hair\_bun, long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2b0a35c0f1730a60957319f9b25f4858e7012e44
|
# Dataset Card for Dataset Name
## Dataset Description
- **Repository: https://github.com/lukasberglund/reversal_curse**
- **Paper: https://arxiv.org/abs/2309.12288**
### Dataset Summary
Datasets used for experiments 1, 2, and 3 from the reversal curse paper.
1. Experiment 1 uses `name_description_dataset`
2. Experiment 2 uses `celebrity_relations`
3. Experiment 3 uses `instruction_dataset`
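A hedged sketch for fetching everything locally with `huggingface_hub`; the folder names are assumed to mirror the dataset names listed above and are not verified here:

```python
from huggingface_hub import snapshot_download

# download the full dataset repository; each experiment's files are
# expected under folders such as name_description_dataset/ (assumption)
local_dir = snapshot_download(repo_id="lberglund/reversal_curse", repo_type="dataset")
print(local_dir)
```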
|
lberglund/reversal_curse
|
[
"language:en",
"license:mit",
"arxiv:2309.12288",
"region:us"
] |
2023-09-25T14:06:42+00:00
|
{"language": ["en"], "license": "mit"}
|
2023-09-25T14:33:57+00:00
|
[
"2309.12288"
] |
[
"en"
] |
TAGS
#language-English #license-mit #arxiv-2309.12288 #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Repository: URL
- Paper: URL
### Dataset Summary
Datasets used for experiments 1, 2, and 3 from the reversal curse paper.
1. Experiment 1 uses 'name_description_dataset'
2. Experiment 2 uses 'celebrity_relations'
3. Experiment 3 uses 'instruction_dataset'
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n\n- Repository: URL \n- Paper: URL",
"### Dataset Summary\n\nDatasets used for experiments 1, 2, and 3 from the reversal curse paper. \n\n1. Experiment 1 uses 'name_description_dataset'\n2. Experiment 2 uses 'celebrity_relations'\n3. Experiment 3 uses 'instruction_dataset'"
] |
[
"TAGS\n#language-English #license-mit #arxiv-2309.12288 #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n\n- Repository: URL \n- Paper: URL",
"### Dataset Summary\n\nDatasets used for experiments 1, 2, and 3 from the reversal curse paper. \n\n1. Experiment 1 uses 'name_description_dataset'\n2. Experiment 2 uses 'celebrity_relations'\n3. Experiment 3 uses 'instruction_dataset'"
] |
[
23,
8,
14,
65
] |
[
"passage: TAGS\n#language-English #license-mit #arxiv-2309.12288 #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n\n- Repository: URL \n- Paper: URL### Dataset Summary\n\nDatasets used for experiments 1, 2, and 3 from the reversal curse paper. \n\n1. Experiment 1 uses 'name_description_dataset'\n2. Experiment 2 uses 'celebrity_relations'\n3. Experiment 3 uses 'instruction_dataset'"
] |
dd673c1724dd7ac1c5563e7b1ad555083b8140e1
|
# Dataset Card for "horoscopes_ru_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/horoscopes_ru_10k
|
[
"region:us"
] |
2023-09-25T14:08:17+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9449348, "num_examples": 10000}], "download_size": 4589882, "dataset_size": 9449348}}
|
2023-09-25T21:23:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "horoscopes_ru_10k"
More Information needed
|
[
"# Dataset Card for \"horoscopes_ru_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"horoscopes_ru_10k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"horoscopes_ru_10k\"\n\nMore Information needed"
] |
7bf662901b54c35125c38beb44c98f8c2ec5f2e6
|
# Dataset Card for "panorama_prompts_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/panorama_prompts_10k
|
[
"region:us"
] |
2023-09-25T14:16:34+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30478073, "num_examples": 11024}], "download_size": 15784032, "dataset_size": 30478073}}
|
2023-09-25T14:16:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "panorama_prompts_10k"
More Information needed
|
[
"# Dataset Card for \"panorama_prompts_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"panorama_prompts_10k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"panorama_prompts_10k\"\n\nMore Information needed"
] |
7543dc23d012166bff92d82183de06510dfa6aae
|
# Dataset Card for "lotr-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tojete6465/lotr-book
|
[
"region:us"
] |
2023-09-25T14:16:35+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 2196528.0, "num_examples": 268}, {"name": "test", "num_bytes": 245880.0, "num_examples": 30}], "download_size": 1121236, "dataset_size": 2442408.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-09-25T16:00:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lotr-book"
More Information needed
|
[
"# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lotr-book\"\n\nMore Information needed"
] |
38d22c5d7d4f86fc3a57aee433bafe981d49a5ec
|
# Dataset Card for "REPV"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
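The metadata declares `speech` as a plain float32 sequence rather than an `Audio` feature, so one hedged way to recover a waveform array per sample:

```python
import numpy as np
from datasets import load_dataset

# load the train split; field names come from the dataset_info metadata
repv = load_dataset("ssahir/REPV", split="train")

sample = repv[0]
waveform = np.asarray(sample["speech"], dtype=np.float32)
print(sample["emotion"], sample["gender"], waveform.shape)
```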
|
ssahir/REPV
|
[
"region:us"
] |
2023-09-25T14:16:35+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "emotion", "dtype": "string"}, {"name": "speech", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 380197186, "num_examples": 1628}, {"name": "test", "num_bytes": 92682047, "num_examples": 407}], "download_size": 0, "dataset_size": 472879233}}
|
2023-09-26T19:02:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "REPV"
More Information needed
|
[
"# Dataset Card for \"REPV\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"REPV\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"REPV\"\n\nMore Information needed"
] |
11b393a5545f706a357ebcd4a5285d93db176715
|
# Dataset Card for "anthropic-hh-first-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
wentingzhao/anthropic-hh-first-prompt
|
[
"region:us"
] |
2023-09-25T14:25:21+00:00
|
{"dataset_info": {"features": [{"name": "user", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 931647, "num_examples": 8552}], "download_size": 472764, "dataset_size": 931647}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T14:25:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "anthropic-hh-first-prompt"
More Information needed
|
[
"# Dataset Card for \"anthropic-hh-first-prompt\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"anthropic-hh-first-prompt\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"anthropic-hh-first-prompt\"\n\nMore Information needed"
] |
13a757493f91229dd06674281e0c9abd397d21a0
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
print(next(iter(cv_13)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_13, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
    'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'audio': {
        'path': 'et/clips/common_voice_et_18318995.mp3',
        'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
        'sampling_rate': 48000
    },
    'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
    'up_votes': 2,
    'down_votes': 0,
    'age': 'twenties',
    'gender': 'male',
    'accent': '',
    'locale': 'et',
    'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the short example after this field list).
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
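A minimal sketch of the indexing pattern described for the `audio` field above:

```python
from datasets import load_dataset

cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")

# preferred: index the row first, so only this one file is decoded
sample = cv_13[0]
array = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]

# avoid: cv_13["audio"][0] would decode and resample every file in the split
```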
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, and train portions all contain data that has been reviewed and deemed of high quality before being split.
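For example, one of the reviewed portions can be loaded directly; the split names below are an assumption about what the loader exposes:

```python
from datasets import load_dataset

# assumed split name; the loader is expected to expose
# train/validation/test/other/invalidated
cv_invalidated = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="invalidated")
print(cv_invalidated)
```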
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
afern24/common_voice_13_0_dv_preprocessed
|
[
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] |
2023-09-25T14:36:30+00:00
|
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": {"ab": ["10K<n<100K"], "ar": ["100K<n<1M"], "as": ["1K<n<10K"], "ast": ["1K<n<10K"], "az": ["n<1K"], "ba": ["100K<n<1M"], "bas": ["1K<n<10K"], "be": ["1M<n<10M"], "bg": ["10K<n<100K"], "bn": ["1M<n<10M"], "br": ["10K<n<100K"], "ca": ["1M<n<10M"], "ckb": ["100K<n<1M"], "cnh": ["1K<n<10K"], "cs": ["100K<n<1M"], "cv": ["10K<n<100K"], "cy": ["100K<n<1M"], "da": ["10K<n<100K"], "de": ["100K<n<1M"], "dv": ["10K<n<100K"], "dyu": ["n<1K"], "el": ["10K<n<100K"], "en": ["1M<n<10M"], "eo": ["1M<n<10M"], "es": ["1M<n<10M"], "et": ["10K<n<100K"], "eu": ["100K<n<1M"], "fa": ["100K<n<1M"], "fi": ["10K<n<100K"], "fr": ["100K<n<1M"], "fy-NL": ["100K<n<1M"], "ga-IE": ["10K<n<100K"], "gl": ["10K<n<100K"], "gn": ["1K<n<10K"], "ha": ["10K<n<100K"], "hi": ["10K<n<100K"], "hsb": ["1K<n<10K"], "hu": ["10K<n<100K"], "hy-AM": ["1K<n<10K"], "ia": ["10K<n<100K"], "id": ["10K<n<100K"], "ig": ["1K<n<10K"], "is": ["n<1K"], "it": ["100K<n<1M"], "ja": ["100K<n<1M"], "ka": ["10K<n<100K"], "kab": ["100K<n<1M"], "kk": ["1K<n<10K"], "kmr": ["10K<n<100K"], "ko": ["1K<n<10K"], "ky": ["10K<n<100K"], "lg": ["100K<n<1M"], "lo": ["n<1K"], "lt": ["10K<n<100K"], "lv": ["10K<n<100K"], "mdf": ["n<1K"], "mhr": ["100K<n<1M"], "mk": ["n<1K"], "ml": ["1K<n<10K"], "mn": ["10K<n<100K"], "mr": ["10K<n<100K"], "mrj": ["10K<n<100K"], "mt": ["10K<n<100K"], "myv": ["1K<n<10K"], "nan-tw": ["10K<n<100K"], "ne-NP": ["n<1K"], "nl": ["10K<n<100K"], "nn-NO": ["n<1K"], "oc": ["1K<n<10K"], "or": ["1K<n<10K"], "pa-IN": ["1K<n<10K"], "pl": ["100K<n<1M"], "pt": ["100K<n<1M"], "quy": ["n<1K"], "rm-sursilv": ["1K<n<10K"], "rm-vallader": ["1K<n<10K"], "ro": ["10K<n<100K"], "ru": ["100K<n<1M"], "rw": ["1M<n<10M"], "sah": ["1K<n<10K"], "sat": ["n<1K"], "sc": ["1K<n<10K"], "sk": ["10K<n<100K"], "skr": ["1K<n<10K"], "sl": ["10K<n<100K"], "sr": ["1K<n<10K"], "sv-SE": ["10K<n<100K"], "sw": ["100K<n<1M"], "ta": ["100K<n<1M"], "th": ["100K<n<1M"], "ti": ["n<1K"], "tig": ["n<1K"], "tk": ["1K<n<10K"], "tok": ["10K<n<100K"], "tr": ["10K<n<100K"], "tt": ["10K<n<100K"], "tw": ["n<1K"], "ug": ["10K<n<100K"], "uk": ["10K<n<100K"], "ur": ["100K<n<1M"], "uz": ["100K<n<1M"], "vi": ["10K<n<100K"], "vot": ["n<1K"], "yo": ["1K<n<10K"], "yue": ["10K<n<100K"], "zh-CN": ["100K<n<1M"], "zh-HK": ["100K<n<1M"], "zh-TW": ["100K<n<1M"]}, "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 13.0", "language_bcp47": ["ab", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "gl", "gn", "ha", "hi", "hsb", "hu", "hy-AM", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lo", "lt", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan-tw", "ne-NP", "nl", "nn-NO", "oc", "or", "pa-IN", "pl", "pt", "quy", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sr", "sv-SE", "sw", "ta", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yo", "yue", "zh-CN", "zh-HK", "zh-TW"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."}
|
2023-09-27T08:48:04+00:00
|
[
"1912.06670"
] |
[] |
TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Common Voice Corpus 13.0
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- How to use
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: Vaibhav Srivastav
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added.
Take a look at the Languages page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
Autoevaluate Leaderboard
### Languages
## How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
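A minimal sketch, assuming the corpus is hosted at the Hub id `mozilla-foundation/common_voice_13_0` (that repository id and the exact split name are assumptions; the config name `"hi"` selects Hindi, as above):

```python
from datasets import load_dataset

# Downloads and prepares the Hindi config to the local cache in one call.
# "mozilla-foundation/common_voice_13_0" is the assumed Hub repository id.
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")

print(cv_13[0]["sentence"])  # the prompted sentence of the first sample
```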
### Streaming
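In streaming mode the same call yields an iterable that fetches one sample at a time instead of downloading the full split (a sketch with the same assumed repository id as above):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset; nothing is written to disk.
cv_13_stream = load_dataset(
    "mozilla-foundation/common_voice_13_0", "hi",
    split="train", streaming=True,
)

print(next(iter(cv_13_stream)))  # fetch and decode a single sample
```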
To find out more about loading and preparing audio datasets, head over to URL
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'.
Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.
### Data Fields
'client_id' ('string'): An id for which client (voice) made the recording
'path' ('string'): The path to the audio file
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. A short sketch of this access pattern follows the field list below.
'sentence' ('string'): The sentence the user was prompted to speak
'up_votes' ('int64'): How many upvotes the audio file has received from reviewers
'down_votes' ('int64'): How many downvotes the audio file has received from reviewers
'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')
'gender' ('string'): The gender of the speaker
'accent' ('string'): Accent of the speaker
'locale' ('string'): The locale of the speaker
'segment' ('string'): Usually an empty field
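A minimal sketch of the access pattern recommended for the `audio` column above (it reuses the `cv_13` dataset from the local loading sketch):

```python
# Preferred: index the sample first, then the "audio" column, so only
# one file is decoded and resampled.
sample = cv_13[0]["audio"]
print(sample["path"], sample["sampling_rate"], len(sample["array"]))

# Avoid cv_13["audio"][0]: it would decode every audio file in the split.
```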
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train splits all contain data that has been reviewed and deemed of high quality.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer from audio data alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
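A minimal sketch of both steps (the column name `sentence` matches the data fields above; the exact set of quotation-mark characters to strip is an assumption):

```python
def normalize_sentence(example):
    sentence = example["sentence"].strip()
    # strip leading/trailing quotation marks (straight and curly variants)
    sentence = sentence.strip('"\u201c\u201d\u201e')
    # append a full stop when the sentence does not already end in punctuation
    if sentence and sentence[-1] not in ".?!":
        sentence += "."
    example["sentence"] = sentence
    return example

cv_13 = cv_13.map(normalize_sentence)  # apply to every training example
```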
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
|
[
"# Dataset Card for Common Voice Corpus 13.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n",
"# Dataset Card for Common Voice Corpus 13.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] |
[
87,
10,
124,
34,
108,
32,
4,
190,
3,
22,
36,
6,
77,
378,
145,
233,
5,
7,
4,
10,
10,
5,
5,
9,
42,
8,
41,
8,
7,
5,
6,
11
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #source_datasets-extended|common_voice #license-cc0-1.0 #arxiv-1912.06670 #region-us \n# Dataset Card for Common Voice Corpus 13.0## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - How to use\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 27141 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 17689 validated hours in 108 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.### Supported Tasks and Leaderboards\n\nThe results for models trained on the Common Voice datasets are available via the \n Autoevaluate Leaderboard### Languages",
"passage: ## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).### Local### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with 'transformers' - here.## Dataset Structure### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"passage: ### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.## Considerations for Using the Data"
] |
f274e23fe6cfed69125d1db7b7c7a9aadd2b1b5f
|
# Dataset Card for "bugurt_completion_prompts_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/bugurt_completion_prompts_8k
|
[
"region:us"
] |
2023-09-25T14:39:46+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "bugurt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9139097, "num_examples": 8360}], "download_size": 4667499, "dataset_size": 9139097}}
|
2023-09-25T14:39:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bugurt_completion_prompts_8k"
More Information needed
|
[
"# Dataset Card for \"bugurt_completion_prompts_8k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bugurt_completion_prompts_8k\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bugurt_completion_prompts_8k\"\n\nMore Information needed"
] |
27a87f6dd1b566fad2cd21a87a21ae5d89634504
|
# Dataset Card for "fquad2_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
manu/fquad2_test
|
[
"region:us"
] |
2023-09-25T15:01:09+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answers_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "is_impossible", "dtype": "bool"}], "splits": [{"name": "test", "num_bytes": 865505, "num_examples": 800}, {"name": "valid", "num_bytes": 217746, "num_examples": 200}, {"name": "test_hasAns", "num_bytes": 458114, "num_examples": 400}, {"name": "valid_hasAns", "num_bytes": 113725, "num_examples": 100}], "download_size": 785547, "dataset_size": 1655090}}
|
2024-02-01T16:50:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fquad2_test"
More Information needed
|
[
"# Dataset Card for \"fquad2_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fquad2_test\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fquad2_test\"\n\nMore Information needed"
] |
415b5faf15964cc5deb31bff34fc77ef87060e9e
|
# Dataset Card for "yahooanswerstopics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pietrolesci/yahoo_answers_topics
|
[
"region:us"
] |
2023-09-25T15:03:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "embedding_all-mpnet-base-v2", "data_files": [{"split": "train", "path": "embedding_all-mpnet-base-v2/train-*"}, {"split": "test", "path": "embedding_all-mpnet-base-v2/test-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "int32"}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "Society & Culture", "1": "Science & Mathematics", "2": "Health", "3": "Education & Reference", "4": "Computers & Internet", "5": "Sports", "6": "Business & Finance", "7": "Entertainment & Music", "8": "Family & Relationships", "9": "Politics & Government"}}}}, {"name": "question_title", "dtype": "string"}, {"name": "question_content", "dtype": "string"}, {"name": "best_answer", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "uid", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1506571390, "num_examples": 1400000}, {"name": "test", "num_bytes": 64707724, "num_examples": 60000}], "download_size": 1050038594, "dataset_size": 1571279114}, {"config_name": "embedding_all-mpnet-base-v2", "features": [{"name": "uid", "dtype": "int64"}, {"name": "embedding_all-mpnet-base-v2", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 4317600000, "num_examples": 1400000}, {"name": "test", "num_bytes": 185040000, "num_examples": 60000}], "download_size": 5407717474, "dataset_size": 4502640000}]}
|
2023-09-25T15:10:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yahooanswerstopics"
More Information needed
|
[
"# Dataset Card for \"yahooanswerstopics\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yahooanswerstopics\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yahooanswerstopics\"\n\nMore Information needed"
] |
34a74d7d3fd81258a23cab4a38b6b1c1495692dc
|
# Dataset Card for "Llama-Math-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Cris-AV/Llama-Math-format
|
[
"region:us"
] |
2023-09-25T15:11:01+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10269, "num_examples": 50}], "download_size": 0, "dataset_size": 10269}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T17:41:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Llama-Math-format"
More Information needed
|
[
"# Dataset Card for \"Llama-Math-format\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Llama-Math-format\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Llama-Math-format\"\n\nMore Information needed"
] |
15ff1d9d742f51d651185eca0cd1a963bd86624f
|
# Dataset of konoe_kanata/近江彼方/코노에카나타 (Love Live! School Idol Festival ALL STARS)
This is the dataset of konoe_kanata/近江彼方/코노에카나타 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `long_hair, purple_eyes, bangs, orange_hair, breasts, hair_ornament, wavy_hair, brown_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 888.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 405.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1316 | 944.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 736.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1316 | 1.50 GiB | [Download](https://huggingface.co/datasets/CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, midriff, navel, smile, solo, birthday, crown, detached_sleeves, double_bun, earrings, shorts, medium_breasts |
| 1 | 5 |  |  |  |  |  | 1girl, earrings, looking_at_viewer, smile, solo, witch_hat, black_gloves, blush, open_mouth, star_(symbol), upper_body |
| 2 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, smile, upper_body, hat |
| 3 | 7 |  |  |  |  |  | 1girl, holding_umbrella, looking_at_viewer, smile, solo, white_background, blush, hair_between_eyes, simple_background, upper_body, closed_mouth, ribbon, white_dress, hair_bun, long_sleeves, wristwatch |
| 4 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, simple_background, solo, white_background, blush, long_sleeves, open_mouth, plaid_skirt, smile, black_jacket, white_shirt, white_skirt, blazer, hairclip |
| 5 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, solo, jacket, pillow_hug, blush, one_eye_closed, skirt, birthday, smile |
| 6 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, bikini, cleavage, hair_flower, twin_braids, cloud, earrings, navel, outdoors, sky, ocean, smile, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | midriff | navel | smile | solo | birthday | crown | detached_sleeves | double_bun | earrings | shorts | medium_breasts | witch_hat | black_gloves | blush | open_mouth | star_(symbol) | upper_body | hat | holding_umbrella | white_background | hair_between_eyes | simple_background | closed_mouth | ribbon | white_dress | hair_bun | long_sleeves | wristwatch | nijigasaki_academy_school_uniform | plaid_skirt | black_jacket | white_shirt | white_skirt | blazer | hairclip | jacket | pillow_hug | one_eye_closed | skirt | bikini | cleavage | hair_flower | twin_braids | cloud | outdoors | sky | ocean |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:----------|:--------|:--------|:-------|:-----------|:--------|:-------------------|:-------------|:-----------|:---------|:-----------------|:------------|:---------------|:--------|:-------------|:----------------|:-------------|:------|:-------------------|:-------------------|:--------------------|:--------------------|:---------------|:---------|:--------------|:-----------|:---------------|:-------------|:------------------------------------|:--------------|:---------------|:--------------|:--------------|:---------|:-----------|:---------|:-------------|:-----------------|:--------|:---------|:-----------|:--------------|:--------------|:--------|:-----------|:------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | | X | X | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | | | X | X | | | | | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | | X | X | | | | | | | | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | X | | | X | X | | | | | | | | | | X | X | | | | | X | | X | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | X | | | X | X | X | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | X | X | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/konoe_kanata_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T15:28:56+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T03:54:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of konoe\_kanata/近江彼方/코노에카나타 (Love Live! School Idol Festival ALL STARS)
================================================================================
This is the dataset of konoe\_kanata/近江彼方/코노에카나타 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'long\_hair, purple\_eyes, bangs, orange\_hair, breasts, hair\_ornament, wavy\_hair, brown\_hair, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2277caf98d417a8d6cdb13962d9fbfab92da9a29
|
# Dataset Card for "data_modelGenerated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
serhatkurt/data_modelGenerated
|
[
"region:us"
] |
2023-09-25T15:45:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1982154.0, "num_examples": 16}], "download_size": 1983278, "dataset_size": 1982154.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-25T15:50:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_modelGenerated"
More Information needed
|
[
"# Dataset Card for \"data_modelGenerated\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_modelGenerated\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_modelGenerated\"\n\nMore Information needed"
] |
d02d26674ae59f70e2ba55f5cdc4d56d757de14d
|
# Dataset Card for "babylm_10M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nthngdy/babylm_10M
|
[
"region:us"
] |
2023-09-25T15:52:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55441912.303940535, "num_examples": 1015494}], "download_size": 36288832, "dataset_size": 55441912.303940535}}
|
2023-09-25T15:52:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "babylm_10M"
More Information needed
|
[
"# Dataset Card for \"babylm_10M\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"babylm_10M\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"babylm_10M\"\n\nMore Information needed"
] |
72b3df8799244d9b2ce692988b07b5295ab2dbf6
|
_Note: this database has been uploaded by Hugging Face staff. Please see the original paper, repo, and hosted database below for any questions._
# Google DeepMind AlphaMissense Database
<img src="https://www.science.org/cms/10.1126/science.adg7492/asset/e028b855-19a9-40ab-a39f-759afedb5b22/assets/images/large/science.adg7492-fa.jpg" alt="drawing" width="600"/>
- **Paper:** https://www.science.org/doi/10.1126/science.adg7492
- **Github Repo:** https://github.com/google-deepmind/alphamissense
- **Original Database:** https://console.cloud.google.com/storage/browser/dm_alphamissense
## File descriptions
* **AlphaMissense_hg19.tsv.gz, AlphaMissense_hg38.tsv.gz**: Predictions for all possible single nucleotide missense variants (71M) from 19k human
protein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These
files are sorted by genomic coordinates.
* **AlphaMissense_gene_hg19.tsv.gz, AlphaMissense_gene_hg38.tsv.gz**: Gene-level average predictions, which were computed by taking the mean
alphamissense_pathogenicity over all possible missense variants in a transcript
(canonical transcript).
* **AlphaMissense_aa_substitutions.tsv.gz**: Predictions for all possible single amino acid substitutions within 20k UniProt canonical
isoforms (216M protein variants). These are a superset of the amino acid substitutions
induced by single nucleotide missense variants. This file uses UniProt accession
numbers for proteins and does not have genomic coordinates.
* **AlphaMissense_isoforms_hg38.tsv.gz**: Predictions for all possible missense variants for 60k non-canonical transcript isoforms
(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.
Predictions for non-canonical isoforms were not thoroughly evaluated and should be
used with caution. This file is sorted by genomic coordinates.
* **AlphaMissense_isoforms_aa_substitutions.tsv.gz**: Predictions for all possible single amino acid substitutions for 60k non-canonical
transcript isoforms (GENCODE V32). These are a superset of the amino acid
substitutions induced by single nucleotide missense variants. This file has transcript_id
but no UniProt accession numbers.
All transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).
Canonical transcripts are defined as described in the publication.
All files are compressed with bgzip.
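A minimal sketch of loading one of these files, assuming it has been downloaded locally (bgzip output is gzip-compatible, so pandas can decompress it transparently; whether the file carries leading `#` comment lines is an assumption):

```python
import pandas as pd

am = pd.read_csv(
    "AlphaMissense_gene_hg38.tsv.gz",  # gene-level file, small enough for memory
    sep="\t",
    comment="#",  # skip any leading license/comment lines, if present
)
print(am.head())
```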
## Column descriptions
**Note**: Not all columns are present in every file.
- **CHROM**
The chromosome as a string: `chr<N>`, where N is one of [1-22, X, Y, M].
- **POS**
Genome position (1-based).
- **REF**
The reference nucleotide (GRCh38.p13 for hg38, GRCh37.p13 for hg19).
- **ALT**
The alternative nucleotide.
- **genome**
The genome build, hg38 or hg19.
- **uniprot_id**
UniProtKB accession number of the protein in which the variant induces a single amino-acid substitution (UniProt release 2021_02).
- **transcript_id**
Ensembl transcript ID from GENCODE V27 (hg19) or V32 (hg38).
- **protein_variant**
Amino acid change induced by the alternative allele, in the format `<Reference amino acid><POS_aa><Alternative amino acid>` (e.g. V2L). POS_aa is the 1-based position of the residue within the protein amino acid sequence.
- **am_pathogenicity**
Calibrated AlphaMissense pathogenicity scores (ranging between 0 and 1), which can be interpreted as the predicted probability of a variant being clinically pathogenic.
- **am_class**
Classification of the `protein_variant` into one of three discrete categories: 'likely_benign', 'likely_pathogenic', or 'ambiguous'. These are derived using the following thresholds: 'likely_benign' if `alphamissense_pathogenicity` < 0.34; 'likely_pathogenic' if `alphamissense_pathogenicity` > 0.564; and 'ambiguous' otherwise.
- **mean_am_pathogenicity**
The average `alphamissense_pathogenicity` of all missense variants per transcript.
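A minimal sketch of the `am_class` thresholds described above, written as a plain Python function:

```python
def am_class(am_pathogenicity: float) -> str:
    """Map a calibrated pathogenicity score to its discrete class."""
    if am_pathogenicity < 0.34:
        return "likely_benign"
    if am_pathogenicity > 0.564:
        return "likely_pathogenic"
    return "ambiguous"

assert am_class(0.20) == "likely_benign"
assert am_class(0.45) == "ambiguous"
assert am_class(0.90) == "likely_pathogenic"
```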
## Citation/license and disclaimer
AlphaMissense Database Copyright (2023) DeepMind Technologies Limited. All predictions are provided for non-commercial research use only under [CC BY-NC-SA license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
Researchers interested in predictions not yet provided, and for non-commercial use, can send an expression of interest to [[email protected]](mailto:[email protected]).
## Disclaimer
The AlphaMissense Database and other information provided on this site are for theoretical modelling only; caution should be exercised in their use. It is provided “as-is” without any warranty of any kind, whether express or implied. For clarity, no warranty is given that use of the information shall not infringe the rights of any third party. The information provided is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. The predictions in the AlphaMissense Database are predictions only, with varying levels of confidence and should be interpreted carefully.
## Citation
If you use this resource for your research please cite the following publication:
“Accurate proteome-wide missense variant effect prediction with AlphaMissense”
Jun Cheng, Guido Novati, Joshua Pan, Clare Bycroft, Akvilė Žemgulytė, Taylor Applebaum, Alexander Pritzel, Lai Hong Wong, Michal Zielinski, Tobias Sargeant, Rosalia G. Schneider, Andrew W. Senior, John Jumper, Demis Hassabis, Pushmeet Kohli, Žiga Avsec
Use of the AlphaMissense Database is subject to [Google Cloud Platform Terms of Service](https://cloud.google.com/terms).
|
katielink/dm_alphamissense
|
[
"license:cc-by-nc-sa-4.0",
"biology",
"region:us"
] |
2023-09-25T15:52:42+00:00
|
{"license": "cc-by-nc-sa-4.0", "tags": ["biology"], "configs": [{"config_name": "gene_hg19", "data_files": "AlphaMissense_gene_hg19.csv"}, {"config_name": "gene_hg38", "data_files": "AlphaMissense_gene_hg38.csv"}]}
|
2023-10-05T01:10:28+00:00
|
[] |
[] |
TAGS
#license-cc-by-nc-sa-4.0 #biology #region-us
|
_Note: this database has been uploaded by Hugging Face staff. Please see the original paper, repo, and hosted database below for any questions._
# Google DeepMind AlphaMissense Database
<img src="URL alt="drawing" width="600"/>
- Paper: URL
- Github Repo: URL
- Original Database: URL
## File descriptions
* AlphaMissense_hg19.URL, AlphaMissense_hg38.URL: Predictions for all possible single nucleotide missense variants (71M) from 19k human
protein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These
files are sorted by genomic coordinates.
* AlphaMissense_gene_hg19.URL, AlphaMissense_gene_hg38.URL: Gene-level average predictions, which were computed by taking the mean
alphamissense_pathogenicity over all possible missense variants in a transcript
(canonical transcript).
* AlphaMissense_aa_substitutions.URL: Predictions for all possible single amino acid substitutions within 20k UniProt canonical
isoforms (216M protein variants). These are a superset of the amino acid substitutions
induced by single nucleotide missense variants. This file uses UniProt accession
numbers for proteins and does not have genomic coordinates.
* AlphaMissense_isoforms_hg38.URL: Predictions for all possible missense variants for 60k non-canonical transcript isoforms
(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.
Predictions for non-canonical isoforms were not thoroughly evaluated and should be
used with caution. This file is sorted by genomic coordinates.
* AlphaMissense_isoforms_aa_substitutions.URL: Predictions for all possible single amino acid substitutions for 60k non-canonical
transcript isoforms (GENCODE V32). These are a superset of the amino acid
substitutions induced by single nucleotide missense variants. This file has transcript_id
but no UniProt accession numbers.
All transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).
Canonical transcripts are defined as described in the publication.
All files are compressed with bgzip.
## Column descriptions
Note: Not all columns are present in every file.
- CHROM
The chromosome as a string: 'chr<N>', where N is one of [1-22, X, Y, M].
- POS
Genome position (1-based).
- REF
The reference nucleotide (GRCh38.p13 for hg38, GRCh37.p13 for hg19).
- ALT
The alternative nucleotide.
- genome
The genome build, hg38 or hg19.
- uniprot_id
UniProtKB accession number of the protein in which the variant induces a single amino-acid substitution (UniProt release 2021_02).
- transcript_id
Ensembl transcript ID from GENCODE V27 (hg19) or V32 (hg38).
- protein_variant
Amino acid change induced by the alternative allele, in the format '<Reference amino acid><POS_aa><Alternative amino acid>' (e.g. V2L). POS_aa is the 1-based position of the residue within the protein amino acid sequence.
- am_pathogenicity
Calibrated AlphaMissense pathogenicity scores (ranging between 0 and 1), which can be interpreted as the predicted probability of a variant being clinically pathogenic.
- am_class
Classification of the 'protein_variant' into one of three discrete categories: 'likely_benign', 'likely_pathogenic', or 'ambiguous'. These are derived using the following thresholds: 'likely_benign' if 'alphamissense_pathogenicity' < 0.34; 'likely_pathogenic' if 'alphamissense_pathogenicity' > 0.564; and 'ambiguous' otherwise.
- mean_am_pathogenicity
The average 'alphamissense_pathogenicity' of all missense variants per transcript.
## Citation/license and disclaimer
AlphaMissense Database Copyright (2023) DeepMind Technologies Limited. All predictions are provided for non-commercial research use only under CC BY-NC-SA license.
Researchers interested in predictions not yet provided, and for non-commercial use, can send an expression of interest to alphamissense@URL.
## Disclaimer
The AlphaMissense Database and other information provided on this site are for theoretical modelling only; caution should be exercised in their use. It is provided “as-is” without any warranty of any kind, whether express or implied. For clarity, no warranty is given that use of the information shall not infringe the rights of any third party. The information provided is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. The predictions in the AlphaMissense Database are predictions only, with varying levels of confidence and should be interpreted carefully.
If you use this resource for your research please cite the following publication:
“Accurate proteome-wide missense variant effect prediction with AlphaMissense”
Jun Cheng, Guido Novati, Joshua Pan, Clare Bycroft, Akvilė Žemgulytė, Taylor Applebaum, Alexander Pritzel, Lai Hong Wong, Michal Zielinski, Tobias Sargeant, Rosalia G. Schneider, Andrew W. Senior, John Jumper, Demis Hassabis, Pushmeet Kohli, Žiga Avsec
Use of the AlphaMissense Database is subject to Google Cloud Platform Terms of Service.
|
[
"# Google DeepMind AlphaMissense Database\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n- Paper: URL\n- Github Repo: URL\n- Original Database: URL",
"## File descriptions\n* AlphaMissense_hg19.URL, AlphaMissense_hg38.URL: Predictions for all possible single nucleotide missense variants (71M) from 19k human\nprotein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These\nfiles are sorted by genomic coordinates.\n* AlphaMissense_gene_hg19.URL, AlphaMissense_gene_hg38.URL: Gene-level average predictions, which were computed by taking the mean\nalphamissense_pathogenicity over all possible missense variants in a transcript\n(canonical transcript).\n* AlphaMissense_aa_substitutions.URL: Predictions for all possible single amino acid substitutions within 20k UniProt canonical\nisoforms (216M protein variants). These are a superset of the amino acid substitutions\ninduced by single nucleotide missense variants. This file uses UniProt accession\nnumbers for proteins and does not have genomic coordinates.\n* AlphaMissense_isoforms_hg38.URL: Predictions for all possible missense variants for 60k non-canonical transcript isoforms\n(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.\nPredictions for non-canonical isoforms were not thoroughly evaluated and should be\nused with caution. This file is sorted by genomic coordinates.\n* AlphaMissense_isoforms_aa_substitutions.URL: Predictions for all possible single amino acid substitutions for 60k non-canonical\ntranscript isoforms (GENCODE V32). These are a superset of the amino acid\nsubstitutions induced by single nucleotide missense variants.This file has transcript_id\nbut no UniProt accession numbers.\n\nAll transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).\n\nCanonical transcripts are defined as described in the publication.\n\nAll files are compressed with bgzip.",
"## Column descriptions\n\nNote: Not all columns are present in every file.\n\n- CHROM \n The chromosome as a string: 'chr<N>', where N is one of [1-22, X, Y, M].\n\n- POS \n Genome position (1-based).\n\n- REF \n The reference nucleotide (GRCh38.p13 for hg38, GRCh37.p13 for hg19).\n\n- ALT \n The alternative nucleotide.\n\n- genome \n The genome build, hg38 or hg19.\n\n- uniprot_id \n UniProtKB accession number of the protein in which the variant induces a single amino-acid substitution (UniProt release 2021_02).\n\n- transcript_id \n Ensembl transcript ID from GENCODE V27 (hg19) or V32 (hg38).\n\n- protein_variant \n Amino acid change induced by the alternative allele, in the format '<Reference amino acid><POS_aa><Alternative amino acid>' (e.g. V2L). POS_aa is the 1-based position of the residue within the protein amino acid sequence.\n\n- am_pathogenicity \n Calibrated AlphaMissense pathogenicity scores (ranging between 0 and 1), which can be interpreted as the predicted probability of a variant being clinically pathogenic.\n\n- am_class \n Classification of the 'protein_variant' into one of three discrete categories: 'likely_benign', 'likely_pathogenic', or 'ambiguous'. These are derived using the following thresholds: 'likely_benign' if 'alphamissense_pathogenicity' < 0.34; 'likely_pathogenic' if 'alphamissense_pathogenicity' > 0.564; and 'ambiguous' otherwise.\n\n- mean_am_pathogenicity \n The average 'alphamissense_pathogenicity' of all missense variants per transcript.\n\n/license and disclaimer\n\nAlphaMissense Database Copyright (2023) DeepMind Technologies Limited. All predictions are provided for non-commercial research use only under CC BY-NC-SA license. \nResearchers interested in predictions not yet provided, and for non-commercial use, can send an expression of interest to alphamissense@URL.",
"## Disclaimer\n\nThe AlphaMissense Database and other information provided on this site is for theoretical modelling only, caution should be exercised in use. It is provided “as-is” without any warranty of any kind, whether express or implied. For clarity, no warranty is given that use of the information shall not infringe the rights of any third party. The information provided is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. The predictions in the AlphaMissense Database are predictions only, with varying levels of confidence and should be interpreted carefully.\n\nIf you use this resource for your research please cite the following publication: \n“Accurate proteome-wide missense variant effect prediction with AlphaMissense” \nJun Cheng, Guido Novati, Joshua Pan, Clare Bycroft, Akvilė Žemgulytė, Taylor Applebaum, Alexander Pritzel, Lai Hong Wong, Michal Zielinski, Tobias Sargeant, Rosalia G. Schneider, Andrew W. Senior, John Jumper, Demis Hassabis, Pushmeet Kohli, Žiga Avsec\n\nUse of the AlphaMissense Database is subject to Google Cloud Platform Terms of Service."
] |
[
"TAGS\n#license-cc-by-nc-sa-4.0 #biology #region-us \n",
"# Google DeepMind AlphaMissense Database\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n- Paper: URL\n- Github Repo: URL\n- Original Database: URL",
"## File descriptions\n* AlphaMissense_hg19.URL, AlphaMissense_hg38.URL: Predictions for all possible single nucleotide missense variants (71M) from 19k human\nprotein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These\nfiles are sorted by genomic coordinates.\n* AlphaMissense_gene_hg19.URL, AlphaMissense_gene_hg38.URL: Gene-level average predictions, which were computed by taking the mean\nalphamissense_pathogenicity over all possible missense variants in a transcript\n(canonical transcript).\n* AlphaMissense_aa_substitutions.URL: Predictions for all possible single amino acid substitutions within 20k UniProt canonical\nisoforms (216M protein variants). These are a superset of the amino acid substitutions\ninduced by single nucleotide missense variants. This file uses UniProt accession\nnumbers for proteins and does not have genomic coordinates.\n* AlphaMissense_isoforms_hg38.URL: Predictions for all possible missense variants for 60k non-canonical transcript isoforms\n(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.\nPredictions for non-canonical isoforms were not thoroughly evaluated and should be\nused with caution. This file is sorted by genomic coordinates.\n* AlphaMissense_isoforms_aa_substitutions.URL: Predictions for all possible single amino acid substitutions for 60k non-canonical\ntranscript isoforms (GENCODE V32). These are a superset of the amino acid\nsubstitutions induced by single nucleotide missense variants.This file has transcript_id\nbut no UniProt accession numbers.\n\nAll transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).\n\nCanonical transcripts are defined as described in the publication.\n\nAll files are compressed with bgzip.",
"## Column descriptions\n\nNote: Not all columns are present in every file.\n\n- CHROM \n The chromosome as a string: 'chr<N>', where N is one of [1-22, X, Y, M].\n\n- POS \n Genome position (1-based).\n\n- REF \n The reference nucleotide (GRCh38.p13 for hg38, GRCh37.p13 for hg19).\n\n- ALT \n The alternative nucleotide.\n\n- genome \n The genome build, hg38 or hg19.\n\n- uniprot_id \n UniProtKB accession number of the protein in which the variant induces a single amino-acid substitution (UniProt release 2021_02).\n\n- transcript_id \n Ensembl transcript ID from GENCODE V27 (hg19) or V32 (hg38).\n\n- protein_variant \n Amino acid change induced by the alternative allele, in the format '<Reference amino acid><POS_aa><Alternative amino acid>' (e.g. V2L). POS_aa is the 1-based position of the residue within the protein amino acid sequence.\n\n- am_pathogenicity \n Calibrated AlphaMissense pathogenicity scores (ranging between 0 and 1), which can be interpreted as the predicted probability of a variant being clinically pathogenic.\n\n- am_class \n Classification of the 'protein_variant' into one of three discrete categories: 'likely_benign', 'likely_pathogenic', or 'ambiguous'. These are derived using the following thresholds: 'likely_benign' if 'alphamissense_pathogenicity' < 0.34; 'likely_pathogenic' if 'alphamissense_pathogenicity' > 0.564; and 'ambiguous' otherwise.\n\n- mean_am_pathogenicity \n The average 'alphamissense_pathogenicity' of all missense variants per transcript.\n\n/license and disclaimer\n\nAlphaMissense Database Copyright (2023) DeepMind Technologies Limited. All predictions are provided for non-commercial research use only under CC BY-NC-SA license. \nResearchers interested in predictions not yet provided, and for non-commercial use, can send an expression of interest to alphamissense@URL.",
"## Disclaimer\n\nThe AlphaMissense Database and other information provided on this site is for theoretical modelling only, caution should be exercised in use. It is provided “as-is” without any warranty of any kind, whether express or implied. For clarity, no warranty is given that use of the information shall not infringe the rights of any third party. The information provided is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. The predictions in the AlphaMissense Database are predictions only, with varying levels of confidence and should be interpreted carefully.\n\nIf you use this resource for your research please cite the following publication: \n“Accurate proteome-wide missense variant effect prediction with AlphaMissense” \nJun Cheng, Guido Novati, Joshua Pan, Clare Bycroft, Akvilė Žemgulytė, Taylor Applebaum, Alexander Pritzel, Lai Hong Wong, Michal Zielinski, Tobias Sargeant, Rosalia G. Schneider, Andrew W. Senior, John Jumper, Demis Hassabis, Pushmeet Kohli, Žiga Avsec\n\nUse of the AlphaMissense Database is subject to Google Cloud Platform Terms of Service."
] |
[
22,
44,
479,
525,
266
] |
[
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #biology #region-us \n# Google DeepMind AlphaMissense Database\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n- Paper: URL\n- Github Repo: URL\n- Original Database: URL",
"passage: ## File descriptions\n* AlphaMissense_hg19.URL, AlphaMissense_hg38.URL: Predictions for all possible single nucleotide missense variants (71M) from 19k human\nprotein-coding genes (canonical transcripts) for both hg19 and hg38 coordinates. These\nfiles are sorted by genomic coordinates.\n* AlphaMissense_gene_hg19.URL, AlphaMissense_gene_hg38.URL: Gene-level average predictions, which were computed by taking the mean\nalphamissense_pathogenicity over all possible missense variants in a transcript\n(canonical transcript).\n* AlphaMissense_aa_substitutions.URL: Predictions for all possible single amino acid substitutions within 20k UniProt canonical\nisoforms (216M protein variants). These are a superset of the amino acid substitutions\ninduced by single nucleotide missense variants. This file uses UniProt accession\nnumbers for proteins and does not have genomic coordinates.\n* AlphaMissense_isoforms_hg38.URL: Predictions for all possible missense variants for 60k non-canonical transcript isoforms\n(hg38, GENCODE V32). This file has transcript_id but no UniProt accession numbers.\nPredictions for non-canonical isoforms were not thoroughly evaluated and should be\nused with caution. This file is sorted by genomic coordinates.\n* AlphaMissense_isoforms_aa_substitutions.URL: Predictions for all possible single amino acid substitutions for 60k non-canonical\ntranscript isoforms (GENCODE V32). These are a superset of the amino acid\nsubstitutions induced by single nucleotide missense variants.This file has transcript_id\nbut no UniProt accession numbers.\n\nAll transcript annotations are based on GENCODE V27 (hg19) or V32 (hg38).\n\nCanonical transcripts are defined as described in the publication.\n\nAll files are compressed with bgzip."
] |
513fb14c5f75c0e44ec714fa5961a8a2a20ffda4
|
# About dataset
Source: http://finugorbib.com/index.html
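A minimal loading sketch follows (an assumption, not part of the original card): the repo id is taken from this dataset's path, and the column names (`udm`, `ru`, `source`) follow the metadata below.
```python
# A minimal sketch, assuming the repo id below (taken from this dataset's
# path) and the columns listed in the metadata: "udm", "ru", "source".
from datasets import load_dataset

ds = load_dataset("udmurtNLP/udmurt-bible-parallel-corpora", split="train")
for row in ds.select(range(3)):
    print(row["udm"], "->", row["ru"], f"({row['source']})")
```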
|
udmurtNLP/udmurt-bible-parallel-corpora
|
[
"task_categories:translation",
"size_categories:10K<n<100K",
"language:udm",
"region:us"
] |
2023-09-25T15:58:01+00:00
|
{"language": ["udm"], "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "dataset_info": {"features": [{"name": "udm", "dtype": "string"}, {"name": "ru", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15350364, "num_examples": 33752}], "download_size": 6172011, "dataset_size": 15350364}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-28T15:31:55+00:00
|
[] |
[
"udm"
] |
TAGS
#task_categories-translation #size_categories-10K<n<100K #language-Udmurt #region-us
|
# About dataset
Source: URL
|
[
"# About dataset\n\nSource: URL"
] |
[
"TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-Udmurt #region-us \n",
"# About dataset\n\nSource: URL"
] |
[
34,
7
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-Udmurt #region-us \n# About dataset\n\nSource: URL"
] |
3270ce4605647d8bc501c1c4c8de5d9eac2bc08a
|
# Dataset of maruyama_aya/丸山彩/마루야마아야 (BanG Dream!)
This is the dataset of maruyama_aya/丸山彩/마루야마아야 (BanG Dream!), containing 500 images and their tags.
The core tags of this character are `pink_hair, bangs, pink_eyes, bow, twintails, ribbon, sidelocks, long_hair, hair_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 719.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maruyama_aya_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 393.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maruyama_aya_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1261 | 879.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maruyama_aya_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 626.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maruyama_aya_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1261 | 1.26 GiB | [Download](https://huggingface.co/datasets/CyberHarem/maruyama_aya_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/maruyama_aya_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
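The IMG+TXT packages can also be used without waifuc. Below is a minimal sketch; the paired image/`.txt` tag-file layout inside the archive is an assumption, not something this card documents.
```python
# A minimal sketch (layout assumed, not documented by this card): download
# an IMG+TXT package and print the tag text stored next to each image.
import os
import zipfile
from huggingface_hub import hf_hub_download

zip_file = hf_hub_download(
    repo_id='CyberHarem/maruyama_aya_bangdream',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# assumed pairing: <name>.png / <name>.txt holding the tags
for name in sorted(os.listdir(dataset_dir)):
    if name.endswith('.txt'):
        with open(os.path.join(dataset_dir, name), encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```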
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, blush, looking_at_viewer, open_mouth, short_sleeves, solo, collarbone, polka_dot_shirt, yellow_shirt, :d, pink_bow, ;d, one_eye_closed, pink_pants |
| 1 | 18 |  |  |  |  |  | 1girl, long_sleeves, solo, white_sailor_collar, hanasakigawa_school_uniform, looking_at_viewer, blush, red_ribbon, neck_ribbon, sailor_dress, white_background, brown_dress, open_mouth, simple_background, double-breasted, upper_body, :d, hair_down, medium_hair |
| 2 | 8 |  |  |  |  |  | 1girl, :d, bare_shoulders, holding_microphone, open_mouth, pink_choker, solo, collarbone, frilled_dress, looking_at_viewer, strapless_dress, wrist_bow, blush, hair_bow, pink_bowtie, pink_dress, simple_background, white_ribbon, upper_teeth_only, white_background, index_finger_raised, medium_breasts, wrist_ribbon |
| 3 | 13 |  |  |  |  |  | 1girl, bare_shoulders, collarbone, pink_choker, solo, white_background, looking_at_viewer, simple_background, smile, strapless_dress, blush, pink_bowtie, pink_dress, hair_bow, upper_body, white_ribbon, wrist_bow, frilled_dress, breasts, hands_up, open_mouth, teeth |
| 4 | 14 |  |  |  |  |  | 1girl, solo, looking_at_viewer, open_mouth, white_gloves, :d, blush, striped_bow, bowtie, hair_bow, hair_ornament, frills, neck_ribbon, pink_bow, striped_ribbon, holding_microphone, white_ribbon, dress, pink_ribbon, white_background, white_bow, back_bow, flower_earrings, upper_body, simple_background, sparkle |
| 5 | 11 |  |  |  |  |  | 1girl, bowtie, drill_hair, solo, striped_ribbon, white_gloves, blue_headwear, macaron, striped_bow, earrings, hat_ribbon, looking_at_viewer, neck_ribbon, open_mouth, strawberry, center_frills, food-themed_hair_ornament, hair_bow, :d, alternate_hairstyle, blush, heart, top_hat, corset, upper_body, frilled_dress, pink_bow, red_ribbon, frilled_hat, mini_hat, pink_ribbon, sleeveless, white_background |
| 6 | 10 |  |  |  |  |  | 1girl, hair_flower, pink_ribbon, solo, blush, detached_collar, earrings, open_mouth, wrist_cuffs, looking_at_viewer, neck_ribbon, yellow_flower, :d, upper_body, dated, frilled_dress, pink_dress, pom_pom_(clothes), bare_shoulders, character_name, happy_birthday, holding, white_background |
| 7 | 6 |  |  |  |  |  | 1girl, :d, blush, long_sleeves, looking_at_viewer, open_mouth, see-through_sleeves, solo, white_ribbon, frilled_dress, hairband, white_choker, white_dress, center_frills, collarbone, frilled_sleeves, hair_bow, hand_on_own_chest, ribbon_choker |
| 8 | 5 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, solo, :d, blush, floral_print, open_mouth, white_background, frilled_sleeves, hair_bow, hair_flower, petals, pink_dress, red_ribbon, simple_background, cherry_blossoms, half_updo, japanese_clothes, neck_ribbon, obi, red_bow, upper_body, wide_sleeves |
| 9 | 9 |  |  |  |  |  | 1girl, blush, large_breasts, looking_at_viewer, navel, nipples, open_mouth, pussy, solo, completely_nude, simple_background, collarbone, stomach, cleft_of_venus, sweat, uncensored, white_background, :d, groin, shiny_skin, spread_legs, standing |
| 10 | 6 |  |  |  |  |  | 1girl, hairclip, long_sleeves, looking_at_viewer, solo, x_hair_ornament, blue_jacket, blush, crop_top, cropped_jacket, cross-laced_clothes, midriff, navel, pink_choker, pink_skirt, :d, frilled_skirt, necklace, open_mouth, pink_ribbon, blue_ribbon, collarbone, heart_earrings, holding_camera, open_jacket, side_ponytail, socks, striped_ribbon, white_shirt |
| 11 | 9 |  |  |  |  |  | candy_hair_ornament, frills, jack-o'-lantern, looking_at_viewer, smile, star_(symbol), 1girl, blush, double_bun, earrings, purple_gloves, solo, head_wings, short_sleeves, lollipop, ghost, holding_candy, tongue_out, vertical_stripes, alternate_hairstyle, black_bow, bowtie, dress, hair_bow, halloween_costume, mismatched_legwear, one_eye_closed, open_mouth, polka_dot_bow, purple_bow, upper_body |
| 12 | 5 |  |  |  |  |  | 1girl, blush, cardigan, long_sleeves, smile, solo, white_shirt, looking_at_viewer, open_clothes, belt, breasts, cherry_blossoms, collared_shirt, day, floral_print, open_mouth, outdoors, petals, print_skirt, white_skirt, ;d, bag, blue_sky, building, one_eye_closed, pink_skirt, yellow_jacket |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | open_mouth | short_sleeves | solo | collarbone | polka_dot_shirt | yellow_shirt | :d | pink_bow | ;d | one_eye_closed | pink_pants | long_sleeves | white_sailor_collar | hanasakigawa_school_uniform | red_ribbon | neck_ribbon | sailor_dress | white_background | brown_dress | simple_background | double-breasted | upper_body | hair_down | medium_hair | bare_shoulders | holding_microphone | pink_choker | frilled_dress | strapless_dress | wrist_bow | hair_bow | pink_bowtie | pink_dress | white_ribbon | upper_teeth_only | index_finger_raised | medium_breasts | wrist_ribbon | smile | breasts | hands_up | teeth | white_gloves | striped_bow | bowtie | hair_ornament | frills | striped_ribbon | dress | pink_ribbon | white_bow | back_bow | flower_earrings | sparkle | drill_hair | blue_headwear | macaron | earrings | hat_ribbon | strawberry | center_frills | food-themed_hair_ornament | alternate_hairstyle | heart | top_hat | corset | frilled_hat | mini_hat | sleeveless | hair_flower | detached_collar | wrist_cuffs | yellow_flower | dated | pom_pom_(clothes) | character_name | happy_birthday | holding | see-through_sleeves | hairband | white_choker | white_dress | frilled_sleeves | hand_on_own_chest | ribbon_choker | floral_print | petals | cherry_blossoms | half_updo | japanese_clothes | obi | red_bow | wide_sleeves | large_breasts | navel | nipples | pussy | completely_nude | stomach | cleft_of_venus | sweat | uncensored | groin | shiny_skin | spread_legs | standing | hairclip | x_hair_ornament | blue_jacket | crop_top | cropped_jacket | cross-laced_clothes | midriff | pink_skirt | frilled_skirt | necklace | blue_ribbon | heart_earrings | holding_camera | open_jacket | side_ponytail | socks | white_shirt | candy_hair_ornament | jack-o'-lantern | star_(symbol) | double_bun | purple_gloves | head_wings | lollipop | ghost | holding_candy | tongue_out | vertical_stripes | black_bow | halloween_costume | mismatched_legwear | polka_dot_bow | purple_bow | cardigan | open_clothes | belt | collared_shirt | day | outdoors | print_skirt | white_skirt | bag | blue_sky | building | yellow_jacket |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:--------------------|:-------------|:----------------|:-------|:-------------|:------------------|:---------------|:-----|:-----------|:-----|:-----------------|:-------------|:---------------|:----------------------|:------------------------------|:-------------|:--------------|:---------------|:-------------------|:--------------|:--------------------|:------------------|:-------------|:------------|:--------------|:-----------------|:---------------------|:--------------|:----------------|:------------------|:------------|:-----------|:--------------|:-------------|:---------------|:-------------------|:----------------------|:-----------------|:---------------|:--------|:----------|:-----------|:--------|:---------------|:--------------|:---------|:----------------|:---------|:-----------------|:--------|:--------------|:------------|:-----------|:------------------|:----------|:-------------|:----------------|:----------|:-----------|:-------------|:-------------|:----------------|:----------------------------|:----------------------|:--------|:----------|:---------|:--------------|:-----------|:-------------|:--------------|:------------------|:--------------|:----------------|:--------|:--------------------|:-----------------|:-----------------|:----------|:----------------------|:-----------|:---------------|:--------------|:------------------|:--------------------|:----------------|:---------------|:---------|:------------------|:------------|:-------------------|:------|:----------|:---------------|:----------------|:--------|:----------|:--------|:------------------|:----------|:-----------------|:--------|:-------------|:--------|:-------------|:--------------|:-----------|:-----------|:------------------|:--------------|:-----------|:-----------------|:----------------------|:----------|:-------------|:----------------|:-----------|:--------------|:-----------------|:-----------------|:--------------|:----------------|:--------|:--------------|:----------------------|:------------------|:----------------|:-------------|:----------------|:-------------|:-----------|:--------|:----------------|:-------------|:-------------------|:------------|:--------------------|:---------------------|:----------------|:-------------|:-----------|:---------------|:-------|:-----------------|:------|:-----------|:--------------|:--------------|:------|:-----------|:-----------|:----------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 18 |  |  |  |  |  | X | X | X | X | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | | X | X | | | X | | | | | | | | | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | X | X | X | | X | X | | | | | | | | | | | | | | X | | X | | X | | | X | | X | X | X | X | X | X | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | | X | | | | X | X | | | | | | | | X | | X | | X | | X | | | | X | | | | | X | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | X | X | X | | X | | | | X | X | | | | | | | X | X | | X | | | | X | | | | | | X | | | X | | | | | | | | | | | | X | X | X | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | X | X | X | | X | | | | X | | | | | | | | | X | | X | | | | X | | | X | | | X | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | X | X | | X | X | | | X | | | | | X | | | | | | | | | | | | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | X | | | | | X | | | X | X | | X | | X | | X | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | X | X | X | | X | X | | | X | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 6 |  |  |  |  |  | X | X | X | X | | X | X | | | X | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 9 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | X | | | | | | X | | X | | X | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 12 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/maruyama_aya_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T16:01:42+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T17:16:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of maruyama\_aya/丸山彩/마루야마아야 (BanG Dream!)
=================================================
This is the dataset of maruyama\_aya/丸山彩/마루야마아야 (BanG Dream!), containing 500 images and their tags.
The core tags of this character are 'pink\_hair, bangs, pink\_eyes, bow, twintails, ribbon, sidelocks, long\_hair, hair\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
80d0c6910582a4b5841604df2e88541034fcffdb
|
# Dataset of nakasu_kasumi/中須かすみ/나카스카스미 (Love Live! School Idol Festival ALL STARS)
This is the dataset of nakasu_kasumi/中須かすみ/나카스카스미 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `short_hair, bangs, brown_hair, red_eyes, bob_cut, bow, hair_ornament, ribbon, grey_hair, asymmetrical_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 826.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 383.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1320 | 907.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 689.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1320 | 1.43 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
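As a small follow-up (a sketch, not part of the card), the loaded items can be filtered by the tag names shown in the cluster tables below:
```python
# A minimal sketch: keep only items whose tags include a given tag name
# (tag names as printed by the loop above, e.g. 'smile').
from waifuc.source import LocalSource

source = LocalSource('dataset_dir')
smiling = [item for item in source if 'smile' in item.meta['tags']]
print(len(smiling), 'items tagged with smile')
```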
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 29 |  |  |  |  |  | 1girl, nijigasaki_academy_school_uniform, short_sleeves, solo, summer_uniform, sweater_vest, plaid_skirt, pleated_skirt, looking_at_viewer, collared_shirt, smile, blush, neck_ribbon, open_mouth, crescent_hair_ornament, light_brown_hair, white_background, yellow_ribbon, blue_shirt, simple_background, blue_skirt, white_shirt, breasts |
| 1 | 21 |  |  |  |  |  | 1girl, black_jacket, long_sleeves, nijigasaki_academy_school_uniform, solo, cardigan, looking_at_viewer, winter_uniform, collared_shirt, plaid_skirt, pleated_skirt, blazer, neck_ribbon, yellow_ribbon, crescent_hair_ornament, blush, smile, white_background, white_shirt, white_skirt, simple_background, open_mouth, hairclip, light_brown_hair |
| 2 | 11 |  |  |  |  |  | black_jacket, crescent_hair_ornament, 1girl, blazer, blush, long_sleeves, looking_at_viewer, nijigasaki_academy_school_uniform, solo, upper_body, winter_uniform, yellow_ribbon, collared_shirt, neck_ribbon, cardigan, hairclip, white_shirt, white_background, smile, star_(symbol), buttons, open_mouth |
| 3 | 21 |  |  |  |  |  | 1girl, solo, beret, looking_at_viewer, yellow_headwear, frills, blush, feathers, green_bow, wrist_cuffs, crescent_hair_ornament, puffy_short_sleeves, star_(symbol), yellow_dress, smile, hairclip, one_eye_closed, open_mouth, yellow_skirt, collarbone, white_background, nail_polish, pointing_at_self, yellow_nails |
| 4 | 5 |  |  |  |  |  | 1girl, green_bow, hair_bow, long_sleeves, looking_at_viewer, smile, solo, upper_body, yellow_dress, vertical_stripes, blush, bowtie, neck_ribbon, puffy_sleeves |
| 5 | 13 |  |  |  |  |  | 1girl, green_bow, hair_bow, looking_at_viewer, solo, yellow_dress, bowtie, puffy_long_sleeves, huge_bow, vertical_stripes, blush, smile, open_mouth, heart, one_eye_closed, pleated_dress, pointing_at_self |
| 6 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, polka_dot_headwear, puffy_short_sleeves, solo, top_hat, white_gloves, yellow_dress, belt, buttons, yellow_headwear, smile, blush, blue_ribbon, hat_ribbon, one_eye_closed, pointing, open_mouth |
| 7 | 5 |  |  |  |  |  | 1girl, beret, blush, magical_girl, solo, white_gloves, hairclip, holding_wand, looking_at_viewer, pink_dress, puffy_short_sleeves, smile, star_(symbol), white_headwear, x_hair_ornament, frills, pink_bow, pink_headwear, staff, wings, closed_mouth, headset, light_brown_hair, one_eye_closed, open_mouth, pink_eyes, pleated_skirt, shirt, thighhighs |
| 8 | 5 |  |  |  |  |  | 1girl, blush, christmas, fur_trim, looking_at_viewer, red_headwear, santa_costume, santa_hat, solo, red_capelet, red_dress, upper_body, :d, bare_shoulders, collarbone, crescent_hair_ornament, hand_up, open_mouth, santa_dress, simple_background, strapless, white_background |
| 9 | 6 |  |  |  |  |  | 1girl, hat, looking_at_viewer, smile, solo, white_headwear, open_mouth, white_gloves, light_brown_hair, pink_eyes, white_capelet, blush, earmuffs, fur_trim, white_dress, yellow_bowtie |
| 10 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, bare_shoulders, black_dress, blush, sleeveless_dress, smile, black_gloves, feathers, beret, black_headwear, collarbone, hair_bow, yellow_bow, breasts, light_brown_hair, one_eye_closed, pearl_bracelet, simple_background, white_background |
| 11 | 15 |  |  |  |  |  | 1girl, solo, jacket, necklace, hairclip, looking_at_viewer, blush, collarbone, braid, smile, white_dress, open_mouth, breasts, dated, flower |
| 12 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, floral_print, obi, open_mouth, wide_sleeves, blush, yellow_kimono, yukata, :d, blurry, candy_apple, holding_food, light_brown_hair, long_sleeves |
| 13 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, black_thighhighs, earrings, side_braid, skirt, :p, hair_bow, light_brown_hair, beret, collarbone, happy_birthday, heart, nail_polish, pink_eyes, pink_sweater, white_bow, yellow_background |
| 14 | 20 |  |  |  |  |  | 1girl, looking_at_viewer, solo, navel, yellow_bikini, blush, smile, collarbone, bikini_skirt, frills, one_eye_closed, water, halterneck, outdoors, light_brown_hair, white_bow, breasts, ocean, open_mouth, sky, x_hair_ornament |
| 15 | 6 |  |  |  |  |  | 1girl, fake_animal_ears, looking_at_viewer, playboy_bunny, solo, strapless_leotard, detached_collar, fake_tail, rabbit_ears, black_leotard, black_pantyhose, blush, bowtie, cleavage, rabbit_tail, simple_background, white_background, bare_shoulders, collarbone, covered_navel, hairclip, medium_breasts, wrist_cuffs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | nijigasaki_academy_school_uniform | short_sleeves | solo | summer_uniform | sweater_vest | plaid_skirt | pleated_skirt | looking_at_viewer | collared_shirt | smile | blush | neck_ribbon | open_mouth | crescent_hair_ornament | light_brown_hair | white_background | yellow_ribbon | blue_shirt | simple_background | blue_skirt | white_shirt | breasts | black_jacket | long_sleeves | cardigan | winter_uniform | blazer | white_skirt | hairclip | upper_body | star_(symbol) | buttons | beret | yellow_headwear | frills | feathers | green_bow | wrist_cuffs | puffy_short_sleeves | yellow_dress | one_eye_closed | yellow_skirt | collarbone | nail_polish | pointing_at_self | yellow_nails | hair_bow | vertical_stripes | bowtie | puffy_sleeves | puffy_long_sleeves | huge_bow | heart | pleated_dress | polka_dot_headwear | top_hat | white_gloves | belt | blue_ribbon | hat_ribbon | pointing | magical_girl | holding_wand | pink_dress | white_headwear | x_hair_ornament | pink_bow | pink_headwear | staff | wings | closed_mouth | headset | pink_eyes | shirt | thighhighs | christmas | fur_trim | red_headwear | santa_costume | santa_hat | red_capelet | red_dress | :d | bare_shoulders | hand_up | santa_dress | strapless | hat | white_capelet | earmuffs | white_dress | yellow_bowtie | black_dress | sleeveless_dress | black_gloves | black_headwear | yellow_bow | pearl_bracelet | jacket | necklace | braid | dated | flower | floral_print | obi | wide_sleeves | yellow_kimono | yukata | blurry | candy_apple | holding_food | black_thighhighs | earrings | side_braid | skirt | :p | happy_birthday | pink_sweater | white_bow | yellow_background | navel | yellow_bikini | bikini_skirt | water | halterneck | outdoors | ocean | sky | fake_animal_ears | playboy_bunny | strapless_leotard | detached_collar | fake_tail | rabbit_ears | black_leotard | black_pantyhose | cleavage | rabbit_tail | covered_navel | medium_breasts |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:------------------------------------|:----------------|:-------|:-----------------|:---------------|:--------------|:----------------|:--------------------|:-----------------|:--------|:--------|:--------------|:-------------|:-------------------------|:-------------------|:-------------------|:----------------|:-------------|:--------------------|:-------------|:--------------|:----------|:---------------|:---------------|:-----------|:-----------------|:---------|:--------------|:-----------|:-------------|:----------------|:----------|:--------|:------------------|:---------|:-----------|:------------|:--------------|:----------------------|:---------------|:-----------------|:---------------|:-------------|:--------------|:-------------------|:---------------|:-----------|:-------------------|:---------|:----------------|:---------------------|:-----------|:--------|:----------------|:---------------------|:----------|:---------------|:-------|:--------------|:-------------|:-----------|:---------------|:---------------|:-------------|:-----------------|:------------------|:-----------|:----------------|:--------|:--------|:---------------|:----------|:------------|:--------|:-------------|:------------|:-----------|:---------------|:----------------|:------------|:--------------|:------------|:-----|:-----------------|:----------|:--------------|:------------|:------|:----------------|:-----------|:--------------|:----------------|:--------------|:-------------------|:---------------|:-----------------|:-------------|:-----------------|:---------|:-----------|:--------|:--------|:---------|:---------------|:------|:---------------|:----------------|:---------|:---------|:--------------|:---------------|:-------------------|:-----------|:-------------|:--------|:-----|:-----------------|:---------------|:------------|:--------------------|:--------|:----------------|:---------------|:--------|:-------------|:-----------|:--------|:------|:-------------------|:----------------|:--------------------|:------------------|:------------|:--------------|:----------------|:------------------|:-----------|:--------------|:----------------|:-----------------|
| 0 | 29 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 21 |  |  |  |  |  | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | X | | X | | | | | X | X | X | X | X | X | X | | X | X | | | | X | | X | X | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 21 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | X | | X | | | | | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | X | | | | | X | | X | X | X | | | | | | | | | | | | X | | | | | | X | | | | | | | X | | | X | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 13 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | | X | | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | | | | X | X | | X | X | | X | | X | | | | | | | | | | | | | | X | | X | | X | | X | | | | X | | X | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | | | | X | | | X | | X | X | | X | | | X | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | X | | | | X | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 7 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | | | X | X | | | X | | | X | | | | | | | | | | | X | | | X | | | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 15 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 10 |  |  |  |  |  | X | | | X | | | | | X | | | X | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 13 | 5 |  |  |  |  |  | X | | | X | | | | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 14 | 20 |  |  |  |  |  | X | | | X | | | | | X | | X | X | | X | | X | | | | | | | X | | | | | | | | | | | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 15 | 6 |  |  |  |  |  | X | | | X | | | | | X | | | X | | | | | X | | | X | | | | | | | | | | X | | | | | | | | | X | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T16:02:33+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T02:35:44+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nakasu\_kasumi/中須かすみ/나카스카스미 (Love Live! School Idol Festival ALL STARS)
==================================================================================
This is the dataset of nakasu\_kasumi/中須かすみ/나카스카스미 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are 'short\_hair, bangs, brown\_hair, red\_eyes, bob\_cut, bow, hair\_ornament, ribbon, grey\_hair, asymmetrical\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6d7df3d17c758d80f18bf756ba0f1844d452e0c3
|
# Dataset Card for "jimmybaek-llama2-826"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jimmybaek/jimmybaek-llama2-826
|
[
"region:us"
] |
2023-09-25T16:03:57+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3304, "num_examples": 826}], "download_size": 715, "dataset_size": 3304}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T15:46:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "jimmybaek-llama2-826"
More Information needed
|
[
"# Dataset Card for \"jimmybaek-llama2-826\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"jimmybaek-llama2-826\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"jimmybaek-llama2-826\"\n\nMore Information needed"
] |
2326a5e68176762e5be6ec2987e01a63d9d6c6ff
|
# Dataset Card for "Zeroshot_Gold_Test-1K_nenhuma"
This is a test dataset for the Zeroshot models.
It contains 1000 examples in a prompt format, exclusively for testing with the class 'nenhuma' in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4' \\n\\nTweet: frase \\n\\nLabel:
```
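As an illustration (a sketch, not from the card), such a prompt string could be assembled per row; the column name `texto` follows this dataset's metadata, and the class names here are the placeholder set from the template above.
```python
# A minimal sketch: build the zero-shot prompt described above for one text.
# "texto" is the text column per this dataset's metadata; the class list is
# the placeholder set shown in the template.
def build_prompt(texto: str, classes: list[str]) -> str:
    quoted = ", ".join(f"'{c}'" for c in classes)
    return f"Classifique o tweet entre {quoted} \n\nTweet: {texto} \n\nLabel:"

print(build_prompt("frase", ["classe1", "classe2", "classe3", "classe4"]))
```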
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Test-Gold-1K_nenhuma")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Weni/Zeroshot_Test-Gold-1K_nenhuma
|
[
"region:us"
] |
2023-09-25T16:17:13+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "texto", "dtype": "string"}, {"name": "true_class", "dtype": "string"}, {"name": "BERT", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 188891, "num_examples": 1000}], "download_size": 54999, "dataset_size": 188891}}
|
2023-09-26T11:40:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Zeroshot_Gold_Test-1K_nenhuma"
This is a test dataset for the Zeroshot models.
It contains 1000 examples in a prompt format, exclusively for testing with the class 'nenhuma' in Brazilian Portuguese.
Prompt:
## How to load and use this dataset:
More Information needed
|
[
"# Dataset Card for \"Zeroshot_Gold_Test-1K_nenhuma\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:",
"## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Zeroshot_Gold_Test-1K_nenhuma\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:",
"## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
[
6,
62,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Zeroshot_Gold_Test-1K_nenhuma\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'nenhuma' in Brazilian Portuguese.\n\nPrompt:## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
23ed29236ffc08a7f8cb74a6883b35042420ae0e
|
# Dataset Card for "text2tile_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anonymoussubmissions/text2tile_large
|
[
"region:us"
] |
2023-09-25T16:29:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4317901388.0, "num_examples": 164662}], "download_size": 4276914179, "dataset_size": 4317901388.0}}
|
2023-09-25T16:45:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "text2tile_large"
More Information needed
|
[
"# Dataset Card for \"text2tile_large\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"text2tile_large\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"text2tile_large\"\n\nMore Information needed"
] |
6e7c3d256e8faded239a0e4de526de7fa2569f46
|
# Dataset Card for "amazon_polarity_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
atmallen/amazon_polarity_embeddings
|
[
"region:us"
] |
2023-09-25T16:45:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3973281260, "dataset_size": 7168305144}}
|
2023-09-25T19:19:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "amazon_polarity_embeddings"
More Information needed
|
[
"# Dataset Card for \"amazon_polarity_embeddings\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon_polarity_embeddings\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings\"\n\nMore Information needed"
] |
921970ed02a2893d149c2f954cd095795fcb4591
|
# Bangumi Image Base of Yuru Camp
This is the image base of bangumi Yuru Camp. We detected 25 characters and 3285 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise in practice.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
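A minimal download sketch for a single character pack is below (the repo id and per-character zip paths follow this card's table; the layout inside each archive is an assumption):
```python
# A minimal sketch: fetch and unpack one character's image pack.
import os
import zipfile
from huggingface_hub import hf_hub_download

zip_file = hf_hub_download(
    repo_id='BangumiBase/yurucamp',
    repo_type='dataset',
    filename='0/dataset.zip',  # character #0 per the preview table
)
out_dir = 'yurucamp_char0'
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)
print(len(os.listdir(out_dir)), 'files extracted')
```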
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 772 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 9 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 10 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 158 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 242 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 15 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 49 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 41 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 60 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 218 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 60 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 20 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 12 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 478 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 52 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 17 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 22 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 15 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 17 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 26 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 770 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 33 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 21 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 22 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 146 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
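For manual cleaning, a minimal download-and-extract sketch, assuming the per-cluster `N/dataset.zip` paths linked above and the standard `huggingface_hub` API (the output directory name is arbitrary):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# fetch a single character cluster (e.g. cluster 13, 478 images)
zip_file = hf_hub_download(
    repo_id='BangumiBase/yurucamp',
    repo_type='dataset',
    filename='13/dataset.zip',
)

# extract for manual review / preprocessing
out_dir = 'yurucamp_13'
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)
```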
|
BangumiBase/yurucamp
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-25T17:03:10+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T10:58:27+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Yuru Camp
===============================
This is the image base of bangumi Yuru Camp, we detected 25 characters, 3285 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (roughly 1% of the images).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
0e43ce3d7387de9e6900f11eae38a14df38aa7f5
|
# Dataset of mifune_shioriko/三船栞子/미후네시오리코 (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
This is the dataset of mifune_shioriko/三船栞子/미후네시오리코 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags.
The core tags of this character are `bangs, short_hair, red_eyes, black_hair, ribbon, fang, dark_green_hair, hair_ribbon, orange_eyes, swept_bangs, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 887.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 400.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1318 | 940.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 735.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1318 | 1.51 GiB | [Download](https://huggingface.co/datasets/CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_jacket, long_sleeves, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, solo, white_skirt, winter_uniform, yellow_ribbon, armband, blazer, neck_ribbon, looking_at_viewer, open_mouth, white_shirt, brown_footwear, buttons, full_body, grey_socks, loafers |
| 1 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, neck_ribbon, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, short_sleeves, solo, summer_uniform, white_shirt, blush, collared_shirt, open_mouth, yellow_ribbon, armband, blue_vest, dress_shirt, black_vest, simple_background, white_background, breasts |
| 2 | 5 |  |  |  |  |  | 1girl, hairclip, hat, long_sleeves, solo, blush, looking_at_viewer, smile, upper_body, open_mouth, plaid, white_background, white_shirt, green_headwear, side_braid, simple_background |
| 3 | 6 |  |  |  |  |  | 1girl, black_gloves, hat, solo, blush, fur_trim, looking_at_viewer, open_mouth, white_headwear, bow, dress, simple_background, smile, upper_body |
| 4 | 47 |  |  |  |  |  | 1girl, solo, white_gloves, green_dress, hat, looking_at_viewer, white_headwear, pearl_necklace, puffy_short_sleeves, smile, open_mouth, collarbone, earrings, blush |
| 5 | 6 |  |  |  |  |  | 1girl, blush, breasts, looking_at_viewer, short_sleeves, solo, white_dress, smile |
| 6 | 10 |  |  |  |  |  | 1girl, solo, detached_sleeves, dress, earrings, looking_at_viewer, chinese_clothes, jiangshi, ofuda, open_mouth, qing_guanmao, upper_body, blush, sleeves_past_fingers |
| 7 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, blush, medium_breasts, solo, collarbone, open_mouth, green_bikini, cleavage, navel, white_background, simple_background |
| 8 | 23 |  |  |  |  |  | 1girl, solo, looking_at_viewer, maid_apron, maid_headdress, blush, enmaided, short_sleeves, cat_ears, cat_tail, open_mouth, puffy_sleeves, white_apron, cat_girl |
| 9 | 18 |  |  |  |  |  | 1girl, aiguillette, epaulettes, floral_print, furisode, gold_trim, hair_flower, mini_hat, obi, peony_(flower), solo, white_dress, white_headwear, white_kimono, wide_sleeves, blue_flower, crystal, puffy_short_sleeves, detached_sleeves, frilled_dress |
| 10 | 5 |  |  |  |  |  | 1girl, aiguillette, blue_flower, crystal, floral_print, furisode, gold_trim, green_background, hair_flower, mini_hat, obi, peony_(flower), puffy_short_sleeves, solo, white_dress, white_headwear, white_kimono, wide_sleeves, detached_sleeves, epaulettes, zettai_ryouiki, black_thighhighs, frilled_dress |
| 11 | 6 |  |  |  |  |  | 1girl, obi, solo, floral_print, furisode, looking_at_viewer, wide_sleeves, blush, green_kimono, smile, upper_body, hair_flower, holding, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_jacket | long_sleeves | nijigasaki_academy_school_uniform | plaid_skirt | pleated_skirt | solo | white_skirt | winter_uniform | yellow_ribbon | armband | blazer | neck_ribbon | looking_at_viewer | open_mouth | white_shirt | brown_footwear | buttons | full_body | grey_socks | loafers | short_sleeves | summer_uniform | blush | collared_shirt | blue_vest | dress_shirt | black_vest | simple_background | white_background | breasts | hairclip | hat | smile | upper_body | plaid | green_headwear | side_braid | black_gloves | fur_trim | white_headwear | bow | dress | white_gloves | green_dress | pearl_necklace | puffy_short_sleeves | collarbone | earrings | white_dress | detached_sleeves | chinese_clothes | jiangshi | ofuda | qing_guanmao | sleeves_past_fingers | medium_breasts | green_bikini | cleavage | navel | maid_apron | maid_headdress | enmaided | cat_ears | cat_tail | puffy_sleeves | white_apron | cat_girl | aiguillette | epaulettes | floral_print | furisode | gold_trim | hair_flower | mini_hat | obi | peony_(flower) | white_kimono | wide_sleeves | blue_flower | crystal | frilled_dress | green_background | zettai_ryouiki | black_thighhighs | green_kimono | holding |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:---------------|:------------------------------------|:--------------|:----------------|:-------|:--------------|:-----------------|:----------------|:----------|:---------|:--------------|:--------------------|:-------------|:--------------|:-----------------|:----------|:------------|:-------------|:----------|:----------------|:-----------------|:--------|:-----------------|:------------|:--------------|:-------------|:--------------------|:-------------------|:----------|:-----------|:------|:--------|:-------------|:--------|:-----------------|:-------------|:---------------|:-----------|:-----------------|:------|:--------|:---------------|:--------------|:-----------------|:----------------------|:-------------|:-----------|:--------------|:-------------------|:------------------|:-----------|:--------|:---------------|:-----------------------|:-----------------|:---------------|:-----------|:--------|:-------------|:-----------------|:-----------|:-----------|:-----------|:----------------|:--------------|:-----------|:--------------|:-------------|:---------------|:-----------|:------------|:--------------|:-----------|:------|:-----------------|:---------------|:---------------|:--------------|:----------|:----------------|:-------------------|:-----------------|:-------------------|:---------------|:----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | | X | X | X | X | | | X | X | | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | | | | X | | | | | | | X | X | X | | | | | | | | X | | | | | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | | | X | | | | | X | | | | X | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 47 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | | | X | | | | | | | | | X | X | | | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | | | | | X | | X | | | | | | | X | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | X | | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 11 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 23 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 9 | 18 |  |  |  |  |  | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | |
| 11 | 6 |  |  |  |  |  | X | | | | | | X | | | | | | | X | X | | | | | | | | | X | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | X | | X | | | X | | | | | | | X | X |
|
CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T17:09:44+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-17T02:38:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mifune\_shioriko/三船栞子/미후네시오리코 (Love Live! Nijigasaki Gakuen School Idol Doukoukai)
=============================================================================================
This is the dataset of mifune\_shioriko/三船栞子/미후네시오리코 (Love Live! Nijigasaki Gakuen School Idol Doukoukai), containing 500 images and their tags.
The core tags of this character are 'bangs, short\_hair, red\_eyes, black\_hair, ribbon, fang, dark\_green\_hair, hair\_ribbon, orange\_eyes, swept\_bangs, hair\_ornament', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
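(This mirrors the snippet in the markdown version of this card above.)

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```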
List of Clusters
----------------
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
e284bb41a2e69236c06dbd63a5e46b20dfc4ec2e
|
# Dataset Card for "amazon_polarity_embeddings_random0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
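As with the sibling embeddings dataset above, a hedged sketch for pulling the embeddings into NumPy (assuming the same schema declared in this repo's metadata):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("atmallen/amazon_polarity_embeddings_random0", split="test")

# stack per-row embeddings into a matrix for quick probing experiments
X = np.array(ds["embedding"], dtype=np.float32)
y = np.array(ds["label"])
print(X.shape, y.shape)
```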
|
atmallen/amazon_polarity_embeddings_random0
|
[
"region:us"
] |
2023-09-25T17:32:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "embedding", "sequence": "float32"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7148364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 19940712, "num_examples": 10000}], "download_size": 3903677724, "dataset_size": 7168305144}}
|
2023-09-26T00:31:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "amazon_polarity_embeddings_random0"
More Information needed
|
[
"# Dataset Card for \"amazon_polarity_embeddings_random0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon_polarity_embeddings_random0\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"amazon_polarity_embeddings_random0\"\n\nMore Information needed"
] |
f09f4c0544df9aa763c6392872bf18d8b12c94fa
|
# Dataset Card for "ParcelSummaryDS"
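No further documentation is provided yet; a minimal loading sketch (assuming the standard `datasets` API and the parcel-field schema declared in this repo's metadata):

```python
from datasets import load_dataset

# the metadata declares a single train split with parcel summary fields
ds = load_dataset("mammoth-blaze/ParcelSummaryDS", split="train")

print(ds.column_names)  # contactNames, parcelId, parcelAddress, description, ...
print(ds[0]["parcelAddress"], ds[0]["propertyUseCode"])
```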
|
mammoth-blaze/ParcelSummaryDS
|
[
"task_categories:text-classification",
"size_categories:n<1K",
"doi:10.57967/hf/1149",
"region:us"
] |
2023-09-25T17:34:04+00:00
|
{"size_categories": ["n<1K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "contactNames", "dtype": "string"}, {"name": "parcelId", "dtype": "string"}, {"name": "parcelAddress", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "propertyUseCode", "dtype": "string"}, {"name": "acreage", "dtype": "string"}, {"name": "homestead", "dtype": "string"}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 700, "num_examples": 1}], "download_size": 639, "dataset_size": 1400}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T19:38:12+00:00
|
[] |
[] |
TAGS
#task_categories-text-classification #size_categories-n<1K #doi-10.57967/hf/1149 #region-us
|
# Dataset Card for "ParcelSummaryDS"
|
[
"# Dataset Card for \"ParcelSummaryDS\""
] |
[
"TAGS\n#task_categories-text-classification #size_categories-n<1K #doi-10.57967/hf/1149 #region-us \n",
"# Dataset Card for \"ParcelSummaryDS\""
] |
[
39,
13
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #doi-10.57967/hf/1149 #region-us \n# Dataset Card for \"ParcelSummaryDS\""
] |
fe145192272390b40ad0b301413038250e0dacf0
|
# Dataset Card for "Zeroshot_Gold_Test-1K_bias"
This is a test dataset for the Zeroshot models.
It contains 1,000 examples in a prompt format, used exclusively for testing the 'bias' class in Brazilian Portuguese.
Prompt:
```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'bias' \\n\\nTweet: frase \\n\\nLabel:"
```
## How to load and use this dataset:
```
from datasets import load_dataset
dataset = load_dataset("Weni/Zeroshot_Test-Gold-1K_bias")
dataset
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Weni/Zeroshot_Test-Gold-1K_bias
|
[
"region:us"
] |
2023-09-25T17:39:01+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "texto", "dtype": "string"}, {"name": "true_class", "dtype": "string"}, {"name": "BERT", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 183928, "num_examples": 1000}], "download_size": 54527, "dataset_size": 183928}}
|
2023-09-25T17:43:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Zeroshot_Gold_Test-1K_bias"
This is a test dataset for the Zeroshot models.
It contains 1,000 examples in a prompt format, used exclusively for testing the 'bias' class in Brazilian Portuguese.
Prompt:
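The prompt template, reproduced from the markdown version of this card above:

```
"Classifique o tweet entre 'classe1', 'classe2', 'classe3', 'classe4', 'bias' \\n\\nTweet: frase \\n\\nLabel:"
```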
## How to load and use this dataset:
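A minimal loading sketch (the repo id below follows this dataset's listing):

```python
from datasets import load_dataset

dataset = load_dataset("Weni/Zeroshot_Test-Gold-1K_bias")
print(dataset)
```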
More Information needed
|
[
"# Dataset Card for \"Zeroshot_Gold_Test-1K_bias\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'bias' in Brazilian Portuguese.\n\nPrompt:",
"## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Zeroshot_Gold_Test-1K_bias\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'bias' in Brazilian Portuguese.\n\nPrompt:",
"## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
[
6,
62,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Zeroshot_Gold_Test-1K_bias\"\n\nThis dataset is a test dataset for the Zeroshot models. \nIt has 1000 data in a prompt format exclusively for testing with class 'bias' in Brazilian Portuguese.\n\nPrompt:## How to load and use this dataset:\n\n\n\nMore Information needed"
] |
bac4793cb13f9a5ae770a1d0341cc00d3019ef9a
|
# Dataset of tsurumaki_kokoro (BanG Dream!)
This is the dataset of tsurumaki_kokoro (BanG Dream!), containing 500 images and their tags.
The core tags of this character are `blonde_hair, bangs, long_hair, yellow_eyes, sidelocks, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 748.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsurumaki_kokoro_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 413.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsurumaki_kokoro_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1275 | 906.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsurumaki_kokoro_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 652.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsurumaki_kokoro_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1275 | 1.30 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tsurumaki_kokoro_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/tsurumaki_kokoro_bangdream',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | looking_at_viewer, short_sleeves, wrist_cuffs, 1girl, earrings, hair_bow, midriff, smile, solo, blush, thighhighs, crop_top, navel, necklace, blue_skirt, multicolored_skirt, open_mouth, belt, boots, choker, frilled_skirt, layered_skirt, multicolored_shirt, polka_dot, see-through_sleeves, shoes, socks, white_background, white_footwear |
| 1 | 14 |  |  |  |  |  | 1girl, earrings, solo, black_headwear, hat_bow, looking_at_viewer, pom_pom_(clothes), red_bowtie, top_hat, cleavage, medium_breasts, polka_dot_bow, open_mouth, blush, frills, smiley_face, black_shorts, :d, confetti, holding, short_shorts, teeth, white_background, back_bow |
| 2 | 10 |  |  |  |  |  | frills, looking_at_viewer, 1girl, blush, solo, confetti, earrings, white_gloves, hair_bow, open_mouth, star_(symbol), :d, red_bowtie, upper_body, ribbon, string_of_flags, twintails, balloon, blue_bow, corset, short_sleeves, striped_bowtie, top_hat |
| 3 | 22 |  |  |  |  |  | 1girl, solo, looking_at_viewer, blush, bowtie, open_mouth, earrings, wrist_cuffs, frills, choker, fur-trimmed_capelet, striped_bow, mini_crown, navel, ribbon, blue_bow, red_capelet, :d, sparkle, midriff, white_background, one_eye_closed, short_sleeves, upper_body |
| 4 | 6 |  |  |  |  |  | 1girl, epaulettes, looking_at_viewer, open_mouth, shako_cap, sleeveless, solo, upper_body, :d, band_uniform, blush, sash, wrist_cuffs, upper_teeth_only |
| 5 | 13 |  |  |  |  |  | 1girl, :d, looking_at_viewer, open_mouth, solo, white_skirt, band_uniform, blush, epaulettes, wrist_cuffs, shako_cap, sleeveless_shirt, teeth, thighhighs, medium_breasts, sash, frilled_skirt, red_footwear, armpits, standing, thigh_boots, white_background, cowboy_shot, simple_background, star_(symbol) |
| 6 | 32 |  |  |  |  |  | 1girl, solo, looking_at_viewer, short_sleeves, blush, red_shirt, striped_shirt, smile, collarbone, open_mouth, simple_background, white_background, overall_shorts, upper_body, medium_breasts, teeth |
| 7 | 6 |  |  |  |  |  | 1girl, :d, blush, long_sleeves, looking_at_viewer, open_mouth, sheep_horns, sleep_mask, solo, star_(symbol), upper_teeth_only, bow, hair_flower, mask_on_head, headset, sparkle, apron, arms_up, center_frills, frilled_sleeves, pink_rose, ribbon, striped, upper_body |
| 8 | 24 |  |  |  |  |  | 1girl, solo, white_dress, looking_at_viewer, sleeveless_dress, sun_hat, sundress, blush, day, outdoors, straw_hat, sunflower, open_mouth, collarbone, :d, frills, hat_flower, blue_sky, cloud, medium_breasts, cleavage, upper_body, upper_teeth_only |
| 9 | 10 |  |  |  |  |  | 1girl, brown_dress, hanasakigawa_school_uniform, long_sleeves, looking_at_viewer, neck_ribbon, red_ribbon, sailor_dress, solo, double-breasted, white_sailor_collar, blush, open_mouth, white_background, simple_background, pleated_dress, one_eye_closed, teeth, :d, ;d, bow |
| 10 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, pleated_skirt, serafuku, short_sleeves, solo, white_skirt, blush, hanasakigawa_school_uniform, white_sailor_collar, white_background, blue_neckerchief, blue_shirt, open_mouth, simple_background, miniskirt, collarbone, :d |
| 11 | 5 |  |  |  |  |  | 1girl, earrings, smile, solo, star_(symbol), looking_at_viewer, beret, blush, bowtie, frilled_shirt_collar, hat_bow, short_sleeves, upper_body, alternate_hairstyle, argyle, center_frills, christmas, cleavage, constellation_print, dress, frilled_sleeves, hairclip, headset, long_sleeves, red_headwear, ribbon, striped_bow, twintails |
| 12 | 5 |  |  |  |  |  | 1girl, blush, hair_flower, hair_ribbon, solo, sunflower, twintails, :d, bracelet, looking_at_viewer, open_mouth, alternate_hairstyle, blue_dress, blue_ribbon, frills, holding, necklace, beachball, bow, day, hairband, polka_dot, sky, sparkle, upper_body, white_background |
| 13 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, skirt, solo, twintails, white_gloves, handcuffs, midriff, navel, police_hat, star_(symbol), blue_headwear, blush, short_sleeves, crop_top, hair_ribbon, open_mouth, peaked_cap, :d, cropped_jacket, detached_collar, frills, orange_necktie, short_necktie, belt, cleavage, knee_boots, open_clothes, shirt, striped, white_background, white_jacket, white_thighhighs |
| 14 | 14 |  |  |  |  |  | 1girl, kimono, obi, blush, looking_at_viewer, solo, wide_sleeves, floral_print, hair_flower, long_sleeves, ponytail, open_mouth, hair_bow, :d, holding, detached_sleeves, frills, red_flower, red_bow, sky, teeth, upper_body |
| 15 | 16 |  |  |  |  |  | blush, looking_at_viewer, 1girl, solo, nipples, open_mouth, completely_nude, pussy, collarbone, navel, simple_background, barefoot, large_breasts, medium_breasts, stomach, white_background, :d, fingernails, full_body, anus, armpits, blunt_bangs, cleft_of_venus, heart, toes, uncensored |
| 16 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, medium_breasts, open_mouth, solo_focus, sweat, completely_nude, saliva, sex_from_behind, heart, heavy_breathing, nipples, standing_sex, tears, upper_teeth_only, vaginal, :d, arm_support, ass, bent_over, cum, from_side, indoors, large_breasts, looking_at_viewer, looking_to_the_side, mosaic_censoring, motion_lines, penis, torso_grab, trembling |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | short_sleeves | wrist_cuffs | 1girl | earrings | hair_bow | midriff | smile | solo | blush | thighhighs | crop_top | navel | necklace | blue_skirt | multicolored_skirt | open_mouth | belt | boots | choker | frilled_skirt | layered_skirt | multicolored_shirt | polka_dot | see-through_sleeves | shoes | socks | white_background | white_footwear | black_headwear | hat_bow | pom_pom_(clothes) | red_bowtie | top_hat | cleavage | medium_breasts | polka_dot_bow | frills | smiley_face | black_shorts | :d | confetti | holding | short_shorts | teeth | back_bow | white_gloves | star_(symbol) | upper_body | ribbon | string_of_flags | twintails | balloon | blue_bow | corset | striped_bowtie | bowtie | fur-trimmed_capelet | striped_bow | mini_crown | red_capelet | sparkle | one_eye_closed | epaulettes | shako_cap | sleeveless | band_uniform | sash | upper_teeth_only | white_skirt | sleeveless_shirt | red_footwear | armpits | standing | thigh_boots | cowboy_shot | simple_background | red_shirt | striped_shirt | collarbone | overall_shorts | long_sleeves | sheep_horns | sleep_mask | bow | hair_flower | mask_on_head | headset | apron | arms_up | center_frills | frilled_sleeves | pink_rose | striped | white_dress | sleeveless_dress | sun_hat | sundress | day | outdoors | straw_hat | sunflower | hat_flower | blue_sky | cloud | brown_dress | hanasakigawa_school_uniform | neck_ribbon | red_ribbon | sailor_dress | double-breasted | white_sailor_collar | pleated_dress | ;d | pleated_skirt | serafuku | blue_neckerchief | blue_shirt | miniskirt | beret | frilled_shirt_collar | alternate_hairstyle | argyle | christmas | constellation_print | dress | hairclip | red_headwear | hair_ribbon | bracelet | blue_dress | blue_ribbon | beachball | hairband | sky | skirt | handcuffs | police_hat | blue_headwear | peaked_cap | cropped_jacket | detached_collar | orange_necktie | short_necktie | knee_boots | open_clothes | shirt | white_jacket | white_thighhighs | kimono | obi | wide_sleeves | floral_print | ponytail | detached_sleeves | red_flower | red_bow | nipples | completely_nude | pussy | barefoot | large_breasts | stomach | fingernails | full_body | anus | blunt_bangs | cleft_of_venus | heart | toes | uncensored | 1boy | hetero | solo_focus | sweat | saliva | sex_from_behind | heavy_breathing | standing_sex | tears | vaginal | arm_support | ass | bent_over | cum | from_side | indoors | looking_to_the_side | mosaic_censoring | motion_lines | penis | torso_grab | trembling |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------------------|:----------------|:--------------|:--------|:-----------|:-----------|:----------|:--------|:-------|:--------|:-------------|:-----------|:--------|:-----------|:-------------|:---------------------|:-------------|:-------|:--------|:---------|:----------------|:----------------|:---------------------|:------------|:----------------------|:--------|:--------|:-------------------|:-----------------|:-----------------|:----------|:--------------------|:-------------|:----------|:-----------|:-----------------|:----------------|:---------|:--------------|:---------------|:-----|:-----------|:----------|:---------------|:--------|:-----------|:---------------|:----------------|:-------------|:---------|:------------------|:------------|:----------|:-----------|:---------|:-----------------|:---------|:----------------------|:--------------|:-------------|:--------------|:----------|:-----------------|:-------------|:------------|:-------------|:---------------|:-------|:-------------------|:--------------|:-------------------|:---------------|:----------|:-----------|:--------------|:--------------|:--------------------|:------------|:----------------|:-------------|:-----------------|:---------------|:--------------|:-------------|:------|:--------------|:---------------|:----------|:--------|:----------|:----------------|:------------------|:------------|:----------|:--------------|:-------------------|:----------|:-----------|:------|:-----------|:------------|:------------|:-------------|:-----------|:--------|:--------------|:------------------------------|:--------------|:-------------|:---------------|:------------------|:----------------------|:----------------|:-----|:----------------|:-----------|:-------------------|:-------------|:------------|:--------|:-----------------------|:----------------------|:---------|:------------|:----------------------|:--------|:-----------|:---------------|:--------------|:-----------|:-------------|:--------------|:------------|:-----------|:------|:--------|:------------|:-------------|:----------------|:-------------|:-----------------|:------------------|:-----------------|:----------------|:-------------|:---------------|:--------|:---------------|:-------------------|:---------|:------|:---------------|:---------------|:-----------|:-------------------|:-------------|:----------|:----------|:------------------|:--------|:-----------|:----------------|:----------|:--------------|:------------|:-------|:--------------|:-----------------|:--------|:-------|:-------------|:-------|:---------|:-------------|:--------|:---------|:------------------|:------------------|:---------------|:--------|:----------|:--------------|:------|:------------|:------|:------------|:----------|:----------------------|:-------------------|:---------------|:--------|:-------------|:------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | | | X | X | | | | X | X | | | | | | | X | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | | X | X | X | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | X | X | | | | X | | | X | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 22 |  |  |  |  |  | X | X | X | X | X | | X | | X | X | | | X | | | | X | | | X | | | | | | | | X | | | | | | | | | | X | | | X | | | | | | | | X | X | | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | X | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 13 |  |  |  |  |  | X | | X | X | | | | | X | X | X | | | | | | X | | | | X | | | | | | | X | | | | | | | | X | | | | | X | | | | X | | | X | | | | | | | | | | | | | | | | X | X | | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 32 |  |  |  |  |  | X | X | | X | | | | X | X | X | | | | | | | X | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | X | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 24 |  |  |  |  |  | X | | | X | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | | | X | | | | | X | X | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 10 |  |  |  |  |  | X | X | | X | | | | | X | X | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | X | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | X | X | X | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 5 |  |  |  |  |  | X | | | X | | | | | X | X | | | | X | | | X | | | | | | | X | | | | X | | | | | | | | | | X | | | X | | X | | | | | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 13 | 8 |  |  |  |  |  | X | X | | X | | | X | | X | X | | X | X | | | | X | X | | | | | | | | | | X | | | | | | | X | | | X | | | X | | | | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 14 | 14 |  |  |  |  |  | X | | | X | | X | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | X | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 15 | 16 |  |  |  |  |  | X | | | X | | | | | X | X | | | X | | | | X | | | | | | | | | | | X | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 16 | 5 |  |  |  |  |  | X | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | X | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/tsurumaki_kokoro_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-25T17:47:28+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T20:19:26+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tsurumaki\_kokoro (BanG Dream!)
==========================================
This is the dataset of tsurumaki\_kokoro (BanG Dream!), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, bangs, long\_hair, yellow\_eyes, sidelocks, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
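(This mirrors the snippet in the markdown version of this card above.)

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/tsurumaki_kokoro_bangdream',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```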
List of Clusters
----------------
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |