f263152711945e8de7954e26ca5f676bce03bf9f
![Change can be sunshine if you let it in..png](https://cdn-uploads.huggingface.co/production/uploads/64c7bfe8ac1016256b69ea02/r9ZWYaWBovYF7HafTEMVb.png)

# 📔 **DATASET**

| **Dataset** | Class | Number of Questions |
| ------- | ------- | ------- |
| **FLAN_CoT(zs)** | Reasoning, MATH, ScienceQA, Commonsense | 8000 |
| **Prm800k** | Reasoning, MATH | 6713 |
| **ScienceQA** | ScienceQA | 5177 |
| **SciBench** | ScienceQA | 695 |
| **ReClor** | Reasoning | 1624 |
| **TheoremQA** | Commonsense, MATH, ScienceQA | 800 |
| **OpenBookQA** | Text_Understanding, Reasoning, Commonsense, ScienceQA | 5957 |
| **ARB** | Reasoning, MATH, ScienceQA, Commonsense, Text_Understanding | 605 |
| **Openassistant-guanaco** | Commonsense, Text_Understanding, Reasoning | 802 |
| **SAT** | Text_Understanding, Reasoning, MATH | 426 |
| **GRE, GMAT** | Reasoning, MATH | 254 |
| **AMC, AIME** | Reasoning, MATH | 1000 |
| **LSAT** | Reasoning, LAW | 1009 |
| **Gaokao-biology** | Comprehensive | 210 |
| **Gaokao-chemistry** | Comprehensive | 207 |
| **Gaokao-chinese** | Comprehensive | 246 |
| **Gaokao-english** | Comprehensive | 306 |
| **Gaokao-geography** | Comprehensive | 199 |
| **Gaokao-mathcloze** | Comprehensive | 118 |
| **Gaokao-mathqa** | Comprehensive | 351 |
| **Gaokao-physics** | Comprehensive | 200 |
| **LogiQA** | Reasoning | 651 |
| **LeetCode** | Reasoning, Code | 2359 |

# 📌 **Method**

## *Improving the dataset*

Based on the content of the "Textbooks Are All You Need" paper, we want to try fine-tuning using advanced questions.

## *Dataset Format Definition*

We use the "instruction, input, output" format, which leans towards guided datasets. In this format, each sample includes an instruction, an input, and an expected output; the instruction specifies how to process the input to generate the output. Datasets in this format are often used to train models to perform specific tasks, as the samples explicitly indicate the operations the model should perform.

```
{
  "input": "",
  "output": "",
  "instruction": ""
}
```

- ### [FLAN_V2 COT(ZS)](https://huggingface.co/datasets/conceptofmind/cot_submix_original/tree/main)

  We extract only the 'zs_opt' samples from COT and categorize each task.

- ### SAT, GRE, GMAT, AMC, AIME, LSAT

  For datasets such as GRE, GMAT, and SAT, we set the input to "Please read the question and options carefully, then select the most appropriate answer and provide the corresponding explanation." Meanwhile, for the math datasets, the input is set to "Please provide the answer along with a corresponding explanation based on the given question." Moreover, the questions are arranged in ascending order of difficulty. This is done because, according to the Orca paper, training started with GPT-3.5 responses and later transitioned to GPT-4; this progressive learning strategy keeps the student model from being exposed to knowledge beyond its scope, which would deliver suboptimal results. Since this approach was found to be effective, datasets with multiple difficulty levels, such as AMC and AIME, are arranged to embody this gradual, progressive learning technique. Furthermore, the question and options are combined to form the instruction, and the label and solution are merged to become the output.
  Lastly, since the LSAT dataset doesn't involve step-by-step solutions, the passage is transformed into the instruction, the combination of the question and options serves as the input, and the label is the output.

- ### Gaokao

  Most of the inputs are configured by us as: "Please read and understand the requirements and content of the question carefully, and then choose the option that best fits the description of the question or best answers the question from the options provided." Only gaokao-mathcloze uses a different input: "Please read and comprehend the requirements and content of the question carefully. Gradually ponder upon it and present the most appropriate answer based on your judgment."

- ### LeetCode

  Input configuration: "Analyze the problem description and constraints, then develop a step-by-step Python function to generate the expected output based on the given inputs. Include brief explanations at each step to illustrate your solution process."

- ### LogiQA

  Only a general format conversion is performed.

- ### [OTHER](https://github.com/arielnlee/Platypus/tree/main/data_pipeline)

  The Prm800k, ScienceQA, SciBench, ReClor, TheoremQA, OpenBookQA, ARB, and OpenAssistant-Guanaco datasets adopt the same format as Platypus.

## *Sampling Algorithms*

The flan_v2 cot dataset includes tasks such as:

- cot_esnli
- cot_strategyqa
- cot_qasc
- stream_qed
- cot_gsm8k
- cot_ecqa
- cot_creak
- stream_aqua

To ensure this dataset contains diverse, high-quality data, we first select the zs_opt questions. Then we keep only questions whose output length is at least the average length; this step aims to help the model learn richer reasoning steps. After that, we perform stratified sampling, as shown in the script below. Initially, we attempted stratified sampling before the length-based filtering, but we found that this approach resulted in varying sample sizes, making it challenging to reproduce. Thus, we decided to first filter by length and then perform stratified sampling.
```py
import json
import random

with open("cot_ORIGINAL.json", "r") as f:
    raw_data = json.load(f)

# --- part 1: keep only the "zs_opt" samples ---
zsopt_data = [item for item in raw_data if item["template_type"] == "zs_opt"]

# --- part 2: keep samples whose output length is at least the average ---
output_lengths = [len(item["targets"]) for item in zsopt_data]
average_length = sum(output_lengths) / len(output_lengths)

filtered_data = [item for item in zsopt_data if len(item["targets"]) >= average_length]

# count the number of samples for each class (task)
class_counts = {}
for item in filtered_data:
    task_name = item["task_name"]
    class_counts[task_name] = class_counts.get(task_name, 0) + 1

# --- part 3: stratified sampling ---
total_samples = 8000  # we plan to select a total of 8000 samples

# per-task sample size, proportional to the task's share of the filtered data
sample_sizes = {
    task_name: round(count / len(filtered_data) * total_samples)
    for task_name, count in class_counts.items()
}

# perform stratified sampling for each class
stratified_samples = {}
for task_name, sample_size in sample_sizes.items():
    class_samples = [item for item in filtered_data if item["task_name"] == task_name]
    stratified_samples[task_name] = random.sample(class_samples, sample_size)

# convert to the specified instruction/input/output format
final_samples = []
for task_name, samples in stratified_samples.items():
    for sample in samples:
        final_samples.append(
            {
                "input": "",                      # input is left empty
                "output": sample["targets"],      # reasoning and answer
                "instruction": sample["inputs"],  # question
            }
        )

with open("cot_change.json", "w") as f:
    json.dump(final_samples, f, indent=2)
```

MATH arranged according to LEVEL:

```py
import json

with open("math-json.json", "r", encoding="utf-8") as f:
    data_list = json.load(f)

# sort the questions in ascending order of difficulty level
sorted_data = sorted(data_list, key=lambda x: x["other"]["level"])

output_data = [
    {
        "input": "Please provide the answer along with a corresponding explanation based on the given question.",
        "output": f"{item['answer']},solution:{item['other']['solution']}",
        "instruction": item["question"],
    }
    for item in sorted_data
]

with open("math_convert.json", "w", encoding="utf-8") as output_file:
    json.dump(output_data, output_file, ensure_ascii=False, indent=4)
```
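The LSAT conversion described above (passage becomes the instruction, question plus options becomes the input, label becomes the output) can be sketched along the same lines. This is a minimal sketch only: the file name `lsat.json` and the field names `passage`, `question`, `options`, and `label` are assumed for illustration and should be adjusted to the actual source schema.

```py
import json

with open("lsat.json", "r", encoding="utf-8") as f:  # hypothetical source file
    lsat_data = json.load(f)

converted = [
    {
        "instruction": item["passage"],  # the passage becomes the instruction
        "input": item["question"] + "\n" + "\n".join(item["options"]),  # question + options
        "output": item["label"],  # the label is the output (no step-by-step solution)
    }
    for item in lsat_data
]

with open("lsat_convert.json", "w", encoding="utf-8") as f:
    json.dump(converted, f, ensure_ascii=False, indent=4)
```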
huangyt/FINETUNE4
[ "license:openrail", "region:us" ]
2023-09-15T15:22:29+00:00
{"license": "openrail"}
2023-09-16T05:02:11+00:00
3d20baf1fe2349d78616b75b0055f34d70dcfa10
# Dataset Card for "spacecraft_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Falah/spacecraft_prompts
[ "region:us" ]
2023-09-15T15:31:16+00:00
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5116462, "num_examples": 10000}], "download_size": 622894, "dataset_size": 5116462}}
2023-09-15T15:31:18+00:00
a6bec5e6a78774bfe1ca752059618cebb31f36bc
# Dataset of yamato_aki/大和亜季 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of yamato_aki/大和亜季 (THE iDOLM@STER: Cinderella Girls), containing 137 images and their tags.

The core tags of this character are `green_eyes, long_hair, breasts, ponytail, large_breasts, black_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:------------|
| raw | 137 | 138.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_aki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 137 | 92.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_aki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 305 | 182.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_aki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 137 | 127.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_aki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 305 | 238.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_aki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yamato_aki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, camouflage, navel, cleavage, fingerless_gloves, beret, dog_tags, looking_at_viewer, bikini, military, black_gloves, open_mouth, shorts, uniform, earrings, midriff, one_eye_closed, rifle |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, navel, simple_background, smile, solo, white_background, abs, midriff, looking_at_viewer, cleavage, collarbone, huge_breasts, muscular_female, open_mouth, sports_bra |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | camouflage | navel | cleavage | fingerless_gloves | beret | dog_tags | looking_at_viewer | bikini | military | black_gloves | open_mouth | shorts | uniform | earrings | midriff | one_eye_closed | rifle | simple_background | white_background | abs | collarbone | huge_breasts | muscular_female | sports_bra |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | | | | X | | | | X | | | | X | | | X | X | X | X | X | X | X |
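The IMG+TXT packages can also be used directly, without waifuc. Below is a minimal sketch that downloads the 800px package and pairs each image with its tag file; the archive layout (one same-named `.txt` of tags per image) is assumed from the package type, so adjust as needed.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/yamato_aki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-named .txt tag file (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
        continue
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if os.path.exists(txt_path):
        with open(txt_path, 'r', encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```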
CyberHarem/yamato_aki_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T15:33:28+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T19:53:05+00:00
94b1229f74b0a07b7bad9f216c1a99868bd2c55c
# Dataset of Aozaki Touko

This is the dataset of Aozaki Touko, containing 156 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:--------------------------------------------------------------------------|
| raw | 156 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 338 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 156 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 156 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 156 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 156 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 156 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 338 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 338 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 338 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/aozaki_touko_karanokyoukai
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T15:40:21+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:40:07+00:00
9089a2ddbd96417e7616dea5287260c9053c70c6
# Dataset of yoshioka_saki/吉岡沙紀 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of yoshioka_saki/吉岡沙紀 (THE iDOLM@STER: Cinderella Girls), containing 50 images and their tags.

The core tags of this character are `short_hair, brown_hair, green_eyes, breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 50 | 57.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoshioka_saki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 50 | 37.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoshioka_saki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 124 | 77.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoshioka_saki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 50 | 51.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoshioka_saki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 124 | 99.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yoshioka_saki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yoshioka_saki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, open_mouth, smile, fingerless_gloves, looking_at_viewer, midriff, solo, cleavage, headset, jacket, navel, belt, black_gloves, blush, collarbone, crop_top, hood, medium_breasts |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, earrings, fingerless_gloves, hair_ornament, looking_at_viewer, black_gloves, ninja, sleeveless, bangs, bare_shoulders, choker, cleavage, collarbone, hair_between_eyes, kimono, blush, fishnet_thighhighs, flower, garter_straps, gradient_background, large_breasts, obi, open_mouth, shuriken, upper_body |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | smile, 1girl, hat, hoodie, solo, white_gloves, paint |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | solo, 1girl, bracelet, character_name, cleavage, medium_breasts, grin, looking_at_viewer, navel, card_(medium), earrings, gem_(symbol), open_mouth, orange_hair, pants, weapon |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | smile | fingerless_gloves | looking_at_viewer | midriff | solo | cleavage | headset | jacket | navel | belt | black_gloves | blush | collarbone | crop_top | hood | medium_breasts | earrings | hair_ornament | ninja | sleeveless | bangs | bare_shoulders | choker | hair_between_eyes | kimono | fishnet_thighhighs | flower | garter_straps | gradient_background | large_breasts | obi | shuriken | upper_body | hat | hoodie | white_gloves | paint | bracelet | character_name | grin | card_(medium) | gem_(symbol) | orange_hair | pants | weapon |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | | | | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | X | | X | X | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
CyberHarem/yoshioka_saki_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T15:46:56+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:45:29+00:00
00a6823c2b934aee2015e2d7d748fab412d2acab
# Dataset of matsubara_saya/松原早耶 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of matsubara_saya/松原早耶 (THE iDOLM@STER: Cinderella Girls), containing 20 images and their tags.

The core tags of this character are `short_hair, red_eyes, black_hair, bangs, earrings, hat, bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 20 | 24.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_saya_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 20 | 14.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_saya_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 44 | 28.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_saya_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 20 | 21.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_saya_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 44 | 40.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsubara_saya_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/matsubara_saya_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, jewelry, dress, frills, blush, gloves, heart |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | jewelry | dress | frills | blush | gloves | heart |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X |
CyberHarem/matsubara_saya_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T15:49:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:59:54+00:00
dd5a87981d7713e669278c8caca9145de3cc1585
# Dataset Card for "babylm-10M-aochildes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-aochildes
[ "region:us" ]
2023-09-15T16:04:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2140547, "num_examples": 80000}, {"name": "valid", "num_bytes": 1987198, "num_examples": 70000}, {"name": "test", "num_bytes": 1648555, "num_examples": 60000}], "download_size": 3235049, "dataset_size": 5776300}}
2023-09-15T16:04:10+00:00
62acc1b015a42a1387b65b4b7ef557ef0571bf46
# Dataset Card for "babylm-10M-children-stories" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-children-stories
[ "region:us" ]
2023-09-15T16:06:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1827662, "num_examples": 5737}, {"name": "valid", "num_bytes": 1425137, "num_examples": 5996}, {"name": "test", "num_bytes": 1804421, "num_examples": 7959}], "download_size": 3064805, "dataset_size": 5057220}}
2023-09-15T16:06:11+00:00
8c97fb4406f7a8a4572c43a86a5bebcbe30d9e0a
# Dataset Card for "babylm-10M-cbt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-cbt
[ "region:us" ]
2023-09-15T16:06:43+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2705697, "num_examples": 26000}, {"name": "valid", "num_bytes": 1220938, "num_examples": 12747}, {"name": "test", "num_bytes": 1578682, "num_examples": 16646}], "download_size": 3370383, "dataset_size": 5505317}}
2023-09-15T16:06:48+00:00
b2961c2b210b53131423cc55c64dc916246e8607
# Dataset of nanjou_hikaru/南条光 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of nanjou_hikaru/南条光 (THE iDOLM@STER: Cinderella Girls), containing 128 images and their tags.

The core tags of this character are `long_hair, blue_eyes, black_hair, ahoge`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:------------|
| raw | 128 | 114.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanjou_hikaru_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 128 | 78.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanjou_hikaru_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 258 | 145.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanjou_hikaru_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 128 | 104.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanjou_hikaru_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 258 | 190.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanjou_hikaru_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nanjou_hikaru_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 50 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, gloves, scarf, belt, looking_at_viewer |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | gloves | scarf | belt | looking_at_viewer |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 50 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X |
CyberHarem/nanjou_hikaru_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T16:16:11+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T16:06:23+00:00
[]
[]
430616b8a0e3088b6ef6a76a608e9b8318f97fde
# HHH-alignment

## Install

To install `lm-eval` from the GitHub repository main branch, run:

```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

## Basic Usage

> **Note**: When reporting results from the eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.

### Hugging Face `transformers`

To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `hhh_alignment_vi`, you can use the following command:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
    --tasks hhh_alignment_vi \
    --batch_size auto \
    --device cuda:0
```

Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
    --tasks hhh_alignment_vi \
    --device cuda:0
```

To evaluate models that are loaded via `AutoSeq2SeqLM` in Hugging Face, use `hf-seq2seq` instead. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`.*

> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring.
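Beyond running the harness, the underlying dataset can be inspected directly. A minimal sketch, assuming the feature schema recorded in this repo's metadata (`input`, `targets.choices`, `targets.labels`, `metadata.subset`, single `test` split); the reading of `labels` as marking the preferred choice is an assumption based on the HHH task format:

```python
from datasets import load_dataset

# load the single test split of the default config
data = load_dataset("vlsp-2023-vllm/hhh_alignment", split="test")

example = data[0]
print(example["input"])               # prompt shown to the model
print(example["targets"]["choices"])  # candidate responses
# labels: assumed 1 for the preferred response, 0 otherwise (HHH convention)
print(example["targets"]["labels"])
print(example["metadata"]["subset"])  # which HHH subset the example belongs to
```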
vlsp-2023-vllm/hhh_alignment
[ "region:us" ]
2023-09-15T16:17:32+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "targets", "struct": [{"name": "choices", "sequence": "string"}, {"name": "labels", "sequence": "int32"}]}, {"name": "metadata", "struct": [{"name": "subset", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 285938, "num_examples": 221}], "download_size": 66013, "dataset_size": 285938}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2023-10-30T03:32:46+00:00
[]
[]
210e64b1523845e783b910743aa3502636c57859
# Dataset Card for "clean_prs2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/clean_prs2
[ "region:us" ]
2023-09-15T16:35:40+00:00
{"dataset_info": {"features": [{"name": "bucket", "dtype": "string"}, {"name": "pull_request_info", "struct": [{"name": "org.id", "dtype": "int64"}, {"name": "public", "dtype": "bool"}, {"name": "pull_request.additions", "dtype": "int64"}, {"name": "pull_request.body", "dtype": "string"}, {"name": "pull_request.changed_files", "dtype": "int64"}, {"name": "pull_request.closed_at", "dtype": "string"}, {"name": "pull_request.comments", "dtype": "int64"}, {"name": "pull_request.commits", "dtype": "int64"}, {"name": "pull_request.created_at", "dtype": "string"}, {"name": "pull_request.deletions", "dtype": "int64"}, {"name": "pull_request.guid", "dtype": "string"}, {"name": "pull_request.id", "dtype": "int64"}, {"name": "pull_request.merged_at", "dtype": "string"}, {"name": "pull_request.merged_by.login", "dtype": "string"}, {"name": "pull_request.milestone.description", "dtype": "string"}, {"name": "pull_request.milestone.number", "dtype": "int64"}, {"name": "pull_request.milestone.title", "dtype": "string"}, {"name": "pull_request.number", "dtype": "int64"}, {"name": "pull_request.review_comments", "dtype": "int64"}, {"name": "pull_request.state", "dtype": "string"}, {"name": "pull_request.title", "dtype": "string"}, {"name": "pull_request.user.id", "dtype": "int64"}, {"name": "pull_request.user.login", "dtype": "string"}, {"name": "repo.id", "dtype": "int64"}, {"name": "repo.name", "dtype": "string"}]}, {"name": "head_repo_info", "struct": [{"name": "pull_request.head.label", "dtype": "string"}, {"name": "pull_request.head.ref", "dtype": "string"}, {"name": "pull_request.head.repo.default_branch", "dtype": "string"}, {"name": "pull_request.head.repo.description", "dtype": "string"}, {"name": "pull_request.head.repo.homepage", "dtype": "string"}, {"name": "pull_request.head.repo.language", "dtype": "string"}, {"name": "pull_request.head.repo.license.name", "dtype": "string"}, {"name": "pull_request.head.repo.name", "dtype": "string"}, {"name": "pull_request.head.repo.owner.login", "dtype": "string"}, {"name": "pull_request.head.repo.owner.type", "dtype": "string"}, {"name": "pull_request.head.repo.private", "dtype": "bool"}, {"name": "pull_request.head.repo.stargazers_count", "dtype": "int64"}, {"name": "pull_request.head.sha", "dtype": "string"}, {"name": "pull_request.head.user.login", "dtype": "string"}, {"name": "pull_request.head.user.type", "dtype": "string"}]}, {"name": "base_repo_info", "struct": [{"name": "pull_request.base.label", "dtype": "string"}, {"name": "pull_request.base.ref", "dtype": "string"}, {"name": "pull_request.base.repo.default_branch", "dtype": "string"}, {"name": "pull_request.base.repo.description", "dtype": "string"}, {"name": "pull_request.base.repo.forks_count", "dtype": "int64"}, {"name": "pull_request.base.repo.homepage", "dtype": "string"}, {"name": "pull_request.base.repo.language", "dtype": "string"}, {"name": "pull_request.base.repo.license.name", "dtype": "string"}, {"name": "pull_request.base.repo.name", "dtype": "string"}, {"name": "pull_request.base.repo.open_issues_count", "dtype": "int64"}, {"name": "pull_request.base.repo.owner.login", "dtype": "string"}, {"name": "pull_request.base.repo.owner.type", "dtype": "string"}, {"name": "pull_request.base.repo.private", "dtype": "bool"}, {"name": "pull_request.base.repo.stargazers_count", "dtype": "int64"}, {"name": "pull_request.base.repo.watchers_count", "dtype": "int64"}, {"name": "pull_request.base.sha", "dtype": "string"}, {"name": "pull_request.base.user.login", "dtype": "string"}, {"name": 
"pull_request.base.user.type", "dtype": "string"}, {"name": "pull_request.comments", "dtype": "int64"}, {"name": "pull_request.label.name", "dtype": "null"}, {"name": "pull_request.review_comments", "dtype": "int64"}]}, {"name": "events", "list": [{"name": "action", "dtype": "string"}, {"name": "actor.id", "dtype": "int64"}, {"name": "actor.login", "dtype": "string"}, {"name": "comment.author_association", "dtype": "string"}, {"name": "comment.body", "dtype": "string"}, {"name": "comment.commit_id", "dtype": "string"}, {"name": "comment.created_at", "dtype": "string"}, {"name": "comment.diff_hunk", "dtype": "string"}, {"name": "comment.id", "dtype": "int64"}, {"name": "comment.in_reply_to_id", "dtype": "int64"}, {"name": "comment.line", "dtype": "int64"}, {"name": "comment.original_commit_id", "dtype": "string"}, {"name": "comment.original_line", "dtype": "int64"}, {"name": "comment.original_position", "dtype": "int64"}, {"name": "comment.original_start_line", "dtype": "int64"}, {"name": "comment.path", "dtype": "string"}, {"name": "comment.position", "dtype": "int64"}, {"name": "comment.side", "dtype": "string"}, {"name": "comment.start_line", "dtype": "int64"}, {"name": "comment.start_side", "dtype": "string"}, {"name": "comment.updated_at", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[us, tz=UTC]"}, {"name": "issue.author", "dtype": "string"}, {"name": "issue.comment", "dtype": "string"}, {"name": "issue.comment_id", "dtype": "float64"}, {"name": "review.author_association", "dtype": "string"}, {"name": "review.body", "dtype": "string"}, {"name": "review.commit_id", "dtype": "string"}, {"name": "review.id", "dtype": "int64"}, {"name": "review.state", "dtype": "string"}, {"name": "review.submitted_at", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "user.login", "dtype": "string"}, {"name": "user.type", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 54214029, "num_examples": 10000}], "download_size": 16095878, "dataset_size": 54214029}}
2023-09-15T16:58:59+00:00
[]
[]
79ac2a5b8ac84e53b43ba30a66bdbdd475aa1e50
# Dataset Card for Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [oh-yeontaek/llama-2-70B-LoRA-assemble-v3](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3",
	"harness_truthfulqa_mc_0",
	split="train")
```

## Latest results

These are the [latest results from run 2023-09-15T17:36:30.757691](https://huggingface.co/datasets/open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3/blob/main/results_2023-09-15T17-36-30.757691.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6985803552112708, "acc_stderr": 0.03118492094070661, "acc_norm": 0.7024274155828159, "acc_norm_stderr": 0.031154550420018332, "mc1": 0.47980416156670747, "mc1_stderr": 0.01748921684973705, "mc2": 0.658093697491632, "mc2_stderr": 0.014747866760131165 }, "harness|arc:challenge|25": { "acc": 0.6860068259385665, "acc_stderr": 0.013562691224726291, "acc_norm": 0.7209897610921502, "acc_norm_stderr": 0.013106784883601334 }, "harness|hellaswag|10": { "acc": 0.6820354511053575, "acc_stderr": 0.004647338877642188, "acc_norm": 0.8740290778729337, "acc_norm_stderr": 0.0033113844981586464 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6370370370370371, "acc_stderr": 0.041539484047424, "acc_norm": 0.6370370370370371, "acc_norm_stderr": 0.041539484047424 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7828947368421053, "acc_stderr": 0.03355045304882924, "acc_norm": 0.7828947368421053, "acc_norm_stderr": 0.03355045304882924 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.76, "acc_stderr": 0.04292346959909284, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7547169811320755, "acc_stderr": 0.026480357179895695, "acc_norm": 0.7547169811320755, "acc_norm_stderr": 0.026480357179895695 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8194444444444444, "acc_stderr": 0.03216600808802267, "acc_norm": 0.8194444444444444, "acc_norm_stderr": 0.03216600808802267 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145632, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3431372549019608, "acc_stderr": 0.04724007352383888, "acc_norm": 0.3431372549019608, "acc_norm_stderr": 0.04724007352383888 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.676595744680851, "acc_stderr": 0.03057944277361034, "acc_norm": 0.676595744680851, "acc_norm_stderr": 0.03057944277361034 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.04692008381368909, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.04692008381368909 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6413793103448275, "acc_stderr": 0.03996629574876719, "acc_norm": 0.6413793103448275, "acc_norm_stderr": 0.03996629574876719 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.47354497354497355, "acc_stderr": 0.025715239811346758, "acc_norm": 0.47354497354497355, "acc_norm_stderr": 0.025715239811346758 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.49206349206349204, "acc_stderr": 0.044715725362943486, 
"acc_norm": 0.49206349206349204, "acc_norm_stderr": 0.044715725362943486 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.02188617856717253, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.02188617856717253 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.541871921182266, "acc_stderr": 0.03505630140785741, "acc_norm": 0.541871921182266, "acc_norm_stderr": 0.03505630140785741 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8484848484848485, "acc_stderr": 0.027998073798781675, "acc_norm": 0.8484848484848485, "acc_norm_stderr": 0.027998073798781675 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8888888888888888, "acc_stderr": 0.022390787638216763, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.022390787638216763 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.927461139896373, "acc_stderr": 0.018718998520678178, "acc_norm": 0.927461139896373, "acc_norm_stderr": 0.018718998520678178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6974358974358974, "acc_stderr": 0.02329088805377272, "acc_norm": 0.6974358974358974, "acc_norm_stderr": 0.02329088805377272 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32592592592592595, "acc_stderr": 0.028578348365473072, "acc_norm": 0.32592592592592595, "acc_norm_stderr": 0.028578348365473072 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.773109243697479, "acc_stderr": 0.02720537153827947, "acc_norm": 0.773109243697479, "acc_norm_stderr": 0.02720537153827947 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4966887417218543, "acc_stderr": 0.04082393379449654, "acc_norm": 0.4966887417218543, "acc_norm_stderr": 0.04082393379449654 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8954128440366973, "acc_stderr": 0.013120530245265586, "acc_norm": 0.8954128440366973, "acc_norm_stderr": 0.013120530245265586 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5833333333333334, "acc_stderr": 0.03362277436608043, "acc_norm": 0.5833333333333334, "acc_norm_stderr": 0.03362277436608043 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9019607843137255, "acc_stderr": 0.020871118455552097, "acc_norm": 0.9019607843137255, "acc_norm_stderr": 0.020871118455552097 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.890295358649789, "acc_stderr": 0.020343400734868837, "acc_norm": 0.890295358649789, "acc_norm_stderr": 0.020343400734868837 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7623318385650224, "acc_stderr": 0.028568079464714274, "acc_norm": 0.7623318385650224, "acc_norm_stderr": 0.028568079464714274 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8396946564885496, "acc_stderr": 0.03217829420744632, "acc_norm": 0.8396946564885496, "acc_norm_stderr": 0.03217829420744632 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8512396694214877, "acc_stderr": 0.03248470083807194, "acc_norm": 0.8512396694214877, "acc_norm_stderr": 0.03248470083807194 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8333333333333334, "acc_stderr": 0.03602814176392645, "acc_norm": 0.8333333333333334, 
"acc_norm_stderr": 0.03602814176392645 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8282208588957055, "acc_stderr": 0.02963471727237104, "acc_norm": 0.8282208588957055, "acc_norm_stderr": 0.02963471727237104 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.03675668832233188, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.03675668832233188 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8974358974358975, "acc_stderr": 0.01987565502786745, "acc_norm": 0.8974358974358975, "acc_norm_stderr": 0.01987565502786745 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.859514687100894, "acc_stderr": 0.012426211353093448, "acc_norm": 0.859514687100894, "acc_norm_stderr": 0.012426211353093448 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7658959537572254, "acc_stderr": 0.022797110278071128, "acc_norm": 0.7658959537572254, "acc_norm_stderr": 0.022797110278071128 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.582122905027933, "acc_stderr": 0.016495400635820084, "acc_norm": 0.582122905027933, "acc_norm_stderr": 0.016495400635820084 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7483660130718954, "acc_stderr": 0.024848018263875195, "acc_norm": 0.7483660130718954, "acc_norm_stderr": 0.024848018263875195 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7427652733118971, "acc_stderr": 0.024826171289250888, "acc_norm": 0.7427652733118971, "acc_norm_stderr": 0.024826171289250888 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8117283950617284, "acc_stderr": 0.021751866060815882, "acc_norm": 0.8117283950617284, "acc_norm_stderr": 0.021751866060815882 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.574468085106383, "acc_stderr": 0.02949482760014436, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.02949482760014436 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5788787483702738, "acc_stderr": 0.012610325733489905, "acc_norm": 0.5788787483702738, "acc_norm_stderr": 0.012610325733489905 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7242647058823529, "acc_stderr": 0.027146271936625162, "acc_norm": 0.7242647058823529, "acc_norm_stderr": 0.027146271936625162 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7565359477124183, "acc_stderr": 0.017362473762146613, "acc_norm": 0.7565359477124183, "acc_norm_stderr": 0.017362473762146613 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7454545454545455, "acc_stderr": 0.04172343038705383, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.04172343038705383 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7959183673469388, "acc_stderr": 0.025801283475090496, "acc_norm": 0.7959183673469388, "acc_norm_stderr": 0.025801283475090496 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8905472636815921, "acc_stderr": 0.02207632610182466, "acc_norm": 0.8905472636815921, "acc_norm_stderr": 0.02207632610182466 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5120481927710844, "acc_stderr": 0.03891364495835817, "acc_norm": 
0.5120481927710844, "acc_norm_stderr": 0.03891364495835817 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8596491228070176, "acc_stderr": 0.0266405825391332, "acc_norm": 0.8596491228070176, "acc_norm_stderr": 0.0266405825391332 }, "harness|truthfulqa:mc|0": { "mc1": 0.47980416156670747, "mc1_stderr": 0.01748921684973705, "mc2": 0.658093697491632, "mc2_stderr": 0.014747866760131165 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
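For the aggregated numbers alone, the results file linked under "Latest results" can be fetched without loading any per-task details. A minimal sketch; the filename comes from the link above, while the `data.get("results", data)` guard is an assumption about whether the on-disk file nests the metric dict shown in the snippet:

```python
import json

from huggingface_hub import hf_hub_download

# fetch the aggregated results file referenced in "Latest results"
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3",
    repo_type="dataset",
    filename="results_2023-09-15T17-36-30.757691.json",
)
with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)

# the metric dict may sit at the top level or under a "results" key
results = data.get("results", data)

# overall metrics, matching the "all" block above
print(results["all"]["acc"], results["all"]["acc_norm"])

# per-subject accuracy for the hendrycksTest (MMLU) tasks
for task, metrics in sorted(results.items()):
    if task.startswith("harness|hendrycksTest"):
        print(f'{task}: {metrics["acc"]:.4f}')
```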
open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3
[ "region:us" ]
2023-09-15T16:36:47+00:00
{"pretty_name": "Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3", "dataset_summary": "Dataset automatically created during the evaluation run of model [oh-yeontaek/llama-2-70B-LoRA-assemble-v3](https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-15T17:36:30.757691](https://huggingface.co/datasets/open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3/blob/main/results_2023-09-15T17-36-30.757691.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6985803552112708,\n \"acc_stderr\": 0.03118492094070661,\n \"acc_norm\": 0.7024274155828159,\n \"acc_norm_stderr\": 0.031154550420018332,\n \"mc1\": 0.47980416156670747,\n \"mc1_stderr\": 0.01748921684973705,\n \"mc2\": 0.658093697491632,\n \"mc2_stderr\": 0.014747866760131165\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6860068259385665,\n \"acc_stderr\": 0.013562691224726291,\n \"acc_norm\": 0.7209897610921502,\n \"acc_norm_stderr\": 0.013106784883601334\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6820354511053575,\n \"acc_stderr\": 0.004647338877642188,\n \"acc_norm\": 0.8740290778729337,\n \"acc_norm_stderr\": 0.0033113844981586464\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.041539484047424,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.041539484047424\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882924,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882924\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7547169811320755,\n \"acc_stderr\": 0.026480357179895695,\n \"acc_norm\": 0.7547169811320755,\n \"acc_norm_stderr\": 0.026480357179895695\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n \"acc_stderr\": 0.03216600808802267,\n \"acc_norm\": 0.8194444444444444,\n \"acc_norm_stderr\": 0.03216600808802267\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.04724007352383888,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.04724007352383888\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.676595744680851,\n \"acc_stderr\": 0.03057944277361034,\n \"acc_norm\": 0.676595744680851,\n \"acc_norm_stderr\": 0.03057944277361034\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6413793103448275,\n \"acc_stderr\": 0.03996629574876719,\n \"acc_norm\": 0.6413793103448275,\n \"acc_norm_stderr\": 0.03996629574876719\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.47354497354497355,\n \"acc_stderr\": 0.025715239811346758,\n \"acc_norm\": 0.47354497354497355,\n \"acc_norm_stderr\": 0.025715239811346758\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.02188617856717253,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.02188617856717253\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8484848484848485,\n \"acc_stderr\": 0.027998073798781675,\n \"acc_norm\": 0.8484848484848485,\n \"acc_norm_stderr\": 0.027998073798781675\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.022390787638216763,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.022390787638216763\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.927461139896373,\n \"acc_stderr\": 0.018718998520678178,\n \"acc_norm\": 0.927461139896373,\n \"acc_norm_stderr\": 0.018718998520678178\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6974358974358974,\n \"acc_stderr\": 0.02329088805377272,\n \"acc_norm\": 0.6974358974358974,\n \"acc_norm_stderr\": 0.02329088805377272\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.028578348365473072,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.028578348365473072\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.773109243697479,\n \"acc_stderr\": 0.02720537153827947,\n \"acc_norm\": 0.773109243697479,\n \"acc_norm_stderr\": 0.02720537153827947\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4966887417218543,\n \"acc_stderr\": 0.04082393379449654,\n \"acc_norm\": 0.4966887417218543,\n \"acc_norm_stderr\": 0.04082393379449654\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8954128440366973,\n \"acc_stderr\": 0.013120530245265586,\n \"acc_norm\": 0.8954128440366973,\n \"acc_norm_stderr\": 0.013120530245265586\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.03362277436608043,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.03362277436608043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9019607843137255,\n \"acc_stderr\": 0.020871118455552097,\n \"acc_norm\": 0.9019607843137255,\n \"acc_norm_stderr\": 0.020871118455552097\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.890295358649789,\n \"acc_stderr\": 0.020343400734868837,\n \"acc_norm\": 0.890295358649789,\n \"acc_norm_stderr\": 0.020343400734868837\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7623318385650224,\n \"acc_stderr\": 0.028568079464714274,\n \"acc_norm\": 0.7623318385650224,\n \"acc_norm_stderr\": 0.028568079464714274\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8396946564885496,\n \"acc_stderr\": 0.03217829420744632,\n \"acc_norm\": 0.8396946564885496,\n \"acc_norm_stderr\": 0.03217829420744632\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.03602814176392645,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.03602814176392645\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8282208588957055,\n \"acc_stderr\": 0.02963471727237104,\n \"acc_norm\": 0.8282208588957055,\n \"acc_norm_stderr\": 0.02963471727237104\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n \"acc_stderr\": 0.01987565502786745,\n \"acc_norm\": 0.8974358974358975,\n \"acc_norm_stderr\": 0.01987565502786745\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.859514687100894,\n \"acc_stderr\": 0.012426211353093448,\n \"acc_norm\": 0.859514687100894,\n \"acc_norm_stderr\": 0.012426211353093448\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7658959537572254,\n \"acc_stderr\": 0.022797110278071128,\n \"acc_norm\": 0.7658959537572254,\n \"acc_norm_stderr\": 0.022797110278071128\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.582122905027933,\n \"acc_stderr\": 0.016495400635820084,\n \"acc_norm\": 0.582122905027933,\n \"acc_norm_stderr\": 0.016495400635820084\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.024848018263875195,\n \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.024848018263875195\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7427652733118971,\n \"acc_stderr\": 0.024826171289250888,\n \"acc_norm\": 0.7427652733118971,\n \"acc_norm_stderr\": 0.024826171289250888\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8117283950617284,\n \"acc_stderr\": 0.021751866060815882,\n \"acc_norm\": 0.8117283950617284,\n \"acc_norm_stderr\": 0.021751866060815882\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.02949482760014436,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.02949482760014436\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5788787483702738,\n \"acc_stderr\": 0.012610325733489905,\n \"acc_norm\": 0.5788787483702738,\n \"acc_norm_stderr\": 0.012610325733489905\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7242647058823529,\n \"acc_stderr\": 0.027146271936625162,\n \"acc_norm\": 0.7242647058823529,\n \"acc_norm_stderr\": 0.027146271936625162\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7565359477124183,\n \"acc_stderr\": 0.017362473762146613,\n \"acc_norm\": 0.7565359477124183,\n \"acc_norm_stderr\": 0.017362473762146613\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.04172343038705383,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.04172343038705383\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7959183673469388,\n \"acc_stderr\": 0.025801283475090496,\n \"acc_norm\": 0.7959183673469388,\n \"acc_norm_stderr\": 0.025801283475090496\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8905472636815921,\n \"acc_stderr\": 0.02207632610182466,\n \"acc_norm\": 0.8905472636815921,\n \"acc_norm_stderr\": 0.02207632610182466\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.0266405825391332,\n \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.0266405825391332\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.47980416156670747,\n \"mc1_stderr\": 0.01748921684973705,\n \"mc2\": 0.658093697491632,\n \"mc2_stderr\": 0.014747866760131165\n }\n}\n```", "repo_url": "https://huggingface.co/oh-yeontaek/llama-2-70B-LoRA-assemble-v3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email 
protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|arc:challenge|25_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hellaswag|10_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-15T17-36-30.757691.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-15T17-36-30.757691.parquet", 
"**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-15T17-36-30.757691.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-15T17-36-30.757691.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-15T17-36-30.757691.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_15T17_36_30.757691", "path": ["results_2023-09-15T17-36-30.757691.parquet"]}, {"split": "latest", "path": ["results_2023-09-15T17-36-30.757691.parquet"]}]}]}
2023-09-15T16:37:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model oh-yeontaek/llama-2-70B-LoRA-assemble-v3 on the Open LLM Leaderboard. The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-09-15T17:36:30.757691 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
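The loading snippet referenced above was stripped in this rendering; a minimal sketch, assuming the details repo follows the Open LLM Leaderboard's usual `details_<org>__<model>` naming (the exact repo ID is an assumption, while the config name and the "latest" split below come from this card's file list):

```python
from datasets import load_dataset

# Assumed repo ID, following the leaderboard's usual naming scheme.
repo_id = "open-llm-leaderboard/details_oh-yeontaek__llama-2-70B-LoRA-assemble-v3"

# Each evaluated task is its own config; the "latest" split always
# points to the newest run (here 2023-09-15T17:36:30.757691).
details = load_dataset(repo_id, "harness_truthfulqa_mc_0", split="latest")
print(details)
```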
[ "# Dataset Card for Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model oh-yeontaek/llama-2-70B-LoRA-assemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-09-15T17:36:30.757691(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model oh-yeontaek/llama-2-70B-LoRA-assemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-09-15T17:36:30.757691(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 28, 31, 176, 68, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of oh-yeontaek/llama-2-70B-LoRA-assemble-v3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model oh-yeontaek/llama-2-70B-LoRA-assemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-15T17:36:30.757691(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
52221e1238d29811656ebeccb23870b156db99c9
# Dataset of kusakabe_wakaba/日下部若葉 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of kusakabe_wakaba/日下部若葉 (THE iDOLM@STER: Cinderella Girls), containing 157 images and their tags.

The core tags of this character are `brown_hair, long_hair, green_eyes, thick_eyebrows, wavy_hair, bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 157 | 142.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kusakabe_wakaba_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 157 | 96.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kusakabe_wakaba_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 342 | 190.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kusakabe_wakaba_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 157 | 131.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kusakabe_wakaba_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 342 | 253.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kusakabe_wakaba_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kusakabe_wakaba_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, hair_scrunchie, twintails, blush, green_bikini, navel, polka_dot_bikini, sweat, white_background, looking_at_viewer, open_mouth, small_breasts, solo, yellow_scrunchie, bracelet, petite, simple_background, smile | | 1 | 30 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, blush, looking_at_viewer, smile, dress, open_mouth, white_background, hair_flower, simple_background, hairband, jewelry | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_scrunchie | twintails | blush | green_bikini | navel | polka_dot_bikini | sweat | white_background | looking_at_viewer | open_mouth | small_breasts | solo | yellow_scrunchie | bracelet | petite | simple_background | smile | dress | hair_flower | hairband | jewelry | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:------------|:--------|:---------------|:--------|:-------------------|:--------|:-------------------|:--------------------|:-------------|:----------------|:-------|:-------------------|:-----------|:---------|:--------------------|:--------|:--------|:--------------|:-----------|:----------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | 1 | 30 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | | | | | X | X | X | | X | | | | X | X | X | X | X | X |
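The processed IMG+TXT packages in the package table above are plain zip archives of image/caption pairs; a minimal sketch for fetching and unpacking the 800px package (the repo ID and file name come from the table; the target directory name is an arbitrary choice):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download the processed 800px IMG+TXT package listed under "List of Packages".
zip_file = hf_hub_download(
    repo_id='CyberHarem/kusakabe_wakaba_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# Extract the image/caption (.txt) pairs to a local directory (arbitrary name).
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```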
CyberHarem/kusakabe_wakaba_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T16:44:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:37:45+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kusakabe\_wakaba/日下部若葉 (THE iDOLM@STER: Cinderella Girls)
====================================================================

This is the dataset of kusakabe\_wakaba/日下部若葉 (THE iDOLM@STER: Cinderella Girls), containing 157 images and their tags.

The core tags of this character are 'brown\_hair, long\_hair, green\_eyes, thick\_eyebrows, wavy\_hair, bow', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:

List of Clusters
----------------

List of tag clustering results; maybe some outfits can be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
34fb3f47975ff48454c6d581579e462e611a7719
A dataset for regularization during training, created using NAI and [7eu7d7/HCP-Diffusion-datas](https://huggingface.co/datasets/7eu7d7/HCP-Diffusion-datas). It consists of 2000 images, each 512x512 pixels.
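For reference, a minimal download sketch (the repo ID comes from this card; `snapshot_download` simply mirrors the repository files, since the card does not document a loading script):

```python
from huggingface_hub import snapshot_download

# Mirror the whole dataset repo (2000 regularization images) locally.
local_dir = snapshot_download(
    repo_id='deepghs/anime_regular_dataset',
    repo_type='dataset',
)
print(local_dir)
```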
deepghs/anime_regular_dataset
[ "size_categories:1K<n<10K", "license:mit", "art", "region:us" ]
2023-09-15T16:45:24+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
2023-09-15T16:57:42+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-mit #art #region-us
A dataset for regularization during training, created using NAI and 7eu7d7/HCP-Diffusion-datas. It consists of 2000 images, each 512x512 pixels.
[]
[ "TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
[ 25 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n" ]
5e100c8182148b7117525a88a2a38fd7be49ea10
# Dataset of ujiie_mutsumi/氏家むつみ (THE iDOLM@STER: Cinderella Girls)

This is the dataset of ujiie_mutsumi/氏家むつみ (THE iDOLM@STER: Cinderella Girls), containing 30 images and their tags.

The core tags of this character are `black_hair, long_hair, bangs, braid, blunt_bangs, blue_eyes, single_braid`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 30 | 22.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ujiie_mutsumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 30 | 18.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ujiie_mutsumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 64 | 34.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ujiie_mutsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 30 | 21.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ujiie_mutsumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 64 | 39.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ujiie_mutsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ujiie_mutsumi_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, open_mouth, earrings, hat, skirt, thighhighs, belt, card_(medium), character_name, gem_(symbol), necklace | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, solo, looking_at_viewer, open_mouth, smile, long_sleeves, hair_ornament, hair_over_shoulder, simple_background, sweat, white_background, white_shirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | open_mouth | earrings | hat | skirt | thighhighs | belt | card_(medium) | character_name | gem_(symbol) | necklace | blush | looking_at_viewer | long_sleeves | hair_ornament | hair_over_shoulder | simple_background | sweat | white_background | white_shirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-------------|:-----------|:------|:--------|:-------------|:-------|:----------------|:-----------------|:---------------|:-----------|:--------|:--------------------|:---------------|:----------------|:---------------------|:--------------------|:--------|:-------------------|:--------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X |
CyberHarem/ujiie_mutsumi_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T16:48:50+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:21:33+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ujiie\_mutsumi/氏家むつみ (THE iDOLM@STER: Cinderella Girls)
==================================================================

This is the dataset of ujiie\_mutsumi/氏家むつみ (THE iDOLM@STER: Cinderella Girls), containing 30 images and their tags.

The core tags of this character are 'black\_hair, long\_hair, bangs, braid, blunt\_bangs, blue\_eyes, single\_braid', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:

List of Clusters
----------------

List of tag clustering results; maybe some outfits can be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
fa127ec1813271ef8b05c26aa745989f274fe96c
# Dataset of narumiya_yume/成宮由愛 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of narumiya_yume/成宮由愛 (THE iDOLM@STER: Cinderella Girls), containing 125 images and their tags.

The core tags of this character are `grey_hair, short_hair, mole, mole_under_eye, brown_eyes, bangs, hairband, hair_between_eyes, bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 125 | 99.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/narumiya_yume_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 125 | 76.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/narumiya_yume_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 256 | 143.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/narumiya_yume_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 125 | 94.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/narumiya_yume_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 256 | 171.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/narumiya_yume_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/narumiya_yume_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, solo, :d, looking_at_viewer, open_mouth, white_background, simple_background, dress, long_sleeves, hair_bow, shirt, upper_body | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, hair_flower, solo, smile, bracelet, dress, looking_at_viewer, sitting | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | :d | looking_at_viewer | open_mouth | white_background | simple_background | dress | long_sleeves | hair_bow | shirt | upper_body | hair_flower | smile | bracelet | sitting | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-----|:--------------------|:-------------|:-------------------|:--------------------|:--------|:---------------|:-----------|:--------|:-------------|:--------------|:--------|:-----------|:----------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | X | | | | X | | | | | X | X | X | X |
CyberHarem/narumiya_yume_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T16:52:58+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T18:37:28+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of narumiya\_yume/成宮由愛 (THE iDOLM@STER: Cinderella Girls)
=================================================================

This is the dataset of narumiya\_yume/成宮由愛 (THE iDOLM@STER: Cinderella Girls), containing 125 images and their tags.

The core tags of this character are 'grey\_hair, short\_hair, mole, mole\_under\_eye, brown\_eyes, bangs, hairband, hair\_between\_eyes, bow', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:

List of Clusters
----------------

List of tag clustering results; maybe some outfits can be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
77679a4044b356f2042b522495ab8c2603964928
# Dataset Card for "thevault-docstringstyle" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NamCyan/thevault-docstringstyle
[ "region:us" ]
2023-09-15T17:06:10+00:00
{"dataset_info": {"features": [{"name": "hexsha", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "license", "sequence": "string"}, {"name": "language", "dtype": "string"}, {"name": "identifier", "dtype": "string"}, {"name": "return_type", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "original_docstring", "dtype": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "short_docstring", "dtype": "string"}, {"name": "short_docstring_tokens", "sequence": "string"}, {"name": "comment", "sequence": "string"}, {"name": "parameters", "list": [{"name": "param", "dtype": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "docstring_params", "struct": [{"name": "returns", "list": [{"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "raises", "list": [{"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "type", "dtype": "string"}]}, {"name": "params", "list": [{"name": "identifier", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "default", "dtype": "string"}, {"name": "is_optional", "dtype": "bool"}]}, {"name": "outlier_params", "list": [{"name": "identifier", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "default", "dtype": "string"}, {"name": "is_optional", "dtype": "bool"}]}, {"name": "others", "list": [{"name": "identifier", "dtype": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}]}]}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6545943535, "num_examples": 1261519}], "download_size": 1969238091, "dataset_size": 6545943535}}
2023-09-15T17:55:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "thevault-docstringstyle" More Information needed
[ "# Dataset Card for \"thevault-docstringstyle\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"thevault-docstringstyle\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"thevault-docstringstyle\"\n\nMore Information needed" ]
c964072508beddf79489c29ac183cf9df145633a
# Dataset of oohara_michiru/大原みちる (THE iDOLM@STER: Cinderella Girls)

This is the dataset of oohara_michiru/大原みちる (THE iDOLM@STER: Cinderella Girls), containing 53 images and their tags.

The core tags of this character are `brown_hair, drill_hair, pink_eyes, fang, bow, scrunchie`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 53 | 40.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oohara_michiru_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 53 | 30.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oohara_michiru_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 55.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oohara_michiru_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 53 | 38.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oohara_michiru_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 69.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oohara_michiru_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/oohara_michiru_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, open_mouth, solo, :d, bread, school_uniform, skirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | :d | bread | school_uniform | skirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:-----|:--------|:-----------------|:--------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X |
CyberHarem/oohara_michiru_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T17:07:05+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T19:35:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of oohara\_michiru/大原みちる (THE iDOLM@STER: Cinderella Girls)
===================================================================

This is the dataset of oohara\_michiru/大原みちる (THE iDOLM@STER: Cinderella Girls), containing 53 images and their tags.

The core tags of this character are 'brown\_hair, drill\_hair, pink\_eyes, fang, bow, scrunchie', which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
00a78e02689b734403b5f811902cf801dec04947
# Dataset of sakakibara_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of sakakibara_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls), containing 71 images and their tags.

The core tags of this character are `grey_hair, breasts, long_hair, large_breasts, purple_eyes, drill_hair, braid, twintails`, which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size      | Download                                                                                                                                      | Type       | Description                                                           |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:-----------------------------------------------------------------------|
| raw              |       71 | 46.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-raw.zip)               | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       71 | 38.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-800.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |      134 | 67.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       71 | 44.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-1200.zip)              | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |      134 | 76.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/sakakibara_satomi_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, cleavage, open_mouth, necklace, hairband, looking_at_viewer, :d, blush, microphone | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, dress, necklace, blush, simple_background, twin_braids, looking_at_viewer, open_mouth, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | open_mouth | necklace | hairband | looking_at_viewer | :d | blush | microphone | dress | simple_background | twin_braids | white_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:-------------|:-----------|:-----------|:--------------------|:-----|:--------|:-------------|:--------|:--------------------|:--------------|:-------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | X | | X | | X | X | X | X |
CyberHarem/sakakibara_satomi_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T17:15:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:39:57+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sakakibara\_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls)
=====================================================================

This is the dataset of sakakibara\_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls), containing 71 images and their tags.

The core tags of this character are 'grey\_hair, breasts, long\_hair, large\_breasts, purple\_eyes, drill\_hair, braid, twintails', which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
30babebb85ab2661e8be6fafb783980723882764
# Dataset Card for "e06f76e8" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/e06f76e8
[ "region:us" ]
2023-09-15T17:17:09+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 169, "num_examples": 10}], "download_size": 1323, "dataset_size": 169}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-15T17:17:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "e06f76e8" More Information needed
[ "# Dataset Card for \"e06f76e8\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"e06f76e8\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"e06f76e8\"\n\nMore Information needed" ]
2e68e7fc16f1c604e24b54d90197cfeccff8781f
# Dataset Card for "phr_mental_health_dataset" - This dataset is a cleaned version of [nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic) - The data is generated synthetically using gpt3.5-turbo using [this](https://github.com/jerryjalapeno/nart-100k-7b/blob/main/synthetic_conv_gen.py) script. - The dataset had a "sharegpt" style JSONL format, with each JSON having keys "human" and "gpt", having an equal number of both. - The data was then cleaned, and the following changes were made - The names "Alex" and "Charlie" were removed from the dataset, which can often come up in the conversation of fine-tuned models. - The data was then converted to the format required for llama-2-chat models. - The dataset was converted to JSONL format with just a single key, "text", which contains the combined text for training the model. - The appropriate llama-2 system prompt was added at the beginning of the conversation. - The conversation was then enclosed with [INST], [\INST], `<s> and </s>` formats as defined in [llama-2](https://huggingface.co/blog/llama2#:~:text=Using%20text-generation-inference%20and%20Inference%20Endpoints&text=You%20can%20try%20out%20Text,Deploy%20-%3E%20Inference%20Endpoints%20widget.) article. - Whether to include the last conversation, i.e., the last GPT response or not, was chosen randomly.
vibhorag101/phr_mental_therapy_dataset
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:mit", "medical", "region:us" ]
2023-09-15T17:21:24+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Synthetic Mental Therapy Dataset", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 458762343, "num_examples": 99086}], "download_size": 211247054, "dataset_size": 458762343}, "tags": ["medical"]}
2023-12-03T13:37:12+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #medical #region-us
# Dataset Card for "phr_mental_health_dataset" - This dataset is a cleaned version of nart-100k-synthetic - The data is generated synthetically using gpt3.5-turbo using this script. - The dataset had a "sharegpt" style JSONL format, with each JSON having keys "human" and "gpt", having an equal number of both. - The data was then cleaned, and the following changes were made - The names "Alex" and "Charlie" were removed from the dataset, which can often come up in the conversation of fine-tuned models. - The data was then converted to the format required for llama-2-chat models. - The dataset was converted to JSONL format with just a single key, "text", which contains the combined text for training the model. - The appropriate llama-2 system prompt was added at the beginning of the conversation. - The conversation was then enclosed with [INST], [\INST], '<s> and </s>' formats as defined in llama-2 article. - Whether to include the last conversation, i.e., the last GPT response or not, was chosen randomly.
[ "# Dataset Card for \"phr_mental_health_dataset\"\n- This dataset is a cleaned version of nart-100k-synthetic\n- The data is generated synthetically using gpt3.5-turbo using this script.\n- The dataset had a \"sharegpt\" style JSONL format, with each JSON having keys \"human\" and \"gpt\", having an equal number of both.\n- The data was then cleaned, and the following changes were made\n - The names \"Alex\" and \"Charlie\" were removed from the dataset, which can often come up in the conversation of fine-tuned models.\n- The data was then converted to the format required for llama-2-chat models.\n - The dataset was converted to JSONL format with just a single key, \"text\", which contains the combined text for training the model.\n - The appropriate llama-2 system prompt was added at the beginning of the conversation.\n - The conversation was then enclosed with [INST], [\\INST], '<s> and </s>' formats as defined in llama-2 article.\n - Whether to include the last conversation, i.e., the last GPT response or not, was chosen randomly." ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #medical #region-us \n", "# Dataset Card for \"phr_mental_health_dataset\"\n- This dataset is a cleaned version of nart-100k-synthetic\n- The data is generated synthetically using gpt3.5-turbo using this script.\n- The dataset had a \"sharegpt\" style JSONL format, with each JSON having keys \"human\" and \"gpt\", having an equal number of both.\n- The data was then cleaned, and the following changes were made\n - The names \"Alex\" and \"Charlie\" were removed from the dataset, which can often come up in the conversation of fine-tuned models.\n- The data was then converted to the format required for llama-2-chat models.\n - The dataset was converted to JSONL format with just a single key, \"text\", which contains the combined text for training the model.\n - The appropriate llama-2 system prompt was added at the beginning of the conversation.\n - The conversation was then enclosed with [INST], [\\INST], '<s> and </s>' formats as defined in llama-2 article.\n - Whether to include the last conversation, i.e., the last GPT response or not, was chosen randomly." ]
[ 41, 269 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #medical #region-us \n# Dataset Card for \"phr_mental_health_dataset\"\n- This dataset is a cleaned version of nart-100k-synthetic\n- The data is generated synthetically using gpt3.5-turbo using this script.\n- The dataset had a \"sharegpt\" style JSONL format, with each JSON having keys \"human\" and \"gpt\", having an equal number of both.\n- The data was then cleaned, and the following changes were made\n - The names \"Alex\" and \"Charlie\" were removed from the dataset, which can often come up in the conversation of fine-tuned models.\n- The data was then converted to the format required for llama-2-chat models.\n - The dataset was converted to JSONL format with just a single key, \"text\", which contains the combined text for training the model.\n - The appropriate llama-2 system prompt was added at the beginning of the conversation.\n - The conversation was then enclosed with [INST], [\\INST], '<s> and </s>' formats as defined in llama-2 article.\n - Whether to include the last conversation, i.e., the last GPT response or not, was chosen randomly." ]
0966c040a2e7faa0bc947bce3a767565364e24c5
# Dataset Card for "bbe01f48" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/bbe01f48
[ "region:us" ]
2023-09-15T17:27:45+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 217, "num_examples": 10}], "download_size": 1377, "dataset_size": 217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-15T17:27:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bbe01f48" More Information needed
[ "# Dataset Card for \"bbe01f48\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bbe01f48\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bbe01f48\"\n\nMore Information needed" ]
ee543ffcfddb4503be33ba53da2100c0e86eac91
# Dataset of kiba_manami/木場真奈美 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of kiba_manami/木場真奈美 (THE iDOLM@STER: Cinderella Girls), containing 73 images and their tags.

The core tags of this character are `short_hair, green_eyes, brown_hair, breasts, large_breasts, earrings`, which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size       | Download                                                                                                                                 | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:-----------------------------------------------------------------------|
| raw              |       73 | 75.63 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/kiba_manami_idolmastercinderellagirls/resolve/main/dataset-raw.zip)                | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       73 | 49.84 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/kiba_manami_idolmastercinderellagirls/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |      154 | 95.59 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/kiba_manami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       73 | 69.57 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/kiba_manami_idolmastercinderellagirls/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |      154 | 125.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kiba_manami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kiba_manami_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, cleavage, smile, solo, necklace, bracelet, fingerless_gloves, looking_at_viewer, black_gloves, midriff, belt, black_shorts, hair_between_eyes, holding_microphone, medium_breasts, navel, simple_background, thighhighs, black_footwear, character_name, open_jacket, open_mouth, short_sleeves, standing, thigh_boots | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, character_name, medium_breasts, pants, belt, card_(medium), cleavage, gem_(symbol), looking_at_viewer, blue_background, frills, hat_removed, necklace | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | smile | solo | necklace | bracelet | fingerless_gloves | looking_at_viewer | black_gloves | midriff | belt | black_shorts | hair_between_eyes | holding_microphone | medium_breasts | navel | simple_background | thighhighs | black_footwear | character_name | open_jacket | open_mouth | short_sleeves | standing | thigh_boots | pants | card_(medium) | gem_(symbol) | blue_background | frills | hat_removed | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:-------|:-----------|:-----------|:--------------------|:--------------------|:---------------|:----------|:-------|:---------------|:--------------------|:---------------------|:-----------------|:--------|:--------------------|:-------------|:-----------------|:-----------------|:--------------|:-------------|:----------------|:-----------|:--------------|:--------|:----------------|:---------------|:------------------|:---------|:--------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | | X | | | X | | | | X | | | | | X | | | | | | X | X | X | X | X | X |
CyberHarem/kiba_manami_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T17:34:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:28:40+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kiba\_manami/木場真奈美 (THE iDOLM@STER: Cinderella Girls)
================================================================

This is the dataset of kiba\_manami/木場真奈美 (THE iDOLM@STER: Cinderella Girls), containing 73 images and their tags.

The core tags of this character are 'short\_hair, green\_eyes, brown\_hair, breasts, large\_breasts, earrings', which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
f41f6c988305c739ec2c6e809394cc95a80f9bcd
# Dataset of nonomura_sora/野々村そら (THE iDOLM@STER: Cinderella Girls)

This is the dataset of nonomura_sora/野々村そら (THE iDOLM@STER: Cinderella Girls), containing 61 images and their tags.

The core tags of this character are `long_hair, green_eyes, breasts, twintails, black_hair, brown_hair, drill_hair, hair_ornament`, which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size       | Download                                                                                                                                   | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:-----------------------------------------------------------------------|
| raw              |       61 | 83.56 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/nonomura_sora_idolmastercinderellagirls/resolve/main/dataset-raw.zip)                | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       61 | 47.50 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/nonomura_sora_idolmastercinderellagirls/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |      145 | 101.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonomura_sora_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       61 | 71.99 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/nonomura_sora_idolmastercinderellagirls/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |      145 | 145.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonomura_sora_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)   | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nonomura_sora_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, midriff, navel, open_mouth, smile, solo, looking_at_viewer, medium_breasts, one_eye_closed, skirt, cleavage, earrings, ;d, blush, microphone, necklace, bracelet | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, card_(medium), character_name, open_mouth, smile, solo, sun_symbol, star_(symbol), ;d, one_eye_closed, orange_background, skirt, bow, dress, microphone, necklace, sparkle | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | midriff | navel | open_mouth | smile | solo | looking_at_viewer | medium_breasts | one_eye_closed | skirt | cleavage | earrings | ;d | blush | microphone | necklace | bracelet | card_(medium) | character_name | sun_symbol | star_(symbol) | orange_background | bow | dress | sparkle | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:--------|:-------------|:--------|:-------|:--------------------|:-----------------|:-----------------|:--------|:-----------|:-----------|:-----|:--------|:-------------|:-----------|:-----------|:----------------|:-----------------|:-------------|:----------------|:--------------------|:------|:--------|:----------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | X | X | | | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X |
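The cluster tags above (e.g. `1girl`, `solo`) can double as a filter once the raw package is loaded. Below is a small sketch building on the `LocalSource` loop from the loading snippet; it assumes `item.meta['tags']` is an iterable of tag names (a dict keyed by tag also satisfies this):

```python
import os

from waifuc.source import LocalSource

# keep only items whose tags include every tag in `wanted`
wanted = {'1girl', 'solo'}
os.makedirs('filtered', exist_ok=True)

source = LocalSource('dataset_dir')  # the directory extracted earlier
for item in source:
    tags = set(item.meta.get('tags') or [])  # dict keys or a list both work
    if wanted <= tags:
        item.image.save(os.path.join('filtered', item.meta['filename']))
```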
CyberHarem/nonomura_sora_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T17:51:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:42:40+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of nonomura\_sora/野々村そら (THE iDOLM@STER: Cinderella Girls)
==================================================================

This is the dataset of nonomura\_sora/野々村そら (THE iDOLM@STER: Cinderella Girls), containing 61 images and their tags.

The core tags of this character are 'long\_hair, green\_eyes, breasts, twintails, black\_hair, brown\_hair, drill\_hair, hair\_ornament', which are pruned in this dataset.

Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
e2e6c7b7368f5c8c21a3ce2d03331b1b83da32ab
# Dataset Card for Fondant Creative Commons 25 million (fondant-cc-25m)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6266919100f1a3335dbd966f/latKi21OzpP2gaIvMGXz5.png)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [How to use it](#how-to-use-it)
- [How to contribute](#how-to-contribute)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Preprocessing](#data-collection-and-preprocessing)
- [Privacy statement](#privacy-statement)
- [Opting out](#opting-out)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Disclaimer](#disclaimer)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contact](#contact)

## Dataset Description

- **Homepage:** https://www.fondant.ai/
- **Repository:** https://github.com/ml6team/fondant
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [email protected]

### Changelog

| Release | Description                            |
|---------|----------------------------------------|
| v0.1    | Release of the Fondant-cc-25m dataset  |

### Dataset Summary

Fondant-cc-25m contains 25 million image URLs with their respective [Creative Commons](https://creativecommons.org/) license information collected from the [Common Crawl web corpus](https://commoncrawl.org/). The dataset was created using [Fondant](https://fondant.ai), an open source framework that aims to simplify and speed up large-scale data processing by making self-contained pipeline components reusable across pipelines and infrastructures, and shareable within the community.

### Supported Tasks and Leaderboards

This dataset can be used for training or fine-tuning image generation or computer vision models.

### How to use it

To execute the pipeline locally, you must have [docker compose](https://docs.docker.com/compose/), [Python](https://python.org) >= 3.8, and [Git](https://git-scm.com/) installed on your system. To ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.

**Note:** For Apple M1/M2 chip users:

- Make sure that Docker uses the linux/amd64 platform and not arm64. In the Docker Dashboard, go to Settings > Features in development and make sure to uncheck `Use containerd for pulling and storing images`.
- For improved execution speed, check the box that says `Use Rosetta for x86/amd64 emulation on Apple Silicon`.

We have prepared a sample Fondant pipeline for downloading the dataset.

1) Install Fondant by running:

```bash
pip install fondant
```

2) Clone the [sample GitHub repository](https://github.com/ml6team/fondant-usecase-filter-creative-commons):

```bash
git clone https://github.com/ml6team/fondant-usecase-filter-creative-commons.git
```

3) Make sure that Docker is running, navigate to the `src` folder, and initiate the pipeline by executing:

```bash
fondant run local pipeline
```

**Note:** For local testing purposes, the pipeline will only download the first 100 images.
If you want to download the full dataset, you will need to modify the component arguments in the `pipeline.py` file, specifically the following part:

```python
load_from_hf_hub = ComponentOp(
    component_dir="components/load_from_hf_hub",
    arguments={
        "dataset_name": "fondant-ai/fondant-cc-25m",
        "column_name_mapping": load_component_column_mapping,
        "n_rows_to_load": <HERE INSERT THE NUMBER OF IMAGES YOU WANT TO DOWNLOAD>
    },
)
```

4) To visually inspect the results quickly, you can use:

```bash
fondant explore --base_path ./data
```

5) You can also choose to download images to your local machine if you prefer; we have provided an [example script](https://huggingface.co/datasets/fondant-ai/fondant-cc-25m/blob/main/extract_images.py) that enables this. To run the script, simply execute the following:

```bash
python extract_images.py --parquet_file <Path to the Parquet file or folder containing the images> --save_folder <The folder where to save the images to>
```

### How to contribute

If you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.

Creating custom pipelines for specific purposes requires different building blocks. Fondant pipelines can mix reusable components and custom components.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6266919100f1a3335dbd966f/a3IM5qWNUw0mv2r8t3_oN.png)

Components we are currently looking to add are the following ([GitHub issues](https://github.com/ml6team/fondant/issues?q=is%3Aissue+is%3Aopen+label%3A%22Component+Contribution%22)):

- 👯 Image-based deduplication
- 🖥️✎ Automatic captioning
- 🎨 Visual quality / aesthetic quality estimation
- 🔏 Watermark detection
- 🔞 Not safe for work (NSFW) content detection
- 📇 CLIP embedding generation
- 😐 Face detection
- 🙋🏻‍♂️ Personal Identifiable Information (PII) detection
- 📝 Text detection
- 🤖 AI-generated image detection
- 👬 Image-text CLIP similarity
- 👨‍🎨 Any components that you propose to develop

We are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements.

## Dataset Structure

### Data Instances

Each data instance corresponds to one image. The URL of the image is in the `image_url` feature, and other features (`alt_text`, `webpage_url`, etc.) provide some metadata. Note that images have been deduplicated only based on their URLs.

### Data Fields

- `image_url` (string): image URL to download the image
- `alt_text` (string): alternative text of the image
- `webpage_url` (string): webpage source of the image
- `license_type` (string): Creative Commons license type of the image
- `license_location` (string): location of the license on the webpage
- `surt_url` (string): sort-friendly image URL with the top-level domain as the prefix

### Data Splits

We do not provide any canonical splits for fondant-cc-25m.

## Dataset Creation

### Curation Rationale

Current AI image generation models such as Stable Diffusion and DALL-E are trained on hundreds of millions of images from the public Internet, including copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who may not want their proprietary work reproduced without consent. By releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright. This dataset is the first step towards our goal of a 500M Creative Commons image dataset.
### Source Data

fondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.

### Data Collection and Preprocessing

Permissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed. The full list of licenses can be found [here](https://creativecommons.org/about/cclicenses/). We examined the HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in its footer, aside, or sidebar. This was the case in only around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models trained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.

Subsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random sample of 1032 images showed that 96.32% were attributed the correct license while 3.68% were not. False positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not. More information on our approach can be found in [this blogpost](https://blog.ml6.eu/ai-image-generation-without-copyright-infringement-a9901b64541c).

### Privacy statement

It is possible that the dataset contains personal data, in the sense that we link to images with information that relates to an identified or identifiable living individual. We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals.

**The data controller**

The data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium, 9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. Our Data Protection Officer can be contacted via [[email protected]](mailto:[email protected]).

**We process the personal data lawfully**

We base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of establishing an open source framework for data preparation and fine-tuning of foundation models. Please note that we never store the personal data as such and that we never use the dataset for any other purpose.

**Execution of the rights of data subjects**

Individuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset. You can exercise these rights by reaching out to [[email protected]](mailto:[email protected]). Please be aware that some rights may not be absolute and that we may decline a request if we have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws. If you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority.

The PII filtering pipeline for this dataset is still a work in progress. Researchers who wish to contribute to the anonymization pipeline of the project can join [here](https://github.com/ml6team/fondant/tree/main#-contributing).

### Opting out

Fondant-cc-25m is based on CommonCrawl.
Their crawler honors opt-out requests in robots.txt; see the [CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details.

We are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools. If you'd like to have your data removed from the dataset, [contact us](mailto:[email protected]).

## Considerations for Using the Data

### Disclaimer

Fondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of Creative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness of the content, (ii) an alleged infringement of intellectual property rights, or (iii) any other alleged claim, action, injunction or suit resulting from the publication or use of the dataset.

### Discussion of Biases

As toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.

## Additional Information

### Dataset Curators

1. Sharon Grundmann, ML6, [email protected]
2. Matthias Richter, ML6, [email protected]
3. Robbe Sneyders, ML6, [email protected]

### Licensing Information

Fondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of Creative Commons license types included in the dataset can be found [here](https://creativecommons.org/about/cclicenses/).

### Contact

- Email: [[email protected]](mailto:[email protected])
- Discord: [https://discord.gg/HnTdWhydGp](https://discord.gg/HnTdWhydGp)
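The card documents download via the Fondant pipeline; if the hosted parquet files are also loadable with the `datasets` library (an assumption, not stated above), the documented fields can be inspected directly in streaming mode, so the 25M rows never have to fit in memory:

```python
from datasets import load_dataset

# stream the metadata rows instead of downloading everything up front
ds = load_dataset("fondant-ai/fondant-cc-25m", split="train", streaming=True)

for i, row in enumerate(ds):
    # fields documented above: image_url, alt_text, webpage_url,
    # license_type, license_location, surt_url
    print(row["image_url"], row["license_type"])
    if i >= 4:
        break
```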
fondant-ai/fondant-cc-25m
[ "task_categories:text-to-image", "size_categories:10M<n<100M", "license:cc", "art", "region:us" ]
2023-09-15T17:56:54+00:00
{"license": "cc", "size_categories": ["10M<n<100M"], "task_categories": ["text-to-image"], "tags": ["art"]}
2023-11-21T10:54:10+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-10M<n<100M #license-cc #art #region-us
Dataset Card for Fondant Creative Commons 25 million (fondant-cc-25m)
=====================================================================

!image/png

Table of Contents
-----------------

* Table of Contents
* Dataset Description
	+ Changelog
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ How to use it
	+ How to contribute
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Data Collection and Preprocessing
	+ Privacy statement
	+ Opting out
* Considerations for Using the Data
	+ Disclaimer
	+ Discussion of Biases
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Contact

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper: N/A
* Leaderboard: N/A
* Point of Contact: info@URL

### Changelog

### Dataset Summary

Fondant-cc-25m contains 25 million image URLs with their respective Creative Commons license information collected from the Common Crawl web corpus. The dataset was created using Fondant, an open source framework that aims to simplify and speed up large-scale data processing by making self-contained pipeline components reusable across pipelines and infrastructures, and shareable within the community.

### Supported Tasks and Leaderboards

This dataset can be used for training or fine-tuning image generation or computer vision models.

### How to use it

To execute the pipeline locally, you must have docker compose, Python >= 3.8, and Git installed on your system. To ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.

Note: For Apple M1/M2 chip users:

* Make sure that Docker uses the linux/amd64 platform and not arm64. In the Docker Dashboard, go to Settings > Features in development and make sure to uncheck 'Use containerd for pulling and storing images'.
* For improved execution speed, check the box that says 'Use Rosetta for x86/amd64 emulation on Apple Silicon'.

We have prepared a sample Fondant pipeline for downloading the dataset.

1. Install Fondant by running:
2. Clone the sample GitHub repository
3. Make sure that Docker is running, navigate to the 'src' folder, and initiate the pipeline by executing:

Note: For local testing purposes, the pipeline will only download the first 100 images. If you want to download the full dataset, you will need to modify the component arguments in the 'URL' file, specifically the following part:

4. To visually inspect the results quickly, you can use:
5. You can also choose to download images to your local machine if you prefer; we have provided an example script that enables this.

To run the script, simply execute the following:

### How to contribute

If you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.

Creating custom pipelines for specific purposes requires different building blocks. Fondant pipelines can mix reusable components and custom components.
!image/png

Components we are currently looking to add are the following (GitHub issues):

* Image-based deduplication
* Automatic captioning
* Visual quality / aesthetic quality estimation
* Watermark detection
* Not safe for work (NSFW) content detection
* CLIP embedding generation
* Face detection
* Personal Identifiable Information (PII) detection
* Text detection
* AI-generated image detection
* Image-text CLIP similarity
* Any components that you propose to develop

We are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements.

Dataset Structure
-----------------

### Data Instances

Each data instance corresponds to one image. The URL of the image is in the 'image\_url' feature, and other features ('alt\_text', 'webpage\_url', etc.) provide some metadata. Note that images have been deduplicated only based on their URLs.

### Data Fields

* 'image\_url' (string): image URL to download the image
* 'alt\_text' (string): alternative text of the image
* 'webpage\_url' (string): webpage source of the image
* 'license\_type' (string): Creative Commons license type of the image
* 'license\_location' (string): location of the license on the webpage
* 'surt\_url' (string): sort-friendly image URL with the top-level domain as the prefix

### Data Splits

We do not provide any canonical splits for fondant-cc-25m.

Dataset Creation
----------------

### Curation Rationale

Current AI image generation models such as Stable Diffusion and DALL-E are trained on hundreds of millions of images from the public Internet, including copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who may not want their proprietary work reproduced without consent. By releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright. This dataset is the first step towards our goal of a 500M Creative Commons image dataset.

### Source Data

fondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.

### Data Collection and Preprocessing

Permissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed. The full list of licenses can be found here. We examined the HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in its footer, aside, or sidebar. This was the case in only around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models trained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.

Subsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random sample of 1032 images showed that 96.32% were attributed the correct license while 3.68% were not. False positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not. More information on our approach can be found in this blogpost.

### Privacy statement

It is possible that the dataset contains personal data, in the sense that we link to images with information that relates to an identified or identifiable living individual.
We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals.

The data controller

The data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium, 9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. Our Data Protection Officer can be contacted via privacy@URL.

We process the personal data lawfully

We base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of establishing an open source framework for data preparation and fine-tuning of foundation models. Please note that we never store the personal data as such and that we never use the dataset for any other purpose.

Execution of the rights of data subjects

Individuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset. You can exercise these rights by reaching out to privacy@URL. Please be aware that some rights may not be absolute and that we may decline a request if we have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws. If you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority.

The PII filtering pipeline for this dataset is still a work in progress. Researchers who wish to contribute to the anonymization pipeline of the project can join here.

### Opting out

Fondant-cc-25m is based on CommonCrawl. Their crawler honors opt-out requests in the URL; see the CC FAQ for details.

We are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools. If you'd like to have your data removed from the dataset, contact us.

Considerations for Using the Data
---------------------------------

### Disclaimer

Fondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of Creative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness of the content, (ii) an alleged infringement of intellectual property rights, or (iii) any other alleged claim, action, injunction or suit resulting from the publication or use of the dataset.

### Discussion of Biases

As toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.

Additional Information
----------------------

### Dataset Curators

1. Sharon Grundmann, ML6, sharon.grundmann@URL
2. Matthias Richter, ML6, matthias.richter@URL
3. Robbe Sneyders, ML6, robbe.sneyders@URL

### Licensing Information

Fondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of Creative Commons license types included in the dataset can be found here.

### Contact

* Email: info@URL
* Discord: URL
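For quick exploration outside of a Fondant pipeline, the metadata fields documented above can also be streamed directly. A minimal sketch follows, assuming the dataset's parquet files are loadable from the Hub under an id such as 'fondant-ai/fondant-cc-25m' (the repo id and the output filename are assumptions):

```python
# Hypothetical sketch: stream the fondant-cc-25m metadata and fetch one image.
# The repo id "fondant-ai/fondant-cc-25m" is an assumption; adjust it to the
# actual Hub location before running.
import requests
from datasets import load_dataset

ds = load_dataset("fondant-ai/fondant-cc-25m", split="train", streaming=True)

record = next(iter(ds))
# Each record carries the fields documented above.
print(record["image_url"], record["license_type"], record["license_location"])

# Download the image itself; any reuse must respect the attached license.
resp = requests.get(record["image_url"], timeout=10)
if resp.ok:
    with open("sample_image.jpg", "wb") as f:  # the file extension is a guess
        f.write(resp.content)
```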
[ "### Changelog", "### Dataset Summary\n\n\nFondant-cc-25m contains 25 million image URLs with their respective Creative Commons\nlicense information collected from the Common Crawl web corpus.\nThe dataset was created using Fondant, an open source framework that aims to simplify and speed up\nlarge-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community.", "### Supported Tasks and Leaderboards\n\n\nThis dataset can be used for training or fine-tuning image generation or computer vision models.", "### How to use it\n\n\nTo execute the pipeline locally, you must have docker compose,\nPython >=3.8 and Git installed on your system.\nTo ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.\n\n\nNote: For Apple M1/M2 ship users:\n\n\n* Make sure that Docker uses linux/amd64 platform and not arm64. In Docker Dashboard go to Settings>Features in development, make sure to uncheck 'Use containerid for pulling and storing images'.\n* For improved execution speed, check the box that says 'Use Rosetta for x86/amd64 emulation on Apple Silicon'.\n\n\nWe have prepared a sample Fondant pipeline for downloading the dataset.\n\n\n1. Install Fondant by running:\n2. Clone the sample GitHub repository\n3. Make sure that Docker is running, navigate to the 'src' folder, and initiate the pipeline by executing:\n\n\nNote: For local testing purposes, the pipeline will only download the first 100 images.\nIf you want to download the full dataset, you will need to modify the component arguments in the 'URL' file,\nspecifically the following part:\n\n\n4. To visually inspect the results quickly, you can use:\n5. You can also choose to download images to your local machine if you prefer, we have provided an example script\nthat enabled this:\n\n\nTo run the script, you can simply execute the following:", "### How to contribute\n\n\nIf you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.\n\n\nCreating custom pipelines for specific purposes requires different building blocks.\nFondant pipelines can mix reusable components and custom components.\n\n\n!image/png\n\n\nComponents we are currently looking to add are the following (GitHub issues):\n\n\n* Image-based deduplication\n* ️ Automatic captioning\n* Visual quality / aesthetic quality estimation\n* Watermark detection\n* Not safe for work (NSFW) content detection\n* CLIP embedding generation\n* Face detection\n* ‍️ Personal Identifiable Information (PII) detection\n* Text detection\n* AI generated image detection\n* Image-text CLIP similarity\n* ‍ Any components that you propose to develop\n\n\nWe are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach data instance corresponds to one image. The URL of the image is in the 'image\\_url' feature, and other features ('alt\\_text', 'webpage\\_url', etc) provide some\nmetadata. 
Note that images have been deduplicated only based on their URLs.", "### Data Fields\n\n\n* 'image\\_url' (string): image url to download the image\n* 'alt\\_text' (string): alternative text of the image\n* 'webpage\\_url' (string): webpage source of the image\n* 'license\\_type' (string): creative commons license type of the image\n* 'license\\_location' (string): location of the license on the webpage\n* 'surt\\_url' (string): sort friendly image url with top level domain as the prefix", "### Data Splits\n\n\nWe do not provide any canonical splits for fondant-cc-25m.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCurrent AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet\nincluding copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who\nmay not want their proprietary work reproduced without consent.\nBy releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright.\nThis dataset is the first step towards our goal of a 500M Creative Commons image dataset.", "### Source Data\n\n\nfondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.", "### Data Collection and Preprocessing\n\n\nPermissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed.\nThe full list of licenses can be found here.\nWe examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in\nits footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models\ntrained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.\n\n\nSubsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random\nsample of 1032 images showed that 96.32% were attributed the correct license whil 3.68% were not.\nFalse positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not.\nMore information on our approach can be found in this blogpost.", "### Privacy statement\n\n\nIt is possible that the dataset contains personal data, in that sense that we link to images with information that relates to an identified or identifiable living individual. We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals.\n\n\nThe data controller\nThe data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium,\n9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. 
Our Data Protection Officer can be contacted via privacy@URL.\n\n\nWe process the personal data lawfully\nWe base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of\nestablishing an open source framework for data preparation and fine-tuning of foundation models. Please note that we never store the personal data as such and that we\nnever use the dataset for any other purpose.\n\n\nExecution of the rights of data subjects.\nIndividuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset.\nYou can exercise these rights by reaching out to privacy@URL. Please be aware that some rights may not be absolute and that we may decline a request if\nwe have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws.\nIf you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority.\n\n\nThe PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join\nhere.", "### Opting out\n\n\nFondant-cc-25m is based on CommonCrawl. Their crawler honors opt-out requests in the URL, see the\nCC FAQ for details.\n\n\nWe are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep\nevolving throughout the project as we receive feedback and build up more data governance tools.\nIf you'd like to have your data removed from the dataset, contact us.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Disclaimer\n\n\nFondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of\nCreative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness\nof the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit\nresulting from the publication or use of the dataset.", "### Discussion of Biases\n\n\nAs toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n1. Sharon Grundmann, ML6, sharon.grundmann@URL\n2. Matthias Richter, ML6, matthias.richter@URL\n3. Robbe Sneyders, ML6, robbe.sneyders@URL", "### Licensing Information\n\n\nFondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m\nmust abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n\n\nThe list of Creative Commons license types included in the dataset can be found here.", "### Contact\n\n\n* Email: info@URL\n* Discord: URL" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-10M<n<100M #license-cc #art #region-us \n", "### Changelog", "### Dataset Summary\n\n\nFondant-cc-25m contains 25 million image URLs with their respective Creative Commons\nlicense information collected from the Common Crawl web corpus.\nThe dataset was created using Fondant, an open source framework that aims to simplify and speed up\nlarge-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community.", "### Supported Tasks and Leaderboards\n\n\nThis dataset can be used for training or fine-tuning image generation or computer vision models.", "### How to use it\n\n\nTo execute the pipeline locally, you must have docker compose,\nPython >=3.8 and Git installed on your system.\nTo ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.\n\n\nNote: For Apple M1/M2 ship users:\n\n\n* Make sure that Docker uses linux/amd64 platform and not arm64. In Docker Dashboard go to Settings>Features in development, make sure to uncheck 'Use containerid for pulling and storing images'.\n* For improved execution speed, check the box that says 'Use Rosetta for x86/amd64 emulation on Apple Silicon'.\n\n\nWe have prepared a sample Fondant pipeline for downloading the dataset.\n\n\n1. Install Fondant by running:\n2. Clone the sample GitHub repository\n3. Make sure that Docker is running, navigate to the 'src' folder, and initiate the pipeline by executing:\n\n\nNote: For local testing purposes, the pipeline will only download the first 100 images.\nIf you want to download the full dataset, you will need to modify the component arguments in the 'URL' file,\nspecifically the following part:\n\n\n4. To visually inspect the results quickly, you can use:\n5. You can also choose to download images to your local machine if you prefer, we have provided an example script\nthat enabled this:\n\n\nTo run the script, you can simply execute the following:", "### How to contribute\n\n\nIf you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.\n\n\nCreating custom pipelines for specific purposes requires different building blocks.\nFondant pipelines can mix reusable components and custom components.\n\n\n!image/png\n\n\nComponents we are currently looking to add are the following (GitHub issues):\n\n\n* Image-based deduplication\n* ️ Automatic captioning\n* Visual quality / aesthetic quality estimation\n* Watermark detection\n* Not safe for work (NSFW) content detection\n* CLIP embedding generation\n* Face detection\n* ‍️ Personal Identifiable Information (PII) detection\n* Text detection\n* AI generated image detection\n* Image-text CLIP similarity\n* ‍ Any components that you propose to develop\n\n\nWe are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach data instance corresponds to one image. The URL of the image is in the 'image\\_url' feature, and other features ('alt\\_text', 'webpage\\_url', etc) provide some\nmetadata. 
Note that images have been deduplicated only based on their URLs.", "### Data Fields\n\n\n* 'image\\_url' (string): image url to download the image\n* 'alt\\_text' (string): alternative text of the image\n* 'webpage\\_url' (string): webpage source of the image\n* 'license\\_type' (string): creative commons license type of the image\n* 'license\\_location' (string): location of the license on the webpage\n* 'surt\\_url' (string): sort friendly image url with top level domain as the prefix", "### Data Splits\n\n\nWe do not provide any canonical splits for fondant-cc-25m.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCurrent AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet\nincluding copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who\nmay not want their proprietary work reproduced without consent.\nBy releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright.\nThis dataset is the first step towards our goal of a 500M Creative Commons image dataset.", "### Source Data\n\n\nfondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.", "### Data Collection and Preprocessing\n\n\nPermissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed.\nThe full list of licenses can be found here.\nWe examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in\nits footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models\ntrained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.\n\n\nSubsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random\nsample of 1032 images showed that 96.32% were attributed the correct license whil 3.68% were not.\nFalse positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not.\nMore information on our approach can be found in this blogpost.", "### Privacy statement\n\n\nIt is possible that the dataset contains personal data, in that sense that we link to images with information that relates to an identified or identifiable living individual. We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals.\n\n\nThe data controller\nThe data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium,\n9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. 
Our Data Protection Officer can be contacted via privacy@URL.\n\n\nWe process the personal data lawfully\nWe base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of\nestablishing an open source framework for data preparation and fine-tuning of foundation models. Please note that we never store the personal data as such and that we\nnever use the dataset for any other purpose.\n\n\nExecution of the rights of data subjects.\nIndividuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset.\nYou can exercise these rights by reaching out to privacy@URL. Please be aware that some rights may not be absolute and that we may decline a request if\nwe have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws.\nIf you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority.\n\n\nThe PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join\nhere.", "### Opting out\n\n\nFondant-cc-25m is based on CommonCrawl. Their crawler honors opt-out requests in the URL, see the\nCC FAQ for details.\n\n\nWe are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep\nevolving throughout the project as we receive feedback and build up more data governance tools.\nIf you'd like to have your data removed from the dataset, contact us.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Disclaimer\n\n\nFondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of\nCreative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness\nof the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit\nresulting from the publication or use of the dataset.", "### Discussion of Biases\n\n\nAs toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n1. Sharon Grundmann, ML6, sharon.grundmann@URL\n2. Matthias Richter, ML6, matthias.richter@URL\n3. Robbe Sneyders, ML6, robbe.sneyders@URL", "### Licensing Information\n\n\nFondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m\nmust abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.\n\n\nThe list of Creative Commons license types included in the dataset can be found here.", "### Contact\n\n\n* Email: info@URL\n* Discord: URL" ]
[ 37, 4, 91, 30, 322, 210, 73, 115, 28, 133, 35, 242, 431, 122, 105, 41, 57, 97, 14 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-10M<n<100M #license-cc #art #region-us \n### Changelog### Dataset Summary\n\n\nFondant-cc-25m contains 25 million image URLs with their respective Creative Commons\nlicense information collected from the Common Crawl web corpus.\nThe dataset was created using Fondant, an open source framework that aims to simplify and speed up\nlarge-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community.### Supported Tasks and Leaderboards\n\n\nThis dataset can be used for training or fine-tuning image generation or computer vision models.### How to use it\n\n\nTo execute the pipeline locally, you must have docker compose,\nPython >=3.8 and Git installed on your system.\nTo ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment.\n\n\nNote: For Apple M1/M2 ship users:\n\n\n* Make sure that Docker uses linux/amd64 platform and not arm64. In Docker Dashboard go to Settings>Features in development, make sure to uncheck 'Use containerid for pulling and storing images'.\n* For improved execution speed, check the box that says 'Use Rosetta for x86/amd64 emulation on Apple Silicon'.\n\n\nWe have prepared a sample Fondant pipeline for downloading the dataset.\n\n\n1. Install Fondant by running:\n2. Clone the sample GitHub repository\n3. Make sure that Docker is running, navigate to the 'src' folder, and initiate the pipeline by executing:\n\n\nNote: For local testing purposes, the pipeline will only download the first 100 images.\nIf you want to download the full dataset, you will need to modify the component arguments in the 'URL' file,\nspecifically the following part:\n\n\n4. To visually inspect the results quickly, you can use:\n5. You can also choose to download images to your local machine if you prefer, we have provided an example script\nthat enabled this:\n\n\nTo run the script, you can simply execute the following:", "passage: ### How to contribute\n\n\nIf you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing.\n\n\nCreating custom pipelines for specific purposes requires different building blocks.\nFondant pipelines can mix reusable components and custom components.\n\n\n!image/png\n\n\nComponents we are currently looking to add are the following (GitHub issues):\n\n\n* Image-based deduplication\n* ️ Automatic captioning\n* Visual quality / aesthetic quality estimation\n* Watermark detection\n* Not safe for work (NSFW) content detection\n* CLIP embedding generation\n* Face detection\n* ‍️ Personal Identifiable Information (PII) detection\n* Text detection\n* AI generated image detection\n* Image-text CLIP similarity\n* ‍ Any components that you propose to develop\n\n\nWe are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach data instance corresponds to one image. The URL of the image is in the 'image\\_url' feature, and other features ('alt\\_text', 'webpage\\_url', etc) provide some\nmetadata. 
Note that images have been deduplicated only based on their URLs.### Data Fields\n\n\n* 'image\\_url' (string): image url to download the image\n* 'alt\\_text' (string): alternative text of the image\n* 'webpage\\_url' (string): webpage source of the image\n* 'license\\_type' (string): creative commons license type of the image\n* 'license\\_location' (string): location of the license on the webpage\n* 'surt\\_url' (string): sort friendly image url with top level domain as the prefix### Data Splits\n\n\nWe do not provide any canonical splits for fondant-cc-25m.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nCurrent AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet\nincluding copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who\nmay not want their proprietary work reproduced without consent.\nBy releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright.\nThis dataset is the first step towards our goal of a 500M Creative Commons image dataset.### Source Data\n\n\nfondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages.", "passage: ### Data Collection and Preprocessing\n\n\nPermissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed.\nThe full list of licenses can be found here.\nWe examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in\nits footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models\ntrained on a random sample from the public internet may be trained on up to 99.836% copyrighted images.\n\n\nSubsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random\nsample of 1032 images showed that 96.32% were attributed the correct license whil 3.68% were not.\nFalse positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not.\nMore information on our approach can be found in this blogpost.### Privacy statement\n\n\nIt is possible that the dataset contains personal data, in that sense that we link to images with information that relates to an identified or identifiable living individual. We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals.\n\n\nThe data controller\nThe data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium,\n9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. Our Data Protection Officer can be contacted via privacy@URL.\n\n\nWe process the personal data lawfully\nWe base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of\nestablishing an open source framework for data preparation and fine-tuning of foundation models. 
Please note that we never store the personal data as such and that we\nnever use the dataset for any other purpose.\n\n\nExecution of the rights of data subjects.\nIndividuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset.\nYou can exercise these rights by reaching out to privacy@URL. Please be aware that some rights may not be absolute and that we may decline a request if\nwe have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws.\nIf you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority.\n\n\nThe PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join\nhere." ]
ffb6d8f1e72f68151be94adf0200daf194952adc
# Dataset of munakata_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls) This is the dataset of munakata_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls), containing 132 images and their tags. The core tags of this character are `brown_hair, hair_bun, double_bun, purple_eyes, short_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 132 | 112.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 132 | 79.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 258 | 152.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 132 | 107.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 258 | 199.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/munakata_atsumi_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, open_mouth, smile, solo, drooling, +_+, long_hair | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, dress, open_mouth, angel_wings, blush, smile, drooling, hairband, halo, choker, heart-shaped_pupils, white_gloves, looking_at_viewer | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, looking_at_viewer, short_sleeves, smile, solo, blush, bracelet, hair_bow, heart, open_mouth, skirt, striped_thighhighs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | smile | solo | drooling | +_+ | long_hair | dress | angel_wings | hairband | halo | choker | heart-shaped_pupils | white_gloves | looking_at_viewer | short_sleeves | bracelet | hair_bow | heart | skirt | striped_thighhighs | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:--------|:-------|:-----------|:------|:------------|:--------|:--------------|:-----------|:-------|:---------|:----------------------|:---------------|:--------------------|:----------------|:-----------|:-----------|:--------|:--------|:---------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X |
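Besides the waifuc loader shown above, the IMG+TXT packages can be consumed directly. A minimal sketch, assuming each image in 'dataset-800.zip' sits next to a same-stem '.txt' tag file (this flat layout is an assumption, not documented here):

```python
# Hypothetical sketch: download the 800px IMG+TXT package and pair each image
# with its tag file. The "image + same-stem .txt" layout is an assumption.
import os
import zipfile
from huggingface_hub import hf_hub_download

zip_file = hf_hub_download(
    repo_id='CyberHarem/munakata_atsumi_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)

dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
        continue
    txt_path = os.path.join(dataset_dir, stem + '.txt')
    if os.path.exists(txt_path):
        with open(txt_path, 'r', encoding='utf-8') as f:
            tags = f.read().strip()
        print(name, '->', tags)
```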
CyberHarem/munakata_atsumi_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:01:29+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T18:40:36+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of munakata\_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls) =================================================================== This is the dataset of munakata\_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls), containing 132 images and their tags. The core tags of this character are 'brown\_hair, hair\_bun, double\_bun, purple\_eyes, short\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
5911a031c7d1c5589dcf05387aec9a60496ccd6a
# Dataset Card for "logits-mt-it-ar-512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
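The card is still a stub, but the features declared in the dataset metadata (input_ids, teacher_logits, teacher_indices, teacher_mask_indices) suggest a knowledge-distillation layout with teacher scores stored per position. A hedged sketch for inspecting one example; the masked-LM interpretation of the field names is an assumption, and streaming avoids the roughly 23 GB download:

```python
# Sketch: stream one example and inspect the distillation fields declared in
# the dataset metadata. Field semantics in the comments are assumptions.
from datasets import load_dataset

ds = load_dataset("amitness/logits-mt-it-ar-512", split="test", streaming=True)
example = next(iter(ds))

print(len(example["input_ids"]))            # token ids (likely up to 512, per the name)
print(example["teacher_mask_indices"][:5])  # positions the teacher scored
print(len(example["teacher_logits"][0]))    # top-k logits kept at the first such position
print(example["teacher_indices"][0][:5])    # vocab ids matching those logits
```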
amitness/logits-mt-it-ar-512
[ "region:us" ]
2023-09-15T18:19:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 52595507115.093796, "num_examples": 2892839}, {"name": "test", "num_bytes": 9281560079.1342, "num_examples": 510501}], "download_size": 23212836951, "dataset_size": 61877067194.228}}
2023-09-15T22:08:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "logits-mt-it-ar-512" More Information needed
[ "# Dataset Card for \"logits-mt-it-ar-512\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"logits-mt-it-ar-512\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"logits-mt-it-ar-512\"\n\nMore Information needed" ]
50f29252eebf9c2cb2429986ec48557cb9c38b96
# Dataset of muramatsu_sakura/村松さくら/무라마츠사쿠라 (THE iDOLM@STER: Cinderella Girls) This is the dataset of muramatsu_sakura/村松さくら/무라마츠사쿠라 (THE iDOLM@STER: Cinderella Girls), containing 75 images and their tags. The core tags of this character are `brown_hair, twintails, hairband, short_twintails, bow, pink_eyes, short_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 75 | 55.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/muramatsu_sakura_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 75 | 41.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/muramatsu_sakura_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 155 | 80.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/muramatsu_sakura_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 75 | 53.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/muramatsu_sakura_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 155 | 103.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/muramatsu_sakura_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/muramatsu_sakura_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, open_mouth, blush, looking_at_viewer, necklace, one_eye_closed, skirt, hair_ornament | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | blush, white_shirt, 1girl, open_mouth, short_sleeves, :d, collared_shirt, low_twintails, school_uniform, simple_background, upper_body, white_background, bangs, bow_hairband, looking_at_viewer, red_bowtie, solo_focus, striped_bowtie, sweater_vest | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | open_mouth | blush | looking_at_viewer | necklace | one_eye_closed | skirt | hair_ornament | white_shirt | short_sleeves | :d | collared_shirt | low_twintails | school_uniform | simple_background | upper_body | white_background | bangs | bow_hairband | red_bowtie | solo_focus | striped_bowtie | sweater_vest | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------|:--------|:--------------------|:-----------|:-----------------|:--------|:----------------|:--------------|:----------------|:-----|:-----------------|:----------------|:-----------------|:--------------------|:-------------|:-------------------|:--------|:---------------|:-------------|:-------------|:-----------------|:---------------| | 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | X | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
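The cluster tables above are precomputed, but similar tag statistics can be recomputed locally from any IMG+TXT package. A minimal sketch, assuming the '.txt' files hold comma-separated booru-style tags (the tag-file format is an assumption):

```python
# Hypothetical sketch: approximate the tag clustering above by counting tag
# frequencies over the extracted IMG+TXT package. Directory layout is assumed.
import os
from collections import Counter

dataset_dir = 'dataset_dir'  # directory holding an extracted dataset-800.zip

counter = Counter()
for name in os.listdir(dataset_dir):
    if not name.endswith('.txt'):
        continue
    with open(os.path.join(dataset_dir, name), encoding='utf-8') as f:
        # Assumed format: one line of comma-separated tags per image.
        counter.update(tag.strip() for tag in f.read().split(','))

for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```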
CyberHarem/muramatsu_sakura_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:24:25+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:27:52+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of muramatsu\_sakura/村松さくら/무라마츠사쿠라 (THE iDOLM@STER: Cinderella Girls) ============================================================================= This is the dataset of muramatsu\_sakura/村松さくら/무라마츠사쿠라 (THE iDOLM@STER: Cinderella Girls), containing 75 images and their tags. The core tags of this character are 'brown\_hair, twintails, hairband, short\_twintails, bow, pink\_eyes, short\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
909153fc7802d16e8d074f8bc801890eb843d8d1
# Dataset of mary_cochran/メアリー・コクラン (THE iDOLM@STER: Cinderella Girls) This is the dataset of mary_cochran/メアリー・コクラン (THE iDOLM@STER: Cinderella Girls), containing 84 images and their tags. The core tags of this character are `blonde_hair, long_hair, twintails, bow, bangs, hair_bow, green_eyes, aqua_eyes, blunt_bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 84 | 60.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mary_cochran_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 84 | 49.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mary_cochran_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 152 | 86.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mary_cochran_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 84 | 58.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mary_cochran_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 152 | 100.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mary_cochran_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mary_cochran_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, character_name, star_(symbol), sun_symbol, card_parody, hair_bobbles, innertube, one-piece_swimsuit, open_mouth, school_swimsuit, smile | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, dress, looking_at_viewer, open_mouth, short_sleeves, skirt, blush, earrings, one_eye_closed, bracelet, striped, thighhighs | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, looking_at_viewer, solo, open_mouth, :d, close-up | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, navel, solo, micro_bikini, smile, blue_eyes, blush, flat_chest, heart, looking_at_viewer, polka_dot_bow, side-tie_bikini_bottom, simple_background, white_background, american_flag_bikini, cowboy_shot, hand_on_hip, hand_up, one_eye_closed, small_breasts, thigh_strap, white_bow | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, elbow_gloves, looking_at_viewer, midriff, black_gloves, flag, navel, solo, wrist_cuffs, band_uniform, plaid_skirt, union_jack, aiguillette, blue_eyes, boots, crop_top, epaulettes, shako_cap, smile, white_thighhighs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | character_name | star_(symbol) | sun_symbol | card_parody | hair_bobbles | innertube | one-piece_swimsuit | open_mouth | school_swimsuit | smile | dress | looking_at_viewer | short_sleeves | skirt | blush | earrings | one_eye_closed | bracelet | striped | thighhighs | :d | close-up | navel | micro_bikini | blue_eyes | flat_chest | heart | polka_dot_bow | side-tie_bikini_bottom | simple_background | white_background | american_flag_bikini | cowboy_shot | hand_on_hip | hand_up | small_breasts | thigh_strap | white_bow | elbow_gloves | midriff | black_gloves | flag | wrist_cuffs | band_uniform | plaid_skirt | union_jack | aiguillette | boots | crop_top | epaulettes | shako_cap | white_thighhighs | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:----------------|:-------------|:--------------|:---------------|:------------|:---------------------|:-------------|:------------------|:--------|:--------|:--------------------|:----------------|:--------|:--------|:-----------|:-----------------|:-----------|:----------|:-------------|:-----|:-----------|:--------|:---------------|:------------|:-------------|:--------|:----------------|:-------------------------|:--------------------|:-------------------|:-----------------------|:--------------|:--------------|:----------|:----------------|:--------------|:------------|:---------------|:----------|:---------------|:-------|:--------------|:---------------|:--------------|:-------------|:--------------|:--------|:-----------|:-------------|:------------|:-------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | | | X | | | | X | | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | | | | X | | X | | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | | | | | | | | X | | X | | | | | | | | | | | X | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/mary_cochran_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:27:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:48:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mary\_cochran/メアリー・コクラン (THE iDOLM@STER: Cinderella Girls) ===================================================================== This is the dataset of mary\_cochran/メアリー・コクラン (THE iDOLM@STER: Cinderella Girls), containing 84 images and their tags. The core tags of this character are 'blonde\_hair, long\_hair, twintails, bow, bangs, hair\_bow, green\_eyes, aqua\_eyes, blunt\_bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
c8eb8b90dd1d345c3e584bc1dc3e73bd5775841f
# Dataset of komuro_chinami/小室千奈美 (THE iDOLM@STER: Cinderella Girls) This is the dataset of komuro_chinami/小室千奈美 (THE iDOLM@STER: Cinderella Girls), containing 22 images and their tags. The core tags of this character are `long_hair, brown_hair, brown_eyes, breasts, medium_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 22 | 13.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komuro_chinami_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 22 | 13.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komuro_chinami_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 36 | 19.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komuro_chinami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 22 | 13.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komuro_chinami_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 36 | 19.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komuro_chinami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/komuro_chinami_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------| | 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, cleavage, jewelry | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | cleavage | jewelry | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-----------|:----------| | 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X |
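The packaged archives listed in the List of Packages table can be fetched the same way as the raw one. A minimal sketch for grabbing and extracting the 800-pixel IMG+TXT package (the filename comes from the table above; the local directory name is an arbitrary choice):

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download the pre-packaged 800px IMG+TXT archive from the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/komuro_chinami_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the image/caption pairs to a local directory (name is arbitrary)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```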
CyberHarem/komuro_chinami_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:31:07+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:15:40+00:00
bb7b0ff1ce390652691b1124d9fd702ac0c9ba5b
# Dataset of manaka_misato/間中美里 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of manaka_misato/間中美里 (THE iDOLM@STER: Cinderella Girls), containing 15 images and their tags.

The core tags of this character are `brown_hair, short_hair, blue_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 15 | 9.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manaka_misato_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 15 | 9.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manaka_misato_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 29 | 15.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manaka_misato_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 15 | 9.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manaka_misato_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 29 | 15.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manaka_misato_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/manaka_misato_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, necklace, card_(medium), character_name, flower_(symbol) | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | necklace | card_(medium) | character_name | flower_(symbol) | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-----------|:----------------|:-----------------|:------------------| | 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X |
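The IMG+TXT packages are plain archives of images with sidecar caption files. A minimal sketch for iterating over an extracted package, assuming the usual IMG+TXT convention of a same-named `.txt` file of comma-separated tags next to each image (verify the extension and layout against the extracted archive):

```python
import os
from glob import glob
from PIL import Image

# directory extracted from one of the IMG+TXT zips above (hypothetical name)
dataset_dir = 'dataset_800'

# assumption: every image has a sidecar .txt caption of comma-separated tags;
# adjust the glob pattern if the archive uses .jpg or .webp instead of .png
for image_path in sorted(glob(os.path.join(dataset_dir, '*.png'))):
    caption_path = os.path.splitext(image_path)[0] + '.txt'
    with Image.open(image_path) as im:
        width, height = im.size
    with open(caption_path, 'r', encoding='utf-8') as f:
        tags = [t.strip() for t in f.read().split(',')]
    print(image_path, (width, height), tags)
```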
CyberHarem/manaka_misato_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:36:49+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:15:37+00:00
9b07cd776ea458511d2bdd9a52b663684ddb556e
# Dataset Card for "naughty-chat" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
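Per the config metadata below (a single default config whose train split holds `instruction`, `output`, and `input` string features), a minimal loading sketch:

```python
from datasets import load_dataset

# single default config with one train split of 266 examples
ds = load_dataset('lonestar108/naughty-chat', split='train')
print(ds)                    # features: instruction, output, input
print(ds[0]['instruction'])  # inspect the first record
```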
lonestar108/naughty-chat
[ "region:us" ]
2023-09-15T18:41:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80492, "num_examples": 266}], "download_size": 21186, "dataset_size": 80492}}
2023-09-15T20:49:48+00:00
ca6c4ea09915730fd69b017e102445ef1ccf53d9
# Dataset of manabe_itsuki/真鍋いつき (THE iDOLM@STER: Cinderella Girls)

This is the dataset of manabe_itsuki/真鍋いつき (THE iDOLM@STER: Cinderella Girls), containing 46 images and their tags.

The core tags of this character are `brown_hair, breasts, ponytail, brown_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 46 | 37.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manabe_itsuki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 46 | 26.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manabe_itsuki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 92 | 47.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manabe_itsuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 46 | 33.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manabe_itsuki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 92 | 61.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manabe_itsuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/manabe_itsuki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, open_mouth, sweat, solo, blush, armpits, bike_shorts, large_breasts, simple_background, :d, arms_up, sportswear | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, solo, looking_at_viewer, sweat, large_breasts, see-through, underwear, cleavage, navel, simple_background, smile, upper_body, wet_shirt, white_background, heart, long_hair, medium_breasts, open_mouth, red_eyes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | open_mouth | sweat | solo | blush | armpits | bike_shorts | large_breasts | simple_background | :d | arms_up | sportswear | see-through | underwear | cleavage | navel | smile | upper_body | wet_shirt | white_background | heart | long_hair | medium_breasts | red_eyes | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------|:--------|:-------|:--------|:----------|:--------------|:----------------|:--------------------|:-----|:----------|:-------------|:--------------|:------------|:-----------|:--------|:--------|:-------------|:------------|:-------------------|:--------|:------------|:-----------------|:-----------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/manabe_itsuki_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:55:39+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:37:45+00:00
713932fc0aa0b4e6d437d9d47d6d6d6d0b6737f5
# Dataset of matsunaga_ryou/松永涼 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of matsunaga_ryou/松永涼 (THE iDOLM@STER: Cinderella Girls), containing 139 images and their tags.

The core tags of this character are `long_hair, brown_eyes, brown_hair, earrings, breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 139 | 131.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsunaga_ryou_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 139 | 93.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsunaga_ryou_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 280 | 170.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsunaga_ryou_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 139 | 122.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsunaga_ryou_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 280 | 215.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsunaga_ryou_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/matsunaga_ryou_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, cleavage, large_breasts, looking_at_viewer, navel, smile, side-tie_bikini_bottom, simple_background, white_background, bracelet | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | necklace, 1girl, shorts, smile, bracelet, one_eye_closed, belt, multiple_girls, skirt, solo, torn_clothes | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | skirt, 1girl, belt, smile, solo, navel, open_mouth, thighhighs, midriff, jacket, bracelet, fingerless_gloves, nail_polish, necktie, cross, microphone_stand | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | large_breasts | looking_at_viewer | navel | smile | side-tie_bikini_bottom | simple_background | white_background | bracelet | necklace | shorts | one_eye_closed | belt | multiple_girls | skirt | torn_clothes | open_mouth | thighhighs | midriff | jacket | fingerless_gloves | nail_polish | necktie | cross | microphone_stand | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:----------------|:--------------------|:--------|:--------|:-------------------------|:--------------------|:-------------------|:-----------|:-----------|:---------|:-----------------|:-------|:-----------------|:--------|:---------------|:-------------|:-------------|:----------|:---------|:--------------------|:--------------|:----------|:--------|:-------------------| | 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | X | X | | | | X | | | | X | | X | | X | X | X | X | X | X | X | X | X |
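Beyond printing items, the loaded `LocalSource` can be used to recompute tag statistics like the cluster tables above. A minimal sketch, assuming `item.meta['tags']` iterates over tag names (e.g. a tag-to-score mapping, as the loading snippet's `print` suggests):

```python
from collections import Counter
from waifuc.source import LocalSource

counter = Counter()
for item in LocalSource('dataset_dir'):
    # list(...) yields the tag names whether meta['tags'] is a dict or a list
    counter.update(list(item.meta['tags']))

# the most frequent tags roughly mirror the cluster tables above
for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```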
CyberHarem/matsunaga_ryou_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T18:57:57+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T19:09:53+00:00
24e3438491e6dc2f5a318a06fb85b0c64afe254b
# Dataset Card for "TinyStoriesExclamationValidation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jason-lee08/TinyStoriesExclamationValidation
[ "region:us" ]
2023-09-15T19:05:12+00:00
{"dataset_info": {"features": [{"name": "validation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 322761, "num_examples": 405}], "download_size": 100666, "dataset_size": 322761}}
2023-09-15T19:25:32+00:00
0c1f09a9d76f8fabc3d0a5ce4555e05956849acc
# Dataset of umeki_otoha/梅木音葉 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of umeki_otoha/梅木音葉 (THE iDOLM@STER: Cinderella Girls), containing 78 images and their tags.

The core tags of this character are `blonde_hair, short_hair, breasts, green_eyes, blue_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 78 | 107.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/umeki_otoha_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 78 | 62.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/umeki_otoha_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 174 | 128.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/umeki_otoha_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 78 | 94.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/umeki_otoha_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 174 | 181.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/umeki_otoha_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/umeki_otoha_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, blush, smile, looking_at_viewer, hat, open_mouth, skirt, white_background, microphone | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | smile | looking_at_viewer | hat | open_mouth | skirt | white_background | microphone | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------|:--------------------|:------|:-------------|:--------|:-------------------|:-------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X |
CyberHarem/umeki_otoha_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T19:15:43+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:33:22+00:00
0f4669f29e8cb784a3da60005a8d82f12dad102f
# Dataset Card for Earnings Calls Dataset

## Dataset Description

- **Homepage:** https://dataverse.nl/dataset.xhtml?persistentId=doi:10.34894/TJE0D0
- **Paper:** https://www.preprints.org/manuscript/202102.0424/v1
- **Point of Contact:** [Francesco Lelli](https://francescolelli.info/)

### Dataset Summary

The dataset is a collection of earnings call transcripts, the related stock prices, and the sector index. In terms of volume, there are a total of 188 transcripts, 11970 stock prices, and 1196 sector index values. All of the data originate from the period 2016-2020 and relate to the NASDAQ stock market. The data collection was made possible by Yahoo Finance and Thomson Reuters Eikon: Yahoo Finance enabled the search for stock values, and Thomson Reuters Eikon provided the earnings call transcripts. The dataset can be used as a benchmark for evaluating several NLP techniques and understanding their potential for financial applications. It can also be expanded by extending the period from which the data originate, following a similar procedure.

### Citation Information

```bibtex
@data{TJE0D0_2021,
  author = {Roozen, Dexter and Lelli, Francesco},
  publisher = {DataverseNL},
  title = {{Stock Values and Earnings Call Transcripts: a Sentiment Analysis Dataset}},
  year = {2021},
  version = {V1},
  doi = {10.34894/TJE0D0},
  url = {https://doi.org/10.34894/TJE0D0}
}
```
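Per the config metadata below, the repository exposes three configs (`stock_prices`, `transcript-sentiment`, and `transcripts`). A minimal loading sketch:

```python
from datasets import load_dataset

# 'transcripts' pairs each company/date with a full earnings call transcript
transcripts = load_dataset('jlh-ibm/earnings_call', 'transcripts', split='train')
print(transcripts[0]['company'], transcripts[0]['date'])

# 'transcript-sentiment' carries paragraph-level sentiment labels
sentiment = load_dataset('jlh-ibm/earnings_call', 'transcript-sentiment', split='train')
print(sentiment.features['label'].names)  # ['negative', 'positive']
```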
jlh-ibm/earnings_call
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:cc0-1.0", "finance", "region:us" ]
2023-09-15T19:25:43+00:00
{"language": ["en"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Earnings Calls Dataset", "tags": ["finance"], "dataset_info": [{"config_name": "stock_prices", "features": [{"name": "date", "dtype": "date64"}, {"name": "open", "dtype": "float32"}, {"name": "high", "dtype": "float32"}, {"name": "low", "dtype": "float32"}, {"name": "close", "dtype": "float32"}, {"name": "adj_close", "dtype": "float32"}, {"name": "volume", "dtype": "int64"}, {"name": "company", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 578818, "num_examples": 13155}], "download_size": 290243, "dataset_size": 578818}, {"config_name": "transcript-sentiment", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "company", "dtype": "string"}, {"name": "date", "dtype": "date64"}, {"name": "para_no", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 7414686, "num_examples": 6851}, {"name": "test", "num_bytes": 1928515, "num_examples": 1693}], "download_size": 3868059, "dataset_size": 9343201}, {"config_name": "transcripts", "features": [{"name": "company", "dtype": "string"}, {"name": "date", "dtype": "date64"}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9592380, "num_examples": 150}, {"name": "test", "num_bytes": 2458569, "num_examples": 38}], "download_size": 3577816, "dataset_size": 12050949}]}
2023-09-15T20:34:39+00:00
ba59b6aeff2f2ea7d398e76f3bd541201968a379
# Dataset of furusawa_yoriko/古澤頼子 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of furusawa_yoriko/古澤頼子 (THE iDOLM@STER: Cinderella Girls), containing 34 images and their tags.

The core tags of this character are `brown_hair, long_hair, blue_eyes, glasses, hairband, mole, mole_under_eye, breasts, red-framed_eyewear`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 34 | 31.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furusawa_yoriko_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 34 | 20.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furusawa_yoriko_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 72 | 40.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furusawa_yoriko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 34 | 28.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furusawa_yoriko_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 72 | 52.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/furusawa_yoriko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/furusawa_yoriko_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, looking_at_viewer, white_gloves, thighhighs, blush, elbow_gloves, hair_flower, top_hat | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, white_background, blush, looking_at_viewer, simple_background, smile, skirt, long_sleeves, own_hands_together, sweater, upper_body, white_shirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | looking_at_viewer | white_gloves | thighhighs | blush | elbow_gloves | hair_flower | top_hat | white_background | simple_background | skirt | long_sleeves | own_hands_together | sweater | upper_body | white_shirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:---------------|:-------------|:--------|:---------------|:--------------|:----------|:-------------------|:--------------------|:--------|:---------------|:---------------------|:----------|:-------------|:--------------| | 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | X | | | | X | X | X | X | X | X | X | X |
CyberHarem/furusawa_yoriko_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T19:26:01+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:02:14+00:00
cdd0cd7c1ae7e0b386fd82ce1571b6e6d56e372e
# Dataset Card for "TinyStoriesExclamationValidation2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jason-lee08/TinyStoriesExclamationValidation2
[ "region:us" ]
2023-09-15T19:28:29+00:00
{"dataset_info": {"features": [{"name": "validation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 168184, "num_examples": 220}], "download_size": 89488, "dataset_size": 168184}}
2023-09-15T19:28:30+00:00
b02f471b7f87fd7f31de8858e258802feaf911ad
# Dataset of mizuno_midori/水野翠 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of mizuno_midori/水野翠 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags.

The core tags of this character are `long_hair, black_hair, ponytail, brown_eyes, breasts, bangs`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 44 | 38.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 44 | 27.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 51.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 44 | 35.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 65.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mizuno_midori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mizuno_midori_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|---:|---:|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, gloves, dress, card_(medium), character_name, gem_(symbol), hair_ornament, open_mouth, smile |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, hair_bow, looking_at_viewer, solo, open_mouth, :d, earrings, necklace, parted_bangs, bare_shoulders, bracelet, green_dress, hand_up, medium_breasts, simple_background, sleeveless_dress, white_background |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, serafuku, short_sleeves, solo, white_background, closed_mouth, hair_ribbon, looking_at_viewer, neckerchief, pleated_skirt, simple_background, white_shirt, blue_sailor_collar, blue_skirt, mouth_hold, navel, red_ribbon |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | gloves | dress | card_(medium) | character_name | gem_(symbol) | hair_ornament | open_mouth | smile | blush | hair_bow | looking_at_viewer | :d | earrings | necklace | parted_bangs | bare_shoulders | bracelet | green_dress | hand_up | medium_breasts | simple_background | sleeveless_dress | white_background | serafuku | short_sleeves | closed_mouth | hair_ribbon | neckerchief | pleated_skirt | white_shirt | blue_sailor_collar | blue_skirt | mouth_hold | navel | red_ribbon |
|---:|---:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | | | | X | | X | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X |
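The package table above also lists IMG+TXT variants, but only the raw loading path is shown in the card. Below is a minimal sketch of loading the `800` package directly, assuming the IMG+TXT type means each image ships with a same-named `.txt` file of comma-separated tags; the archive's internal layout and the image extensions are assumptions, not documented here:

```python
import os
import zipfile
from glob import glob

from huggingface_hub import hf_hub_download
from PIL import Image

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/mizuno_midori_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its sibling .txt tag file
# (assumption: one same-named .txt per image; extensions may vary)
for img_path in sorted(glob(os.path.join(dataset_dir, '**', '*.png'), recursive=True)):
    txt_path = os.path.splitext(img_path)[0] + '.txt'
    if not os.path.exists(txt_path):
        continue
    with open(txt_path, encoding='utf-8') as f:
        tags = [t.strip() for t in f.read().split(',') if t.strip()]
    with Image.open(img_path) as image:
        print(img_path, image.size, tags[:5])
```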
CyberHarem/mizuno_midori_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T19:39:04+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:04:58+00:00
[]
[]
0ffe5cc695b9b3e4482ed37b36cd2e9059986f45
# Dataset of shiraishi_tsumugi/白石紬/시라이시츠구미 (THE iDOLM@STER: Million Live!)

This is the dataset of shiraishi_tsumugi/白石紬/시라이시츠구미 (THE iDOLM@STER: Million Live!), containing 475 images and their tags.

The core tags of this character are `long_hair, blue_eyes, bangs, hair_ornament, blue_hair, breasts, very_long_hair, hairclip`, which are pruned in this dataset.

Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 475 | 504.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shiraishi_tsumugi_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 475 | 317.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shiraishi_tsumugi_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1121 | 661.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shiraishi_tsumugi_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 475 | 458.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shiraishi_tsumugi_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1121 | 887.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shiraishi_tsumugi_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/shiraishi_tsumugi_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_jacket, blush, closed_mouth, collarbone, solo, white_dress, looking_at_viewer, open_jacket, puffy_short_sleeves, simple_background, white_background, grey_hair | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bare_shoulders, elbow_gloves, looking_at_viewer, solo, white_dress, white_gloves, blush, fox_mask, grey_hair, simple_background, smile, closed_mouth, hair_flower, holding_mask, white_background, white_flower, full_body, hair_ribbon, high_heels, low-tied_long_hair, sash, sleeveless_dress, squatting | | 2 | 11 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, blush, dress, looking_at_viewer, smile, bare_shoulders, flower, grey_hair, sitting | | 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, solo, blush, looking_at_viewer, floral_print, obi, simple_background, white_background, blue_kimono, closed_mouth, long_sleeves, wide_sleeves, grey_hair, upper_body, hair_flower, smile, yukata | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, closed_mouth, looking_at_viewer, navel, solo, cleavage, collarbone, low-tied_long_hair, medium_breasts, side-tie_bikini_bottom, bare_shoulders, blue_bikini, groin, simple_background, white_background, bow, cowboy_shot, floral_print, grey_hair, halterneck, white_bikini | | 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, solo, blush, collarbone, looking_at_viewer, navel, cleavage, blue_sky, day, outdoors, medium_breasts, white_bikini, cloud, ocean, bare_shoulders, open_mouth, beach, closed_mouth, large_breasts, low-tied_long_hair, micro_bikini, side-tie_bikini_bottom, upper_body, water, white_hair | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, collared_shirt, solo, white_shirt, long_sleeves, looking_at_viewer, upper_body, jacket, simple_background, white_background, blush, capelet, deerstalker, detective, holding, red_necktie, white_gloves | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | 
![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, looking_at_viewer, short_sleeves, solo, white_shirt, blue_skirt, blush, collared_shirt, open_mouth, plaid_skirt, school_uniform, blue_necktie, pleated_skirt, striped_necktie, bed_sheet, dress_shirt, medium_breasts, socks | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, 1boy, blush, hetero, nipples, navel, penis, pussy, solo_focus, sex, sweat, open_mouth, medium_breasts, pubic_hair, vaginal, collarbone, completely_nude, cum, spread_legs, bar_censor, looking_at_viewer, mosaic_censoring, pov, small_breasts, straddling | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | blush | closed_mouth | collarbone | solo | white_dress | looking_at_viewer | open_jacket | puffy_short_sleeves | simple_background | white_background | grey_hair | bare_shoulders | elbow_gloves | white_gloves | fox_mask | smile | hair_flower | holding_mask | white_flower | full_body | hair_ribbon | high_heels | low-tied_long_hair | sash | sleeveless_dress | squatting | dress | flower | sitting | floral_print | obi | blue_kimono | long_sleeves | wide_sleeves | upper_body | yukata | navel | cleavage | medium_breasts | side-tie_bikini_bottom | blue_bikini | groin | bow | cowboy_shot | halterneck | white_bikini | blue_sky | day | outdoors | cloud | ocean | open_mouth | beach | large_breasts | micro_bikini | water | white_hair | collared_shirt | white_shirt | jacket | capelet | deerstalker | detective | holding | red_necktie | short_sleeves | blue_skirt | plaid_skirt | school_uniform | blue_necktie | pleated_skirt | striped_necktie | bed_sheet | dress_shirt | socks | 1boy | hetero | nipples | penis | pussy | solo_focus | sex | sweat | pubic_hair | vaginal | completely_nude | cum | spread_legs | bar_censor | mosaic_censoring | pov | small_breasts | straddling | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:--------|:---------------|:-------------|:-------|:--------------|:--------------------|:--------------|:----------------------|:--------------------|:-------------------|:------------|:-----------------|:---------------|:---------------|:-----------|:--------|:--------------|:---------------|:---------------|:------------|:--------------|:-------------|:---------------------|:-------|:-------------------|:------------|:--------|:---------|:----------|:---------------|:------|:--------------|:---------------|:---------------|:-------------|:---------|:--------|:-----------|:-----------------|:-------------------------|:--------------|:--------|:------|:--------------|:-------------|:---------------|:-----------|:------|:-----------|:--------|:--------|:-------------|:--------|:----------------|:---------------|:--------|:-------------|:-----------------|:--------------|:---------|:----------|:--------------|:------------|:----------|:--------------|:----------------|:-------------|:--------------|:-----------------|:---------------|:----------------|:------------------|:------------|:--------------|:--------|:-------|:---------|:----------|:--------|:--------|:-------------|:------|:--------|:-------------|:----------|:------------------|:------|:--------------|:-------------|:-------------------|:------|:----------------|:-------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 11 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | | | X | | X | | | | | X | X | | | | X | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | | X | | X | | | X | X | X | | | | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | X | | X | | | X | X | X | X | | | | | | | | | | | X | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | X | X | | X | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | X | | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | | X | | X | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 8 | 14 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
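The clustering pipeline behind these tables is not documented in the card. As a rough sketch of how comparable clusters could be mined from an extracted IMG+TXT package, one can binarize the per-image tag lists and run k-means over them; scikit-learn, the `dataset_800` directory name, and the choice of 9 clusters (matching the rows above) are all assumptions, not the DeepGHS Team's actual tooling:

```python
import os
from glob import glob

from sklearn.cluster import KMeans
from sklearn.preprocessing import MultiLabelBinarizer

# read per-image tag sets from an extracted IMG+TXT package
tag_sets = []
for txt_path in sorted(glob(os.path.join('dataset_800', '**', '*.txt'), recursive=True)):
    with open(txt_path, encoding='utf-8') as f:
        tag_sets.append({t.strip() for t in f.read().split(',') if t.strip()})

# binary matrix: one row per image, one column per tag
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(tag_sets)

# cluster images by tag co-occurrence (9 clusters, matching the table above)
km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(X)

# print the most frequent tags per cluster, akin to the "Tags" column
for cid in range(km.n_clusters):
    rows = X[km.labels_ == cid]
    freq = rows.mean(axis=0)
    top = [tag for _, tag in sorted(zip(freq, mlb.classes_), reverse=True)[:10]]
    print(f'cluster {cid}: {len(rows)} samples, top tags: {top}')
```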
CyberHarem/shiraishi_tsumugi_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T20:38:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T00:20:38+00:00
[]
[]
e3ccab9f1dd274860bddc9b6fd561c84d8330fc3
ChatGPT is used to synthesize paragraphs at two CEFR levels (B1, C2), using a list of verbs (vocabulary) for two topics (Politics, Economy).

The code for data creation is available on [Github](https://github.com/Faizan-E-Mustafa/DEExtract).

## Cite

```
@INPROCEEDINGS{10391702,
  author={Mustafa, Faizan E},
  booktitle={2023 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT)},
  title={DEExtract: A Customizable Context-Based German Vocabulary Learning Tool},
  year={2023},
  volume={},
  number={},
  pages={65-69},
  doi={10.1109/3ICT60104.2023.10391702}}
```

## Disclaimer

The dataset is created using ChatGPT, and further use is permitted as long as that use complies with the [Terms and Conditions](https://openai.com/policies/terms-of-use) of OpenAI.
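The card gives no loading example; a minimal sketch with the `datasets` library follows. The split name and column layout are assumptions, since the schema is not documented here — inspect `ds.column_names` after loading:

```python
from datasets import load_dataset

# load the synthesized B1/C2 paragraphs (split name is an assumption)
ds = load_dataset('femustafa/DEExtract', split='train')

print(ds.column_names)
print(ds[0])
```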
femustafa/DEExtract
[ "task_categories:text-generation", "task_categories:feature-extraction", "language:de", "license:openrail", "mlsum", "wirtschaft", "politik", "region:us" ]
2023-09-15T20:45:22+00:00
{"language": ["de"], "license": "openrail", "task_categories": ["text-generation", "feature-extraction"], "tags": ["mlsum", "wirtschaft", "politik"]}
2024-01-23T12:09:51+00:00
[]
[ "de" ]
343f1fc4a35b3bd653d39cb2f15cc14e6be48e91
# Dataset of asari_nanami/浅利七海/아사리나나미 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of asari_nanami/浅利七海/아사리나나미 (THE iDOLM@STER: Cinderella Girls), containing 262 images and their tags.

The core tags of this character are `long_hair, blue_hair, hair_ornament, blue_eyes, bangs, fish_hair_ornament, hair_rings, very_long_hair, breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 262 | 316.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asari_nanami_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 262 | 192.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asari_nanami_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 608 | 400.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asari_nanami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 262 | 281.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asari_nanami_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 608 | 542.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asari_nanami_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/asari_nanami_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, white_shirt, blue_sailor_collar, looking_at_viewer, open_mouth, serafuku, short_sleeves, blunt_bangs, blush, simple_background, upper_body, upper_teeth_only, :d, collarbone, white_background | | 1 | 16 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, blush, looking_at_viewer, open_mouth, :d, white_background, collarbone, simple_background, cleavage, upper_body, bare_shoulders, upper_teeth_only, dress, medium_breasts, swimsuit | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, :d, brown_dress, long_sleeves, open_mouth, white_shirt, blush, collared_shirt, solo, upper_teeth_only, neck_ribbon, white_background, pinafore_dress, simple_background, stuffed_animal, looking_at_viewer, object_hug, round_teeth, holding, red_ribbon, pink_ribbon, pleated_dress | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, simple_background, solo, blush, brown_vest, closed_mouth, collared_shirt, long_sleeves, smile, white_background, white_shirt, blunt_bangs, holding_stuffed_toy, neck_ribbon, object_hug, red_ribbon, school_uniform, stuffed_animal, upper_body, brown_skirt | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_dress, cat_ears, long_sleeves, looking_at_viewer, maid_headdress, open_mouth, solo, :d, blush, enmaided, maid_apron, paw_pose, upper_teeth_only, white_apron, bell, fake_animal_ears, green_bowtie, black_footwear, frilled_apron, full_body, mary_janes, puffy_sleeves, simple_background, standing, white_background | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, looking_at_viewer, open_mouth, shirt, star_hair_ornament, :d, blush, head_fins, mermaid, puffy_short_sleeves, round_teeth, solo, upper_teeth_only, day, frills, jewelry, sparkle, blue_sky, blurry, cloud, dress, holding, layered_skirt, ocean, outdoors, pearl_(gemstone), underwater, wrist_cuffs | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | blush, shoes, socks, 
1girl, black_buruma, double_bun, gym_shirt, long_sleeves, open_jacket, pink_footwear, red_jacket, shoe_soles, track_jacket, white_background, white_shirt, :d, gym_uniform, open_mouth, short_sleeves, standing, twintails, upper_teeth_only, 1boy, 2girls, ^_^, ass, black_shorts, collarbone, facing_viewer, hair_bow, red_bow, short_shorts, simple_background, sitting, squiggle, sweat | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | white_shirt | blue_sailor_collar | looking_at_viewer | open_mouth | serafuku | short_sleeves | blunt_bangs | blush | simple_background | upper_body | upper_teeth_only | :d | collarbone | white_background | cleavage | bare_shoulders | dress | medium_breasts | swimsuit | brown_dress | long_sleeves | collared_shirt | neck_ribbon | pinafore_dress | stuffed_animal | object_hug | round_teeth | holding | red_ribbon | pink_ribbon | pleated_dress | brown_vest | closed_mouth | smile | holding_stuffed_toy | school_uniform | brown_skirt | black_dress | cat_ears | maid_headdress | enmaided | maid_apron | paw_pose | white_apron | bell | fake_animal_ears | green_bowtie | black_footwear | frilled_apron | full_body | mary_janes | puffy_sleeves | standing | shirt | star_hair_ornament | head_fins | mermaid | puffy_short_sleeves | day | frills | jewelry | sparkle | blue_sky | blurry | cloud | layered_skirt | ocean | outdoors | pearl_(gemstone) | underwater | wrist_cuffs | shoes | socks | black_buruma | double_bun | gym_shirt | open_jacket | pink_footwear | red_jacket | shoe_soles | track_jacket | gym_uniform | twintails | 1boy | 2girls | ^_^ | ass | black_shorts | facing_viewer | hair_bow | red_bow | short_shorts | sitting | squiggle | sweat | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:---------------------|:--------------------|:-------------|:-----------|:----------------|:--------------|:--------|:--------------------|:-------------|:-------------------|:-----|:-------------|:-------------------|:-----------|:-----------------|:--------|:-----------------|:-----------|:--------------|:---------------|:-----------------|:--------------|:-----------------|:-----------------|:-------------|:--------------|:----------|:-------------|:--------------|:----------------|:-------------|:---------------|:--------|:----------------------|:-----------------|:--------------|:--------------|:-----------|:-----------------|:-----------|:-------------|:-----------|:--------------|:-------|:-------------------|:---------------|:-----------------|:----------------|:------------|:-------------|:----------------|:-----------|:--------|:---------------------|:------------|:----------|:----------------------|:------|:---------|:----------|:----------|:-----------|:---------|:--------|:----------------|:--------|:-----------|:-------------------|:-------------|:--------------|:--------|:--------|:---------------|:-------------|:------------|:--------------|:----------------|:-------------|:-------------|:---------------|:--------------|:------------|:-------|:---------|:------|:------|:---------------|:----------------|:-----------|:----------|:---------------|:----------|:-----------|:--------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | 
X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 16 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | X | | | | X | X | | X | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | X | | | | X | X | X | X | | | | X | | | | | | | X | X | X | | X | X | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | X | X | | | | X | X | | X | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | X | X | | | | X | | | X | X | | | | | X | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | | X | | X | | X | X | | X | X | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
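Before training on one of the packages above, it can be useful to check which tags remain after the core tags are pruned. A stdlib-only sketch, assuming the IMG+TXT layout (same-named `.txt` tag files) and an extraction directory named `dataset_800`:

```python
import os
from collections import Counter
from glob import glob

counter = Counter()
for txt_path in glob(os.path.join('dataset_800', '**', '*.txt'), recursive=True):
    with open(txt_path, encoding='utf-8') as f:
        counter.update(t.strip() for t in f.read().split(',') if t.strip())

# the pruned core tags (long_hair, blue_hair, ...) should not show up here
for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```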
CyberHarem/asari_nanami_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T20:49:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T17:39:42+00:00
[]
[]
9f882720752d42ecc576fb255e4ef5685caecde1
# Dataset of okuyama_saori/奥山沙織 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of okuyama_saori/奥山沙織 (THE iDOLM@STER: Cinderella Girls), containing 64 images and their tags.

The core tags of this character are `long_hair, brown_hair, braid, brown_eyes, glasses, hair_ornament, freckles, twin_braids, ahoge, hairclip`, which are pruned in this dataset.

Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 64 | 51.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okuyama_saori_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 64 | 41.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okuyama_saori_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 129 | 76.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okuyama_saori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 64 | 50.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okuyama_saori_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 129 | 89.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/okuyama_saori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/okuyama_saori_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|---:|---:|:---|:---|:---|:---|:---|:---|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, blush, looking_at_viewer, open_mouth, bow, dress |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | blush | looking_at_viewer | open_mouth | bow | dress |
|---:|---:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X |
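The `800` and `1200` packages are described as the raw images downscaled so the shorter side does not exceed the given limit. Below is a sketch reproducing that transform locally with Pillow, assuming PNG inputs; the resampling filter used upstream is unknown, so LANCZOS here is an assumption:

```python
import os
from glob import glob

from PIL import Image

def shrink_to_short_side(src_dir: str, dst_dir: str, max_short: int = 800) -> None:
    """Downscale images so the shorter side does not exceed max_short pixels."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in glob(os.path.join(src_dir, '**', '*.png'), recursive=True):
        with Image.open(path) as img:
            short = min(img.size)
            if short > max_short:
                scale = max_short / short
                new_size = (round(img.width * scale), round(img.height * scale))
                img = img.resize(new_size, Image.LANCZOS)  # filter choice is an assumption
            # flat output dir: name collisions across subfolders are not handled
            img.save(os.path.join(dst_dir, os.path.basename(path)))

shrink_to_short_side('dataset_dir', 'dataset_800_local', max_short=800)
```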
CyberHarem/okuyama_saori_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T21:05:12+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:05:11+00:00
[]
[]
0ad789eff5d74322d00faade281e2bcc49dc51eb
# Dataset of takahashi_reiko/高橋礼子 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of takahashi_reiko/高橋礼子 (THE iDOLM@STER: Cinderella Girls), containing 57 images and their tags.

The core tags of this character are `long_hair, purple_eyes, breasts, black_hair, large_breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 57 | 46.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takahashi_reiko_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 57 | 38.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takahashi_reiko_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 109 | 64.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takahashi_reiko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 57 | 44.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takahashi_reiko_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 109 | 73.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takahashi_reiko_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/takahashi_reiko_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|---:|---:|:---|:---|:---|:---|:---|:---|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, cleavage, looking_at_viewer, necklace, smile, dress, brown_hair, hair_flower |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | looking_at_viewer | necklace | smile | dress | brown_hair | hair_flower |
|---:|---:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X |
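The stage3 packages are described as 3-stage crops whose area is not less than 480x480 pixels. A small verification sketch for an extracted stage3 directory follows; the `dataset_stage3` path and the PNG extension are assumptions:

```python
import os
from glob import glob

from PIL import Image

MIN_AREA = 480 * 480  # per the stage3 package description

undersized = []
for path in glob(os.path.join('dataset_stage3', '**', '*.png'), recursive=True):
    with Image.open(path) as img:
        if img.width * img.height < MIN_AREA:
            undersized.append((path, img.size))

print(f'{len(undersized)} crops below the 480x480 area threshold')
for path, size in undersized[:10]:
    print(path, size)
```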
CyberHarem/takahashi_reiko_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T21:21:26+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:26:23+00:00
[]
[]
c9aad042651d5664b2b0d590bde157b631b8a2a7
# Dataset of aino_nagisa (THE iDOLM@STER: Cinderella Girls)

This is the dataset of aino_nagisa (THE iDOLM@STER: Cinderella Girls), containing 37 images and their tags.

The core tags of this character are `brown_hair, long_hair, ponytail, brown_eyes, breasts, ribbon`, which are pruned in this dataset.

Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----|-------:|:-----|:---------|:-----|:------------|
| raw | 37 | 34.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aino_nagisa_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 37 | 23.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aino_nagisa_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 78 | 45.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aino_nagisa_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 37 | 31.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aino_nagisa_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 78 | 58.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aino_nagisa_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/aino_nagisa_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract the files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|---:|---:|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, card_(medium), character_name, sun_symbol, open_mouth, shorts, jewelry, orange_background, skirt |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, cowboy_shot, high_ponytail, looking_at_viewer, navel, solo, standing, armpits, collarbone, crop_top, groin, hair_intakes, large_breasts, midriff, red_eyes, sidelocks, sleeveless_shirt, tied_shirt, white_skirt, bike_shorts, black_shorts, blush, cleavage, detached_sleeves, open_mouth, short_shorts, very_long_hair, :d, arm_warmers, arms_up, ball, bare_shoulders, grin, hair_bow, hair_ribbon, holding, medium_breasts, necklace, one_eye_closed, parted_bangs, sportswear, stomach, white_background, wristband |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | card_(medium) | character_name | sun_symbol | open_mouth | shorts | jewelry | orange_background | skirt | cowboy_shot | high_ponytail | looking_at_viewer | navel | standing | armpits | collarbone | crop_top | groin | hair_intakes | large_breasts | midriff | red_eyes | sidelocks | sleeveless_shirt | tied_shirt | white_skirt | bike_shorts | black_shorts | blush | cleavage | detached_sleeves | short_shorts | very_long_hair | :d | arm_warmers | arms_up | ball | bare_shoulders | grin | hair_bow | hair_ribbon | holding | medium_breasts | necklace | one_eye_closed | parted_bangs | sportswear | stomach | white_background | wristband |
|---:|---:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
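The processed packages from the table above can be fetched the same way as the raw one. Below is a minimal sketch for the 800px IMG+TXT package, using the `dataset-800.zip` filename from the download links; the flat extraction layout is an assumption carried over from the raw package.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the package table
zip_file = hf_hub_download(
    repo_id='CyberHarem/aino_nagisa_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the images and their .txt tag files for training
train_dir = 'train_dir'
os.makedirs(train_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(train_dir)
```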
CyberHarem/aino_nagisa_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T21:31:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:43:19+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
c33413169d3e48a58c285571c37204c416823da6
# Dataset Card for "v2_sinespacios" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
anaisk/v2_sinespacios
[ "region:us" ]
2023-09-15T21:31:54+00:00
{"dataset_info": {"features": [{"name": "Sentence", "dtype": "string"}, {"name": "Audio", "dtype": "audio"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 314514171.93, "num_examples": 9730}], "download_size": 357778902, "dataset_size": 314514171.93}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-15T21:36:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "v2_sinespacios" More Information needed
[ "# Dataset Card for \"v2_sinespacios\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"v2_sinespacios\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"v2_sinespacios\"\n\nMore Information needed" ]
53ef04c5b5fa246edb30e5204e4c127724e8599e
# Dataset of momoi_azuki/桃井あずき (THE iDOLM@STER: Cinderella Girls)

This is the dataset of momoi_azuki/桃井あずき (THE iDOLM@STER: Cinderella Girls), containing 66 images and their tags.

The core tags of this character are `brown_eyes, black_hair, breasts, hair_bun, single_hair_bun, brown_hair, hair_ornament, hair_flower`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 66 | 42.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momoi_azuki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 66 | 35.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momoi_azuki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 119 | 60.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momoi_azuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 66 | 41.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momoi_azuki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 119 | 68.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momoi_azuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/momoi_azuki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, looking_at_viewer, solo, large_breasts, navel, nipples, open_mouth, smile, white_panties, open_clothes, pink_panties, red_eyes, bow, bra_removed, lying, medium_breasts, panty_pull, pink_bra, serafuku, shirt, skirt_removed | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, flower, kimono, smile, long_hair, open_mouth, blush, looking_at_viewer, medium_breasts | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 2girls, blush, open_mouth, long_hair, solo_focus, :d, long_sleeves, looking_at_viewer, bangs, serafuku, simple_background, upper_body, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | solo | large_breasts | navel | nipples | open_mouth | smile | white_panties | open_clothes | pink_panties | red_eyes | bow | bra_removed | lying | medium_breasts | panty_pull | pink_bra | serafuku | shirt | skirt_removed | flower | kimono | long_hair | 2girls | solo_focus | :d | long_sleeves | bangs | simple_background | upper_body | white_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-------|:----------------|:--------|:----------|:-------------|:--------|:----------------|:---------------|:---------------|:-----------|:------|:--------------|:--------|:-----------------|:-------------|:-----------|:-----------|:--------|:----------------|:---------|:---------|:------------|:---------|:-------------|:-----|:---------------|:--------|:--------------------|:-------------|:-------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | X | X | | | | | | | | X | | | | | | X | X | X | | | | | | | | | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | X | | | | | X | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X |
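For the IMG+TXT packages, each image ships with a text file of tags. A minimal reader sketch, assuming the common IMG+TXT convention of a same-named `.txt` file with comma-separated tags next to each image (this layout is an assumption, not something the card spells out):

```python
import os
from glob import glob

from PIL import Image

dataset_dir = 'dataset_dir'  # directory extracted from an IMG+TXT package

for pattern in ('*.png', '*.jpg'):
    for image_path in sorted(glob(os.path.join(dataset_dir, pattern))):
        # assumed convention: tags live in a .txt file sharing the image's stem
        tag_path = os.path.splitext(image_path)[0] + '.txt'
        with open(tag_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        with Image.open(image_path) as image:
            print(image_path, image.size, tags)
```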
CyberHarem/momoi_azuki_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T21:48:05+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T19:54:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
9d6b83c7d675d080cd5bbc7f5d03d25da61c02d4
# Dataset of aikawa_chinatsu/相川千夏 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of aikawa_chinatsu/相川千夏 (THE iDOLM@STER: Cinderella Girls), containing 36 images and their tags.

The core tags of this character are `short_hair, brown_hair, glasses, brown_eyes, red-framed_eyewear`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 36 | 26.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aikawa_chinatsu_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 36 | 20.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aikawa_chinatsu_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 65 | 33.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aikawa_chinatsu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 36 | 25.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aikawa_chinatsu_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 65 | 41.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aikawa_chinatsu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/aikawa_chinatsu_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, necklace, beans, belt, setsubun, oni_mask, scarf | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, solo, simple_background, upper_body, white_background, smile, bangs, breasts, closed_mouth, holding, jewelry, white_shirt, yellow_eyes | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | necklace | beans | belt | setsubun | oni_mask | scarf | looking_at_viewer | simple_background | upper_body | white_background | smile | bangs | breasts | closed_mouth | holding | jewelry | white_shirt | yellow_eyes | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:--------|:-------|:-----------|:-----------|:--------|:--------------------|:--------------------|:-------------|:-------------------|:--------|:--------|:----------|:---------------|:----------|:----------|:--------------|:--------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
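Clusters like the ones above can be approximated directly from the raw package by filtering on `item.meta['tags']`, which the loading snippet already prints. A sketch, assuming `meta['tags']` supports membership tests by tag name (its exact structure is not documented in this card):

```python
from waifuc.source import LocalSource

# 'dataset_dir' is the directory extracted in the loading snippet above
source = LocalSource('dataset_dir')

# keep only images carrying the 'necklace' tag (cf. cluster #0 above)
matches = [item for item in source if 'necklace' in item.meta['tags']]
print(f'{len(matches)} images tagged necklace')
```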
CyberHarem/aikawa_chinatsu_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T21:57:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:42:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
5b6a8af3f449e91573377c58f471b258815ac2a7
# Dataset of anzai_miyako/安斎都 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of anzai_miyako/安斎都 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags.

The core tags of this character are `red_hair, short_hair, blue_eyes, bangs, hat, hair_ornament`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 44 | 38.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzai_miyako_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 44 | 27.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzai_miyako_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 98 | 55.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzai_miyako_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 44 | 35.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzai_miyako_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 98 | 71.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anzai_miyako_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/anzai_miyako_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bandaid_on_knee, solo, open_mouth, shoes, hairclip, looking_at_viewer, shorts, :d, bag, blush, holding, kneehighs, long_sleeves, magnifying_glass, simple_background, sitting, white_background | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, capelet, bow, looking_at_viewer, open_mouth, deerstalker, magnifying_glass, one_eye_closed, skirt, ;d, detective | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bandaid_on_knee | solo | open_mouth | shoes | hairclip | looking_at_viewer | shorts | :d | bag | blush | holding | kneehighs | long_sleeves | magnifying_glass | simple_background | sitting | white_background | smile | capelet | bow | deerstalker | one_eye_closed | skirt | ;d | detective | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:-------|:-------------|:--------|:-----------|:--------------------|:---------|:-----|:------|:--------|:----------|:------------|:---------------|:-------------------|:--------------------|:----------|:-------------------|:--------|:----------|:------|:--------------|:-----------------|:--------|:-----|:------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | | X | | | | | | | | X | | | | X | X | X | X | X | X | X | X |
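Since the core tags quoted at the top (`red_hair, short_hair, ...`) are pruned from this dataset, a caption builder for text-to-image training may want to prepend them again. A plain-Python sketch; the comma-separated caption format is an assumption, not something the card prescribes:

```python
# core tags pruned from this dataset, as quoted at the top of the card
CORE_TAGS = ['red_hair', 'short_hair', 'blue_eyes', 'bangs', 'hat', 'hair_ornament']


def build_caption(image_tags):
    """Prepend the pruned core tags to an image's own tag list."""
    return ', '.join(CORE_TAGS + [t for t in image_tags if t not in CORE_TAGS])


print(build_caption(['1girl', 'solo', 'open_mouth']))
# red_hair, short_hair, blue_eyes, bangs, hat, hair_ornament, 1girl, solo, open_mouth
```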
CyberHarem/anzai_miyako_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T22:07:29+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:42:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
df5ad79a62ebfb93a0e03392a840eb17040d6c08
# Dataset of nanao_yuriko/七尾百合子/나나오유리코 (THE iDOLM@STER: Million Live!)

This is the dataset of nanao_yuriko/七尾百合子/나나오유리코 (THE iDOLM@STER: Million Live!), containing 500 images and their tags.

The core tags of this character are `blue_hair, yellow_eyes, short_hair, breasts, braid, bangs, medium_breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 500 | 568.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanao_yuriko_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 356.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanao_yuriko_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1212 | 759.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanao_yuriko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 514.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nanao_yuriko_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1212 | 1.00 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nanao_yuriko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nanao_yuriko_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|---:|---:|:---|:---|:---|:---|:---|:---|
| 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, looking_at_viewer, blush, open_mouth, book |
| 1 | 26 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, blush, looking_at_viewer, open_mouth, frilled_bikini, navel, hair_flower, smile, cleavage, outdoors, collarbone, green_bikini |
| 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1boy, 1girl, blush, hetero, open_mouth, solo_focus, nipples, pussy, sex, vaginal, penis, lying, navel, completely_nude, mosaic_censoring, spread_legs |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1boy, 1girl, blush, fellatio, hetero, nude, penis, solo_focus, looking_at_viewer, pov, ass, mosaic_censoring, cum_in_mouth |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, serafuku, solo, looking_at_viewer, white_shirt, blush, navel, pleated_skirt, red_neckerchief, short_sleeves, fingerless_gloves, white_background, white_skirt, black_gloves, cape, closed_mouth, midriff, simple_background, black_thighhighs, medium_hair, white_sailor_collar |
| 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blush, collarbone, solo, cleavage, upper_body, looking_at_viewer, simple_background, white_background, armpits, arms_up, bow_bra, navel, small_breasts, smile, underwear_only |
| 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, looking_at_viewer, open_mouth, solo, :d, bow, detached_sleeves, white_dress, bare_shoulders, collarbone, frilled_dress, hair_ribbon, puffy_short_sleeves, sleeveless_dress, yellow_ribbon, blue_sky, day, mini_hat, outdoors, sailor_dress, white_headwear, wrist_cuffs, argyle, choker, cloud, ribbon_braid, sparkle, standing, tilted_headwear, white_sailor_collar, belt_buckle, blue_thighhighs, feathers, holding, outstretched_arm, pleated_dress, shiny_hair, white_background, white_sleeves |
| 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, detached_collar, looking_at_viewer, rabbit_ears, solo, wrist_cuffs, cleavage, fake_animal_ears, playboy_bunny, bare_shoulders, black_bowtie, blush, sitting, strapless_leotard, black_pantyhose, closed_mouth, covered_navel, cowboy_shot, holding_tray, rabbit_tail, smile, wine_glass |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | looking_at_viewer | blush | open_mouth | book | frilled_bikini | navel | hair_flower | cleavage | outdoors | collarbone | green_bikini | 1boy | hetero | solo_focus | nipples | pussy | sex | vaginal | penis | lying | completely_nude | mosaic_censoring | spread_legs | fellatio | nude | pov | ass | cum_in_mouth | serafuku | white_shirt | pleated_skirt | red_neckerchief | short_sleeves | fingerless_gloves | white_background | white_skirt | black_gloves | cape | closed_mouth | midriff | simple_background | black_thighhighs | medium_hair | white_sailor_collar | upper_body | armpits | arms_up | bow_bra | small_breasts | underwear_only | :d | bow | detached_sleeves | white_dress | bare_shoulders | frilled_dress | hair_ribbon | puffy_short_sleeves | sleeveless_dress | yellow_ribbon | blue_sky | day | mini_hat | sailor_dress | white_headwear | wrist_cuffs | argyle | choker | cloud | ribbon_braid | sparkle | standing | tilted_headwear | belt_buckle | blue_thighhighs | feathers | holding | outstretched_arm | pleated_dress | shiny_hair | white_sleeves | detached_collar | rabbit_ears | fake_animal_ears | playboy_bunny | black_bowtie | sitting | strapless_leotard | black_pantyhose | covered_navel | cowboy_shot | holding_tray | rabbit_tail | wine_glass |
|---:|---:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 26 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 18 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | X | X | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | | | | | | | | | | X | X | X | | | | | X | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | X | X | | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | X | X | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/nanao_yuriko_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T22:13:03+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-15T20:59:45+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
46c6251950f94c1ed62df8878b75435ba2dac571
# Dataset Card for "virus_dna_dedup_minihash_0.9_kmer_7" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hack90/virus_dna_dedup_minihash_0.9_kmer_7
[ "region:us" ]
2023-09-15T22:14:53+00:00
{"dataset_info": {"features": [{"name": "sequence_x", "dtype": "string"}, {"name": "similarity_filter", "dtype": "float64"}, {"name": "id", "dtype": "string"}, {"name": "sequence_y", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "features", "dtype": "int64"}, {"name": "seq_length", "dtype": "int64"}, {"name": "missing_seq_count", "dtype": "int64"}, {"name": "missingness", "dtype": "float64"}, {"name": "seq_filled", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "spaced_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 522191271, "num_examples": 10885}], "download_size": 234031394, "dataset_size": 522191271}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-22T21:04:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "virus_dna_dedup_minihash_0.9_kmer_7" More Information needed
[ "# Dataset Card for \"virus_dna_dedup_minihash_0.9_kmer_7\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"virus_dna_dedup_minihash_0.9_kmer_7\"\n\nMore Information needed" ]
[ 6, 26 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"virus_dna_dedup_minihash_0.9_kmer_7\"\n\nMore Information needed" ]
5b7d1b970ebb1bd37a151104010617e8027222cc
# Dataset Card for "hermes_labeled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vikp/hermes_labeled
[ "region:us" ]
2023-09-15T22:18:53+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "rendered", "dtype": "string"}, {"name": "quality_prob", "dtype": "float64"}, {"name": "learning_prob", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 624932230, "num_examples": 242831}], "download_size": 285527683, "dataset_size": 624932230}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-15T22:22:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hermes_labeled" More Information needed
[ "# Dataset Card for \"hermes_labeled\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hermes_labeled\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hermes_labeled\"\n\nMore Information needed" ]
17ffaea9bc54bd6abd1ac3bb304011f0027f335f
# Dataset of yanase_miyuki/柳瀬美由紀 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of yanase_miyuki/柳瀬美由紀 (THE iDOLM@STER: Cinderella Girls), containing 51 images and their tags.

The core tags of this character are `brown_hair, short_hair, hair_ornament, yellow_eyes, side_ponytail, brown_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 51 | 53.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanase_miyuki_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 51 | 33.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanase_miyuki_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 112 | 65.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanase_miyuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 51 | 47.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanase_miyuki_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 112 | 89.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanase_miyuki_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yanase_miyuki_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------| | 0 | 51 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, open_mouth, solo, blush, looking_at_viewer, :d | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | solo | blush | looking_at_viewer | :d | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------|:--------|:--------------------|:-----| | 0 | 51 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X |
CyberHarem/yanase_miyuki_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T22:35:12+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:55:18+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
46fa8ca0e90e2e624b3d763110d2f9d0d1fbbfc0
# Dataset of kurihara_nene/栗原ネネ (THE iDOLM@STER: Cinderella Girls)

This is the dataset of kurihara_nene/栗原ネネ (THE iDOLM@STER: Cinderella Girls), containing 47 images and their tags.

The core tags of this character are `long_hair, black_hair, blue_eyes, breasts, bangs`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:---|---:|:---|:---|:---|:---|
| raw | 47 | 33.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurihara_nene_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 47 | 25.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurihara_nene_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 82 | 42.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurihara_nene_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 47 | 31.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurihara_nene_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 82 | 52.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurihara_nene_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kurihara_nene_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined from the clusters below.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, black_eyes, blush, necklace, bare_shoulders, closed_mouth, looking_at_viewer, bracelet, earrings, frills, hair_bow, hair_flower, open_mouth, sleeveless_dress, upper_body | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | smile, blush, 1girl, navel, open_mouth, solo, braid, midriff, multiple_girls, sweat | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | black_eyes | blush | necklace | bare_shoulders | closed_mouth | looking_at_viewer | bracelet | earrings | frills | hair_bow | hair_flower | open_mouth | sleeveless_dress | upper_body | navel | braid | midriff | multiple_girls | sweat | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------|:--------|:-----------|:-----------------|:---------------|:--------------------|:-----------|:-----------|:---------|:-----------|:--------------|:-------------|:-------------------|:-------------|:--------|:--------|:----------|:-----------------|:--------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | | | | | | | | | X | | | X | X | X | X | X |
CyberHarem/kurihara_nene_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T22:53:40+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:52:05+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of kurihara\_nene/栗原ネネ (THE iDOLM@STER: Cinderella Girls) ================================================================= This is the dataset of kurihara\_nene/栗原ネネ (THE iDOLM@STER: Cinderella Girls), containing 47 images and their tags. The core tags of this character are 'long\_hair, black\_hair, blue\_eyes, breasts, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
83df33413316ad1dba475c8544789a4986ffa8ec
# Dataset Card for "cs323_densepred_depth" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shariqfarooq/cs323_densepred_depth
[ "region:us" ]
2023-09-15T23:00:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "depth", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 651397023.7943412, "num_examples": 25356}, {"name": "test", "num_bytes": 13440344.421658808, "num_examples": 518}], "download_size": 343390111, "dataset_size": 664837368.216}}
2023-09-15T23:02:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cs323_densepred_depth" More Information needed
[ "# Dataset Card for \"cs323_densepred_depth\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cs323_densepred_depth\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cs323_densepred_depth\"\n\nMore Information needed" ]
c1b88cfa68410174122a623e483f11c16dbe9e4b
# Dataset of yanagi_kiyora/柳清良 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of yanagi_kiyora/柳清良 (THE iDOLM@STER: Cinderella Girls), containing 39 images and their tags.

The core tags of this character are `brown_hair, breasts, earrings, hat, green_eyes, large_breasts, nurse_cap`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size      | Download                                                                                                                                  | Type       | Description                                                           |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:----------------------------------------------------------------------|
| raw              |       39 | 41.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanagi_kiyora_idolmastercinderellagirls/resolve/main/dataset-raw.zip)                 | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       39 | 26.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanagi_kiyora_idolmastercinderellagirls/resolve/main/dataset-800.zip)                 | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |       84 | 51.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanagi_kiyora_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)     | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       39 | 38.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanagi_kiyora_idolmastercinderellagirls/resolve/main/dataset-1200.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |       84 | 71.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yanagi_kiyora_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yanagi_kiyora_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------| | 0 | 39 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, jewelry, looking_at_viewer, cleavage, blush, dress, open_mouth | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | jewelry | looking_at_viewer | cleavage | blush | dress | open_mouth | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:----------|:--------------------|:-----------|:--------|:--------|:-------------| | 0 | 39 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X |
CyberHarem/yanagi_kiyora_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T23:09:50+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:53:32+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of yanagi\_kiyora/柳清良 (THE iDOLM@STER: Cinderella Girls) ================================================================ This is the dataset of yanagi\_kiyora/柳清良 (THE iDOLM@STER: Cinderella Girls), containing 39 images and their tags. The core tags of this character are 'brown\_hair, breasts, earrings, hat, green\_eyes, large\_breasts, nurse\_cap', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
99081df704c9313387a93fab40320b66a2ce3274
# Dataset Card for "nbs_and_pypi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vikp/nbs_and_pypi
[ "region:us" ]
2023-09-15T23:25:48+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "chunk_prompt", "dtype": "bool"}, {"name": "kind", "dtype": "string"}, {"name": "prob", "dtype": "float64"}, {"name": "path", "dtype": "string"}, {"name": "quality_prob", "dtype": "float64"}, {"name": "learning_prob", "dtype": "float64"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7581106764.450301, "num_examples": 870654}], "download_size": 3062286174, "dataset_size": 7581106764.450301}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-15T23:49:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nbs_and_pypi" More Information needed
[ "# Dataset Card for \"nbs_and_pypi\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nbs_and_pypi\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"nbs_and_pypi\"\n\nMore Information needed" ]
28803ea2b8d119e6fd8e59b10d9ab98e4855c6f9
# Dataset Card for heuristic_classification-filtered-pile-50M

## Dataset Description

- **Repository:** https://github.com/p-lambda/dsir
- **Paper:** https://arxiv.org/abs/2302.03169
- **Point of Contact:** Sang Michael Xie <[email protected]>

### Dataset Summary

This dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification consists of the Wikipedia and BookCorpus2 subsets of The Pile.

### Languages

English (EN)

## Dataset Structure

A training set of 51.2M examples is provided in JSONL format.

### Data Instances

```
{"contents": "Members join for free and will have access to all of our earning verticals, including, but not limited to, watching videos, shopping for cash back, taking surveys, and redeeming special offers. Swagbucks is the web's leading rewards platform, dedicated to providing FREE gift cards to its 12+ million members. Choose from top retailers like Amazon, Target, Walmart, Starbucks, PayPal, and tons more.dead full espanol tle work is running out. You\u2019re given a descargar land of the dead full espanol but that respect it\u2019s tons of one another. When the screen. With the pluses gained from a ledge, your arms or abandons your name suggests, Inferno has locked on a dash for a poozer, it\u2019s placed in their shadowing skills. These controls forward, backward, and frankly, the straights. You can also have expected, but that\u2019s unlike anything particularly adept pacing. Each win by so rough idea that\u2019s worth it up. There are a neat sensation to play of a fresh\n\nthe voice actors give up with content and the same innovative control scheme that pulls you invested. From the movement. The unique art style and is still remarkably tough. You\u2019re not", "metadata": {"pile_set_name": ["Pile-CC", "Pile-CC"]}, "id": 303}
```

### Data Fields

```
"contents": the text
"metadata": contains information about the source(s) the text comes from. Multiple sources mean that the example was concatenated from two sources.
"id": Ignore - a non-unique identifier
```

## Dataset Creation
We first select 102.4M examples, then concatenate every two examples to create 51.2M examples.
This ensures that the examples are long enough for a max token length of 512 without much padding.
We train the fastText binary classifier for heuristic classification on The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.
We first select 98.4M examples from non-Wikipedia and non-book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3.
After this, we concatenate every two examples.

### Source Data
The Pile

#### Initial Data Collection and Normalization
We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.
We first divide the documents in The Pile into chunks of 128 words, according to whitespace tokenization.
These chunks define the examples on which we perform data selection, totaling 1.7B examples.
Before heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter.

## Considerations for Using the Data

The dataset is biased towards data from non-Wikipedia and non-book sources. A balanced approach would be to mix in more data from Wikipedia and books.
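As a concrete illustration of the selection step described under "Dataset Creation", here is a minimal fastText sketch. It is an assumption-laden example, not the authors' exact implementation: the file name `train.txt`, the label names, and the fixed 0.5 cutoff are all invented for illustration, and the paper's actual selection rule may differ.

```python
import fasttext  # pip install fasttext

# Assumption: 'train.txt' holds fastText-formatted lines such as
# "__label__target <text>" (Wikipedia/books) and "__label__raw <text>"
# (other Pile sources), built from The Pile validation set.
model = fasttext.train_supervised(input='train.txt')

def target_prob(text: str) -> float:
    # fastText's predict() rejects newlines, so flatten the example first.
    labels, probs = model.predict(text.replace('\n', ' '))
    return probs[0] if labels[0] == '__label__target' else 1.0 - probs[0]

examples = [
    "The French Revolution was a period of major political change in France.",
    "click here for FREE gift cards and cash back at top retailers!!!",
]

# Keep examples the classifier scores as target-like; the fixed 0.5 cutoff
# is an illustrative assumption, not the paper's exact selection rule.
selected = [ex for ex in examples if target_prob(ex) > 0.5]
print(len(selected))
```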
### Dataset Curators Sang Michael Xie, Shibani Santurkar ### Citation Information Paper: <https://arxiv.org/abs/2302.03169> ``` @article{xie2023data, author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang}, journal = {arXiv preprint arXiv:2302.03169}, title = {Data Selection for Language Models via Importance Resampling}, year = {2023}, } ```
stanford-crfm/heuristic_classification-filtered-pile-50M
[ "size_categories:10M<n<100M", "language:en", "license:mit", "arxiv:2302.03169", "region:us" ]
2023-09-15T23:31:34+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10M<n<100M"]}
2023-09-16T15:06:56+00:00
[ "2302.03169" ]
[ "en" ]
TAGS #size_categories-10M<n<100M #language-English #license-mit #arxiv-2302.03169 #region-us
# Dataset Card for heuristic_classification-filtered-pile-50M ## Dataset Description - Repository: URL - Paper: URL - Point of Contact: Sang Michael Xie <xie@URL> ### Dataset Summary This dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification are the Wikipedia and BookCorpus2 subsets of The Pile. ### Languages English (EN) ## Dataset Structure A train set is provided (51.2M examples) in jsonl format. ### Data Instances ### Data Fields ## Dataset Creation We first select 102.4M examples then concatenate every two examples to create 51.2M examples. This ensures that the examples are long enough for a max token length of 512 without much padding. We train the fasttext binary classifier for heuristic classification from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile. We first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. After this, we concatenate every two examples. ### Source Data The Pile #### Initial Data Collection and Normalization We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks. We first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization. These chunks define the examples that we do data selection on, totaling 1.7B examples. Before heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter. ## Considerations for Using the Data The dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books. ### Dataset Curators Sang Michael Xie, Shibani Santurkar Paper: <URL
[ "# Dataset Card for heuristic_classification-filtered-pile-50M", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: Sang Michael Xie <xie@URL>", "### Dataset Summary\n\nThis dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification are the Wikipedia and BookCorpus2 subsets of The Pile.", "### Languages\n\nEnglish (EN)", "## Dataset Structure\n\nA train set is provided (51.2M examples) in jsonl format.", "### Data Instances", "### Data Fields", "## Dataset Creation\nWe first select 102.4M examples then concatenate every two examples to create 51.2M examples.\nThis ensures that the examples are long enough for a max token length of 512 without much padding.\nWe train the fasttext binary classifier for heuristic classification from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.\nWe first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. \nAfter this, we concatenate every two examples.", "### Source Data\nThe Pile", "#### Initial Data Collection and Normalization\nWe select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.\nWe first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization.\nThese chunks define the examples that we do data selection on, totaling 1.7B examples.\nBefore heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter.", "## Considerations for Using the Data\n\nThe dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books.", "### Dataset Curators\n\nSang Michael Xie, Shibani Santurkar\n\n\nPaper: <URL" ]
[ "TAGS\n#size_categories-10M<n<100M #language-English #license-mit #arxiv-2302.03169 #region-us \n", "# Dataset Card for heuristic_classification-filtered-pile-50M", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: Sang Michael Xie <xie@URL>", "### Dataset Summary\n\nThis dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification are the Wikipedia and BookCorpus2 subsets of The Pile.", "### Languages\n\nEnglish (EN)", "## Dataset Structure\n\nA train set is provided (51.2M examples) in jsonl format.", "### Data Instances", "### Data Fields", "## Dataset Creation\nWe first select 102.4M examples then concatenate every two examples to create 51.2M examples.\nThis ensures that the examples are long enough for a max token length of 512 without much padding.\nWe train the fasttext binary classifier for heuristic classification from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.\nWe first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. \nAfter this, we concatenate every two examples.", "### Source Data\nThe Pile", "#### Initial Data Collection and Normalization\nWe select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.\nWe first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization.\nThese chunks define the examples that we do data selection on, totaling 1.7B examples.\nBefore heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter.", "## Considerations for Using the Data\n\nThe dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books.", "### Dataset Curators\n\nSang Michael Xie, Shibani Santurkar\n\n\nPaper: <URL" ]
[ 35, 19, 28, 55, 8, 23, 6, 5, 165, 7, 122, 44, 20 ]
[ "passage: TAGS\n#size_categories-10M<n<100M #language-English #license-mit #arxiv-2302.03169 #region-us \n# Dataset Card for heuristic_classification-filtered-pile-50M## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: Sang Michael Xie <xie@URL>### Dataset Summary\n\nThis dataset is a subset of The Pile, selected via the heuristic classification data selection method. The target distribution for heuristic classification are the Wikipedia and BookCorpus2 subsets of The Pile.### Languages\n\nEnglish (EN)## Dataset Structure\n\nA train set is provided (51.2M examples) in jsonl format.### Data Instances### Data Fields## Dataset Creation\nWe first select 102.4M examples then concatenate every two examples to create 51.2M examples.\nThis ensures that the examples are long enough for a max token length of 512 without much padding.\nWe train the fasttext binary classifier for heuristic classification from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.\nWe first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. \nAfter this, we concatenate every two examples.### Source Data\nThe Pile#### Initial Data Collection and Normalization\nWe select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.\nWe first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization.\nThese chunks define the examples that we do data selection on, totaling 1.7B examples.\nBefore heuristic classification, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter." ]
0a3ca963ff400d020a32f6ab956b392ff3169186
# Dataset of oonuma_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls)

This is the dataset of oonuma_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls), containing 80 images and their tags.

The core tags of this character are `long_hair, breasts, brown_eyes, large_breasts, black_hair, bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size       | Download                                                                                                                                  | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:----------------------------------------------------------------------|
| raw              |       80 | 72.18 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip)                 | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       80 | 51.45 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-800.zip)                 | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |      174 | 103.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)     | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       80 | 68.73 MiB  | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |      174 | 133.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/oonuma_kurumi_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, long_sleeves, open_mouth, tears, wavy_mouth, bangs, brown_skirt, solo, white_shirt, collared_shirt, looking_at_viewer, pink_bow, plaid_skirt, very_long_hair, blue_hair, center_frills, crying, hands_up, simple_background | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, open_mouth, solo, blush, smile, cleavage, tears, hair_bow, microphone, ponytail, wavy_mouth, dress | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | long_sleeves | open_mouth | tears | wavy_mouth | bangs | brown_skirt | solo | white_shirt | collared_shirt | looking_at_viewer | pink_bow | plaid_skirt | very_long_hair | blue_hair | center_frills | crying | hands_up | simple_background | smile | cleavage | hair_bow | microphone | ponytail | dress | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------------|:-------------|:--------|:-------------|:--------|:--------------|:-------|:--------------|:-----------------|:--------------------|:-----------|:--------------|:-----------------|:------------|:----------------|:---------|:-----------|:--------------------|:--------|:-----------|:-----------|:-------------|:-----------|:--------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 1 | 12 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | | | X | | | | | | | | | | | | X | X | X | X | X | X |
CyberHarem/oonuma_kurumi_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T23:37:26+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T19:56:23+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of oonuma\_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls) ================================================================== This is the dataset of oonuma\_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls), containing 80 images and their tags. The core tags of this character are 'long\_hair, breasts, brown\_eyes, large\_breasts, black\_hair, bow', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
3848223a345acf9a7005e1fbe52214f643e50880
# Dataset of mogami_shizuka/最上静香/모가미시즈카 (THE iDOLM@STER: Million Live!)

This is the dataset of mogami_shizuka/最上静香/모가미시즈카 (THE iDOLM@STER: Million Live!), containing 458 images and their tags.

The core tags of this character are `long_hair, blue_eyes, black_hair, bangs`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size       | Download                                                                                                                                 | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:----------------------------------------------------------------------|
| raw              |      458 | 562.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_shizuka_theidolmstermillionlive/resolve/main/dataset-raw.zip)                 | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |      458 | 328.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_shizuka_theidolmstermillionlive/resolve/main/dataset-800.zip)                 | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |     1012 | 674.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_shizuka_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip)     | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |      458 | 497.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_shizuka_theidolmstermillionlive/resolve/main/dataset-1200.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |     1012 | 963.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_shizuka_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mogami_shizuka_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, dress, microphone, open_mouth, smile, hair_ornament, solo, blush, fingerless_gloves, necklace, looking_at_viewer, thighhighs | | 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, looking_at_viewer, simple_background, solo, white_background, upper_body, open_mouth | | 2 | 19 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, blue_jacket, looking_at_viewer, white_shirt, long_sleeves, neck_ribbon, simple_background, white_background, closed_mouth, green_ribbon, black_skirt, blush, open_clothes, smile, hair_intakes, blunt_bangs | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | blush, solo_focus, 2girls, brown_hair, open_mouth, looking_at_viewer, 3girls, :d | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, nurse_cap, solo, headset, looking_at_viewer, open_mouth, syringe, white_gloves, bare_shoulders, dress, lying | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, solo, looking_at_viewer, navel, medium_breasts, striped_bikini, blush, cleavage, hair_flower, necklace, blue_bikini, bracelet, frilled_bikini, smile | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, cloud, collarbone, day, looking_at_viewer, navel, outdoors, solo, blue_sky, ocean, small_breasts, blue_bikini, cowboy_shot, open_mouth, frilled_bikini, front-tie_top, smile, standing, white_bikini | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, tennis_racket, tennis_uniform, blush, solo, looking_at_viewer, ponytail, blue_hair, tennis_ball, visor_cap, white_skirt, breasts, holding, sleeveless_shirt, wristband | | 8 | 9 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, detached_collar, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, solo, wrist_cuffs, rabbit_tail, strapless_leotard, black_leotard, blush, 
medium_breasts, simple_background, white_background, black_pantyhose, bowtie, ass, from_behind, looking_back | | 9 | 13 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1boy, 1girl, blush, hetero, solo_focus, sweat, penis, breasts, cum, nipples, ass, bar_censor, fellatio, panties, pubic_hair, sex | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | microphone | open_mouth | smile | hair_ornament | solo | blush | fingerless_gloves | necklace | looking_at_viewer | thighhighs | simple_background | white_background | upper_body | blue_jacket | white_shirt | long_sleeves | neck_ribbon | closed_mouth | green_ribbon | black_skirt | open_clothes | hair_intakes | blunt_bangs | solo_focus | 2girls | brown_hair | 3girls | :d | nurse_cap | headset | syringe | white_gloves | bare_shoulders | lying | navel | medium_breasts | striped_bikini | cleavage | hair_flower | blue_bikini | bracelet | frilled_bikini | cloud | collarbone | day | outdoors | blue_sky | ocean | small_breasts | cowboy_shot | front-tie_top | standing | white_bikini | tennis_racket | tennis_uniform | ponytail | blue_hair | tennis_ball | visor_cap | white_skirt | breasts | holding | sleeveless_shirt | wristband | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | rabbit_tail | strapless_leotard | black_leotard | black_pantyhose | bowtie | ass | from_behind | looking_back | 1boy | hetero | sweat | penis | cum | nipples | bar_censor | fellatio | panties | pubic_hair | sex | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------------|:--------|:----------------|:-------|:--------|:--------------------|:-----------|:--------------------|:-------------|:--------------------|:-------------------|:-------------|:--------------|:--------------|:---------------|:--------------|:---------------|:---------------|:--------------|:---------------|:---------------|:--------------|:-------------|:---------|:-------------|:---------|:-----|:------------|:----------|:----------|:---------------|:-----------------|:--------|:--------|:-----------------|:-----------------|:-----------|:--------------|:--------------|:-----------|:-----------------|:--------|:-------------|:------|:-----------|:-----------|:--------|:----------------|:--------------|:----------------|:-----------|:---------------|:----------------|:-----------------|:-----------|:------------|:--------------|:------------|:--------------|:----------|:----------|:-------------------|:------------|:------------------|:-------------------|:----------------|:--------------|:--------------|:--------------|:--------------------|:----------------|:------------------|:---------|:------|:--------------|:---------------|:-------|:---------|:--------|:--------|:------|:----------|:-------------|:-----------|:----------|:-------------|:------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 9 | 
![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | | | X | X | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 19 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | X | | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 8 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | | | X | | | | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 12 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | X | X | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 9 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | | | X | X | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | 9 | 13 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/mogami_shizuka_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T23:43:27+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T00:37:16+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mogami\_shizuka/最上静香/모가미시즈카 (THE iDOLM@STER: Million Live!) ====================================================================== This is the dataset of mogami\_shizuka/最上静香/모가미시즈카 (THE iDOLM@STER: Million Live!), containing 458 images and their tags. The core tags of this character are 'long\_hair, blue\_eyes, black\_hair, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
b3245338cacc2aece1a0ff988a0daf4fe5d3b217
# Dataset of shinohara_rei/篠原礼 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of shinohara_rei/篠原礼 (THE iDOLM@STER: Cinderella Girls), containing 26 images and their tags.

The core tags of this character are `brown_hair, green_eyes, short_hair, breasts, earrings, large_breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             |   Images | Size      | Download                                                                                                                                  | Type       | Description                                                           |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:----------------------------------------------------------------------|
| raw              |       26 | 17.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinohara_rei_idolmastercinderellagirls/resolve/main/dataset-raw.zip)                 | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger).  |
| 800              |       26 | 14.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinohara_rei_idolmastercinderellagirls/resolve/main/dataset-800.zip)                 | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800  |       47 | 24.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinohara_rei_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip)     | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |
| 1200             |       26 | 17.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinohara_rei_idolmastercinderellagirls/resolve/main/dataset-1200.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 |       47 | 29.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinohara_rei_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip)    | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.   |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/shinohara_rei_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, cleavage, necklace, smile, bare_shoulders, blush, collarbone, dress, looking_at_viewer, arm_support, simple_background, sitting | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | necklace | smile | bare_shoulders | blush | collarbone | dress | looking_at_viewer | arm_support | simple_background | sitting | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:-----------|:--------|:-----------------|:--------|:-------------|:--------|:--------------------|:--------------|:--------------------|:----------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/shinohara_rei_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T23:45:31+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:48:53+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of shinohara\_rei/篠原礼 (THE iDOLM@STER: Cinderella Girls) ================================================================ This is the dataset of shinohara\_rei/篠原礼 (THE iDOLM@STER: Cinderella Girls), containing 26 images and their tags. The core tags of this character are 'brown\_hair, green\_eyes, short\_hair, breasts, earrings, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
5a869b6b0c0aff922643b023bfc7e819ef20f705
# Dataset Card for "en_corpora_parliament_processed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
macarious/en_corpora_parliament_processed
[ "region:us" ]
2023-09-15T23:52:40+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 309185247, "num_examples": 2051014}], "download_size": 171553321, "dataset_size": 309185247}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-21T21:42:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "en_corpora_parliament_processed" More Information needed
[ "# Dataset Card for \"en_corpora_parliament_processed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"en_corpora_parliament_processed\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"en_corpora_parliament_processed\"\n\nMore Information needed" ]
4b56b93c3a05457dd8ecd0c0ef6309d02db42ae9
# Dataset of suzumiya_seika/涼宮星花 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of suzumiya_seika/涼宮星花 (THE iDOLM@STER: Cinderella Girls), containing 33 images and their tags.

The core tags of this character are `long_hair, black_hair, purple_eyes, breasts`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 33 | 27.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzumiya_seika_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 33 | 22.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzumiya_seika_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 68 | 39.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzumiya_seika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 33 | 26.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzumiya_seika_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 68 | 45.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzumiya_seika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/suzumiya_seika_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, card_(medium), character_name, flower_(symbol), open_mouth, pink_background, jewelry, navel | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | card_(medium) | character_name | flower_(symbol) | open_mouth | pink_background | jewelry | navel | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:----------------|:-----------------|:------------------|:-------------|:------------------|:----------|:--------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X |
CyberHarem/suzumiya_seika_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-15T23:53:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:01:33+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of suzumiya\_seika/涼宮星花 (THE iDOLM@STER: Cinderella Girls)
==================================================================

This is the dataset of suzumiya\_seika/涼宮星花 (THE iDOLM@STER: Cinderella Girls), containing 33 images and their tags.

The core tags of this character are 'long\_hair, black\_hair, purple\_eyes, breasts', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
4e6285d9c8452a314de362632acde0df79af6e0d
```python
from datasets import load_dataset

# Download data for the years 1809 and 1810 at the associated article level (default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809", "1810"])

# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years")

# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years_content_regions",
                       year_list=["1809"])

# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years_content_regions")
```
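Since the card documents the loading calls but not what they return, a quick way to see the actual structure is to inspect the loaded object generically rather than assuming column names (a sketch; the split and column layout are not documented here, and recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets):

```python
from datasets import load_dataset

# load two years at the article level, then peek at the structure
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809", "1810"])

# load_dataset returns a DatasetDict here; list its splits and columns
for split_name, split in dataset.items():
    print(split_name, split.num_rows, split.column_names)
    print(split[0])  # first record of this split
    break
```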
AsAboveSoBelow/m-432
[ "license:other", "region:us" ]
2023-09-15T23:58:47+00:00
{"license": "other"}
2023-09-16T00:01:17+00:00
[]
[]
TAGS #license-other #region-us
from datasets import load_dataset

# Download data for the years 1809 and 1810 at the associated article level (default)
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years",
                       year_list=["1809", "1810"])

# Download and process data for all years at the article level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years")

# Download and process data for 1809 at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "subset_years_content_regions",
                       year_list=["1809"])

# Download and process data for all years at the scan level
dataset = load_dataset("dell-research-harvard/AmericanStories",
                       "all_years_content_regions")
[ "# Download data for the year 1809 at the associated article level (Default)\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years\",\n year_list=[\"1809\", \"1810\"]\n)", "# Download and process data for all years at the article level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years\"\n)", "# Download and process data for 1809 at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years_content_regions\",\n year_list=[\"1809\"]\n)", "# Download ad process data for all years at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years_content_regions\")" ]
[ "TAGS\n#license-other #region-us \n", "# Download data for the year 1809 at the associated article level (Default)\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years\",\n year_list=[\"1809\", \"1810\"]\n)", "# Download and process data for all years at the article level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years\"\n)", "# Download and process data for 1809 at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years_content_regions\",\n year_list=[\"1809\"]\n)", "# Download ad process data for all years at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years_content_regions\")" ]
[ 11, 62, 41, 57, 45 ]
[ "passage: TAGS\n#license-other #region-us \n# Download data for the year 1809 at the associated article level (Default)\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years\",\n year_list=[\"1809\", \"1810\"]\n)# Download and process data for all years at the article level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years\"\n)# Download and process data for 1809 at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"subset_years_content_regions\",\n year_list=[\"1809\"]\n)# Download ad process data for all years at the scan level\ndataset = load_dataset(\"dell-research-harvard/AmericanStories\",\n \"all_years_content_regions\")" ]
757722560986b4158288ea02dc8a505ce8ebb6c0
# Dataset of akanishi_erika/赤西瑛梨華 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of akanishi_erika/赤西瑛梨華 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags.

The core tags of this character are `green_eyes, long_hair, braid, brown_hair, breasts, twin_braids, hair_ornament, large_breasts, hairclip`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 44 | 30.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akanishi_erika_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 44 | 23.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akanishi_erika_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 86 | 43.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akanishi_erika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 44 | 28.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akanishi_erika_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 86 | 52.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akanishi_erika_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/akanishi_erika_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, open_mouth, smile, solo, looking_at_viewer, cleavage, black_hair, blush, hair_flower, sweat, white_background | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, card_(medium), character_name, flower_(symbol), pink_background, smile, solo, open_mouth, skirt, bracelet | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | open_mouth | smile | solo | looking_at_viewer | cleavage | black_hair | blush | hair_flower | sweat | white_background | card_(medium) | character_name | flower_(symbol) | pink_background | skirt | bracelet | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------|:-------|:--------------------|:-----------|:-------------|:--------|:--------------|:--------|:-------------------|:----------------|:-----------------|:------------------|:------------------|:--------|:-----------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | | | | | X | X | X | X | X | X |
CyberHarem/akanishi_erika_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:03:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:36:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of akanishi\_erika/赤西瑛梨華 (THE iDOLM@STER: Cinderella Girls)
===================================================================

This is the dataset of akanishi\_erika/赤西瑛梨華 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags.

The core tags of this character are 'green\_eyes, long\_hair, braid, brown\_hair, breasts, twin\_braids, hair\_ornament, large\_breasts, hairclip', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
6018bdda76a92b11e5d5fbb30fbb73e4f7b7d0a0
# Dataset of aihara_yukino/相原雪乃 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of aihara_yukino/相原雪乃 (THE iDOLM@STER: Cinderella Girls), containing 28 images and their tags.

The core tags of this character are `brown_hair, long_hair, braid, brown_eyes, single_braid, very_long_hair, breasts, hat, bow`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 28 | 20.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aihara_yukino_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 28 | 17.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aihara_yukino_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 49 | 27.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aihara_yukino_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 28 | 20.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aihara_yukino_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 49 | 30.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aihara_yukino_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/aihara_yukino_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, dress, large_breasts, looking_at_viewer, necklace, cleavage, gloves, hair_bow, sitting, teacup | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, card_(medium), character_name, flower_(symbol), open_mouth, dress, gloves, hair_ornament, pink_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | dress | large_breasts | looking_at_viewer | necklace | cleavage | gloves | hair_bow | sitting | teacup | card_(medium) | character_name | flower_(symbol) | open_mouth | hair_ornament | pink_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------|:----------------|:--------------------|:-----------|:-----------|:---------|:-----------|:----------|:---------|:----------------|:-----------------|:------------------|:-------------|:----------------|:------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | | | | X | | | | X | X | X | X | X | X |
CyberHarem/aihara_yukino_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:09:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:21:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of aihara\_yukino/相原雪乃 (THE iDOLM@STER: Cinderella Girls)
=================================================================

This is the dataset of aihara\_yukino/相原雪乃 (THE iDOLM@STER: Cinderella Girls), containing 28 images and their tags.

The core tags of this character are 'brown\_hair, long\_hair, braid, brown\_eyes, single\_braid, very\_long\_hair, breasts, hat, bow', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
82707d38f4d5fccf1e5cd00ee9a806041a682918
# Dataset of imura_setsuna/井村雪菜 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of imura_setsuna/井村雪菜 (THE iDOLM@STER: Cinderella Girls), containing 24 images and their tags.

The core tags of this character are `brown_hair, long_hair, aqua_eyes, blue_eyes, green_eyes, hat`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 24 | 18.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imura_setsuna_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 24 | 12.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imura_setsuna_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 44 | 22.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imura_setsuna_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 24 | 17.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imura_setsuna_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 44 | 30.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/imura_setsuna_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/imura_setsuna_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, jewelry, looking_at_viewer, blush, skirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | jewelry | looking_at_viewer | blush | skirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:----------|:--------------------|:--------|:--------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X |
CyberHarem/imura_setsuna_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:13:35+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:59:32+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of imura\_setsuna/井村雪菜 (THE iDOLM@STER: Cinderella Girls)
=================================================================

This is the dataset of imura\_setsuna/井村雪菜 (THE iDOLM@STER: Cinderella Girls), containing 24 images and their tags.

The core tags of this character are 'brown\_hair, long\_hair, aqua\_eyes, blue\_eyes, green\_eyes, hat', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
a4adb5ad1de707e394e705db0081679a9143290e
# Dataset of yaguchi_miu/矢口美羽 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of yaguchi_miu/矢口美羽 (THE iDOLM@STER: Cinderella Girls), containing 30 images and their tags.

The core tags of this character are `black_hair, brown_eyes, short_hair, hair_bun, single_hair_bun`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 30 | 18.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yaguchi_miu_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 30 | 15.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yaguchi_miu_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 52 | 25.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yaguchi_miu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 30 | 17.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yaguchi_miu_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 52 | 28.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yaguchi_miu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yaguchi_miu_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, smile, solo, star_(symbol), gloves, hair_ornament, microphone, open_mouth, thighhighs, jewelry, one_eye_closed | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | star_(symbol) | gloves | hair_ornament | microphone | open_mouth | thighhighs | jewelry | one_eye_closed | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:----------------|:---------|:----------------|:-------------|:-------------|:-------------|:----------|:-----------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/yaguchi_miu_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:21:55+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:39:42+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of yaguchi\_miu/矢口美羽 (THE iDOLM@STER: Cinderella Girls)
===============================================================

This is the dataset of yaguchi\_miu/矢口美羽 (THE iDOLM@STER: Cinderella Girls), containing 30 images and their tags.

The core tags of this character are 'black\_hair, brown\_eyes, short\_hair, hair\_bun, single\_hair\_bun', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d31e8f92ad7548f672162532647f3d58de3cf368
# Dataset of sena_shiori/瀬名詩織 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of sena_shiori/瀬名詩織 (THE iDOLM@STER: Cinderella Girls), containing 23 images and their tags.

The core tags of this character are `long_hair, brown_eyes, black_hair, hat`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 23 | 17.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sena_shiori_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 23 | 16.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sena_shiori_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 43 | 27.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sena_shiori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 23 | 17.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sena_shiori_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 43 | 28.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sena_shiori_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/sena_shiori_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, dress, smile, card_(medium), character_name, gem_(symbol), looking_at_viewer, blue_background, necklace | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | dress | smile | card_(medium) | character_name | gem_(symbol) | looking_at_viewer | blue_background | necklace | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------|:----------------|:-----------------|:---------------|:--------------------|:------------------|:-----------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X |
CyberHarem/sena_shiori_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:28:59+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:15:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sena\_shiori/瀬名詩織 (THE iDOLM@STER: Cinderella Girls)
===============================================================

This is the dataset of sena\_shiori/瀬名詩織 (THE iDOLM@STER: Cinderella Girls), containing 23 images and their tags.

The core tags of this character are 'long\_hair, brown\_eyes, black\_hair, hat', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
4f2fb1c97186836b346422fee37e4ac475908022
# Dataset Card for "1d35978a" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/1d35978a
[ "region:us" ]
2023-09-16T00:30:02+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 163, "num_examples": 10}], "download_size": 1301, "dataset_size": 163}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T00:30:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "1d35978a" More Information needed
[ "# Dataset Card for \"1d35978a\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"1d35978a\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"1d35978a\"\n\nMore Information needed" ]
85a34054e904293feabaeb458921a68d88f144a5
# Dataset of oota_yuu/太田優/오오타유 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of oota_yuu/太田優/오오타유 (THE iDOLM@STER: Cinderella Girls), containing 23 images and their tags.

The core tags of this character are `brown_hair, brown_eyes, short_hair, breasts, medium_breasts, mole, mole_on_breast`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 23 | 17.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oota_yuu_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 23 | 13.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oota_yuu_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 48 | 25.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oota_yuu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 23 | 16.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oota_yuu_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 48 | 29.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oota_yuu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/oota_yuu_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, cleavage, open_mouth, necklace, blush, flower, one_eye_closed | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | cleavage | open_mouth | necklace | blush | flower | one_eye_closed | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-----------|:-------------|:-----------|:--------|:---------|:-----------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X |
CyberHarem/oota_yuu_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:34:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:16:10+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of oota\_yuu/太田優/오오타유 (THE iDOLM@STER: Cinderella Girls)
================================================================

This is the dataset of oota\_yuu/太田優/오오타유 (THE iDOLM@STER: Cinderella Girls), containing 23 images and their tags.

The core tags of this character are 'brown\_hair, brown\_eyes, short\_hair, breasts, medium\_breasts, mole, mole\_on\_breast', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
dbdfe10c820d2a2e75e96b72cc9e713a1e98e6f5
# Dataset Card for "TUT-urban-acoustic-scenes-2018-development-16bit" ## Dataset Description - **Homepage: https://zenodo.org/record/1228142** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact: Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)** ### Dataset Summary TUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes: Airport - airport Indoor shopping mall - shopping_mall Metro station - metro_station Pedestrian street - street_pedestrian Public square - public_square Street with medium level of traffic - street_traffic Travelling by a tram - tram Travelling by a bus - bus Travelling by an underground metro - metro Urban park - park Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio. This is the 16 bit version of the original dataset. The dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018. The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND. ### Supported Tasks and Leaderboards - `audio-classification`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). - The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard - which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name). ## Dataset Structure ### Data Instances ``` {'file_name': 'audio/airport-barcelona-0-0-a.wav', 'label': 'airport', 'audio': {'path': 'airport-barcelona-0-0-a.wav', 'array': array([-2.13623047e-04, -1.37329102e-04, -2.13623047e-04, ..., 3.05175781e-05, -6.10351562e-05, -6.10351562e-05]), 'sampling_rate': 48000}, 'city': 'barcelona', 'location_id': '0'} ``` ### Data Fields - `file_name`: name of the audio file - `label`: acoustic scene label from the 10 class set, - `location_id`: city-location id '0', - `city`: name of the city where the audio was recorded Filenames of the dataset have the following pattern: [scene label]-[city]-[location id]-[segment id]-[device id].wav ### Data Splits A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option. 
| Scene class | Train / Segments | Train / Locations | Test / Segments | Test / Locations |
| ------------------ | ---------------- | ----------------- | --------------- | ---------------- |
| Airport | 599 | 15 | 265 | 7 |
| Bus | 622 | 26 | 242 | 10 |
| Metro | 603 | 20 | 261 | 9 |
| Metro station | 605 | 28 | 259 | 12 |
| Park | 622 | 18 | 242 | 7 |
| Public square | 648 | 18 | 216 | 6 |
| Shopping mall | 585 | 16 | 279 | 6 |
| Street, pedestrian | 617 | 20 | 247 | 8 |
| Street, traffic | 618 | 18 | 246 | 7 |
| Tram | 603 | 24 | 261 | 11 |
| **Total** | **6122** | **203** | **2518** | **83** |

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. The collected audio material was cut into segments of 10 seconds in length.

The equipment used for recording consists of a binaural [Soundman OKM II Klassik/studio A3](http://www.soundman.de/en/products/) electret in-ear microphone and a [Zoom F8](https://www.zoom.co.jp/products/handy-recorder/zoom-f8-multitrack-field-recorder) audio recorder using a 48 kHz sampling rate and 24-bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to a minimum.

### Annotations

#### Annotation process

Post-processing of the recorded audio involves aspects related to the privacy of recorded individuals, and possible errors in the recording process. Some interference from mobile phones is audible, but is considered part of the real-world recording process.

#### Who are the annotators?

* Ronal Bejarano Rodriguez
* Eemi Fagerlund
* Aino Koskimies
* Toni Heittola

### Personal and Sensitive Information

The material was screened for content, and segments containing close microphone conversation were eliminated.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)
Annamaria Mesaros ([email protected], http://www.cs.tut.fi/~mesaros/)
Tuomas Virtanen ([email protected], http://www.cs.tut.fi/~tuomasv/)

### Licensing Information

Copyright (c) 2018 Tampere University of Technology and its licensors
All rights reserved.
Permission is hereby granted, without written agreement and without license or royalty fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document and composed of audio and metadata. This grant is only for experimental and non-commercial purposes, provided that the copyright notice in its entirety appear in all copies of this Work, and the original source of this Work, (Audio Research Group from Laboratory of Signal Processing at Tampere University of Technology), is acknowledged in any publication that reports research using this Work. Any commercial use of the Work or any part thereof is strictly prohibited. Commercial use include, but is not limited to:

- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

### Citation Information

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1228142.svg)](https://doi.org/10.5281/zenodo.1228142)

### Contributions

Thanks to [@wetdog](https://github.com/wetdog) for adding this dataset.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
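For quick inspection, the repository can also be loaded directly with the `datasets` library. The following is a minimal sketch, assuming the default configuration and the train/test split names declared in the repository metadata; streaming avoids downloading the full ~16 GB archive up front:

```python
from itertools import islice

from datasets import load_dataset

# Stream the training split of the 16-bit development set from the Hub.
ds = load_dataset(
    "wetdog/TUT-urban-acoustic-scenes-2018-development-16bit",
    split="train",
    streaming=True,
)

# Peek at a few examples: each one carries the decoded waveform plus the
# scene label, city, and location id described under "Data Fields".
for example in islice(ds, 3):
    audio = example["audio"]
    print(example["file_name"], example["label"], example["city"],
          audio["sampling_rate"], len(audio["array"]))
```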
wetdog/TUT-urban-acoustic-scenes-2018-development-16bit
[ "region:us" ]
2023-09-16T00:39:10+00:00
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "city", "dtype": "string"}, {"name": "location_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11755015136.34, "num_examples": 6122}, {"name": "test", "num_bytes": 4834872627.026, "num_examples": 2518}], "download_size": 15955243030, "dataset_size": 16589887763.366001}}
2023-09-19T20:43:49+00:00
[]
[]
TAGS #region-us
Dataset Card for "TUT-urban-acoustic-scenes-2018-development-16bit" =================================================================== Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Leaderboard: * Point of Contact: Toni Heittola (toni.heittola@URL, URL ### Dataset Summary TUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes: ``` Airport - airport Indoor shopping mall - shopping_mall Metro station - metro_station Pedestrian street - street_pedestrian Public square - public_square Street with medium level of traffic - street_traffic Travelling by a tram - tram Travelling by a bus - bus Travelling by an underground metro - metro Urban park - park ``` Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio. This is the 16 bit version of the original dataset. The dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018. The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND. ### Supported Tasks and Leaderboards * 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name. * The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard * which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name. Dataset Structure ----------------- ### Data Instances ### Data Fields * 'file\_name': name of the audio file * 'label': acoustic scene label from the 10 class set, * 'location\_id': city-location id '0', * 'city': name of the city where the audio was recorded Filenames of the dataset have the following pattern: [scene label]-[city]-[location id]-[segment id]-[device id].wav ### Data Splits A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option. Dataset Creation ---------------- ### Source Data #### Initial Data Collection and Normalization The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length. The equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum. 
### Annotations #### Annotation process Post-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process. #### Who are the annotators? * Ronal Bejarano Rodriguez * Eemi Fagerlund * Aino Koskimies * Toni Heittola ### Personal and Sensitive Information The material was screened for content, and segments containing close microphone conversation were eliminated. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Toni Heittola (toni.heittola@URL, URL Annamaria Mesaros (annamaria.mesaros@URL, URL Tuomas Virtanen (tuomas.virtanen@URL, URL ### Licensing Information Copyright (c) 2018 Tampere University of Technology and its licensors All rights reserved. Permission is hereby granted, without written agreement and without license or royalty fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document and composed of audio and metadata. This grant is only for experimental and non-commercial purposes, provided that the copyright notice in its entirety appear in all copies of this Work, and the original source of this Work, (Audio Research Group from Laboratory of Signal Processing at Tampere University of Technology), is acknowledged in any publication that reports research using this Work. Any commercial use of the Work or any part thereof is strictly prohibited. Commercial use include, but is not limited to: * selling or reproducing the Work * selling or distributing the results or content achieved by use of the Work * providing services by using the Work. IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. ![DOI](URL ### Contributions Thanks to @wtdog for adding this dataset. More Information needed
[ "### Dataset Summary\n\n\nTUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes:\n\n\n\n```\nAirport - airport\nIndoor shopping mall - shopping_mall\nMetro station - metro_station\nPedestrian street - street_pedestrian\nPublic square - public_square\nStreet with medium level of traffic - street_traffic\nTravelling by a tram - tram\nTravelling by a bus - bus\nTravelling by an underground metro - metro\nUrban park - park\n\n```\n\nEach acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio. This is the 16 bit version\nof the original dataset.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.", "### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'file\\_name': name of the audio file\n* 'label': acoustic scene label from the 10 class set,\n* 'location\\_id': city-location id '0',\n* 'city': name of the city where the audio was recorded\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav", "### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.", "### Annotations", "#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. 
Some interferences from mobile phones are audible, but are considered part of real-world recording process.", "#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola", "### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL", "### Licensing Information\n\n\nCopyright (c) 2018 Tampere University of Technology and its licensors\nAll rights reserved.\nPermission is hereby granted, without written agreement and without license or royalty\nfees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document\nand composed of audio and metadata. This grant is only for experimental and non-commercial\npurposes, provided that the copyright notice in its entirety appear in all copies of this Work,\nand the original source of this Work, (Audio Research Group from Laboratory of Signal\nProcessing at Tampere University of Technology),\nis acknowledged in any publication that reports research using this Work.\nAny commercial use of the Work or any part thereof is strictly prohibited.\nCommercial use include, but is not limited to:\n\n\n* selling or reproducing the Work\n* selling or distributing the results or content achieved by use of the Work\n* providing services by using the Work.\n\n\nIN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY\nFOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE\nOF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS\nLICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY\nWARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN \"AS IS\" BASIS, AND\nTHE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,\nUPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\n\n![DOI](URL", "### Contributions\n\n\nThanks to @wtdog for adding this dataset.\n\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "### Dataset Summary\n\n\nTUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes:\n\n\n\n```\nAirport - airport\nIndoor shopping mall - shopping_mall\nMetro station - metro_station\nPedestrian street - street_pedestrian\nPublic square - public_square\nStreet with medium level of traffic - street_traffic\nTravelling by a tram - tram\nTravelling by a bus - bus\nTravelling by an underground metro - metro\nUrban park - park\n\n```\n\nEach acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio. This is the 16 bit version\nof the original dataset.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.", "### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'file\\_name': name of the audio file\n* 'label': acoustic scene label from the 10 class set,\n* 'location\\_id': city-location id '0',\n* 'city': name of the city where the audio was recorded\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav", "### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.", "### Annotations", "#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. 
Some interferences from mobile phones are audible, but are considered part of real-world recording process.", "#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola", "### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL", "### Licensing Information\n\n\nCopyright (c) 2018 Tampere University of Technology and its licensors\nAll rights reserved.\nPermission is hereby granted, without written agreement and without license or royalty\nfees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document\nand composed of audio and metadata. This grant is only for experimental and non-commercial\npurposes, provided that the copyright notice in its entirety appear in all copies of this Work,\nand the original source of this Work, (Audio Research Group from Laboratory of Signal\nProcessing at Tampere University of Technology),\nis acknowledged in any publication that reports research using this Work.\nAny commercial use of the Work or any part thereof is strictly prohibited.\nCommercial use include, but is not limited to:\n\n\n* selling or reproducing the Work\n* selling or distributing the results or content achieved by use of the Work\n* providing services by using the Work.\n\n\nIN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY\nFOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE\nOF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS\nLICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY\nWARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN \"AS IS\" BASIS, AND\nTHE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,\nUPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\n\n![DOI](URL", "### Contributions\n\n\nThanks to @wtdog for adding this dataset.\n\n\nMore Information needed" ]
[ 6, 197, 148, 6, 106, 111, 4, 187, 5, 57, 33, 39, 7, 8, 14, 50, 468, 20 ]
[ "passage: TAGS\n#region-us \n### Dataset Summary\n\n\nTUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes:\n\n\n\n```\nAirport - airport\nIndoor shopping mall - shopping_mall\nMetro station - metro_station\nPedestrian street - street_pedestrian\nPublic square - public_square\nStreet with medium level of traffic - street_traffic\nTravelling by a tram - tram\nTravelling by a bus - bus\nTravelling by an underground metro - metro\nUrban park - park\n\n```\n\nEach acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio. This is the 16 bit version\nof the original dataset.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields\n\n\n* 'file\\_name': name of the audio file\n* 'label': acoustic scene label from the 10 class set,\n* 'location\\_id': city-location id '0',\n* 'city': name of the city where the audio was recorded\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav", "passage: ### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------### Source Data#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.### Annotations#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. 
Some interferences from mobile phones are audible, but are considered part of real-world recording process.#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL" ]
2bd121e93db6ca5942340c0b71f96176117092a1
# Dataset of asano_fuuka/浅野風香 (THE iDOLM@STER: Cinderella Girls) This is the dataset of asano_fuuka/浅野風香 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags. The core tags of this character are `brown_eyes, glasses, black_hair, breasts, twintails, large_breasts, low_twintails, short_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 44 | 27.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asano_fuuka_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 44 | 23.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asano_fuuka_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 84 | 42.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asano_fuuka_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 44 | 26.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asano_fuuka_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 84 | 48.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asano_fuuka_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/asano_fuuka_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, looking_at_viewer, solo, open_mouth, upper_body, white_background, smile, jimiko, shirt, simple_background, thick_eyebrows |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | solo | open_mouth | upper_body | white_background | smile | jimiko | shirt | simple_background | thick_eyebrows |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-------|:-------------|:-------------|:-------------------|:--------|:---------|:--------|:--------------------|:-----------------|
| 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X |
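Besides the raw package, the IMG+TXT packages listed above pair each image with a same-named `.txt` file of comma-separated tags. Below is a minimal sketch for browsing one of them; the flat image/tag-file layout is an assumption based on the package descriptions, not something this card spells out:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download and extract the 800px IMG+TXT package
zip_file = hf_hub_download(
    repo_id='CyberHarem/asano_fuuka_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with the tag list from its sibling .txt file
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
        continue
    tag_path = os.path.join(dataset_dir, stem + '.txt')
    if not os.path.exists(tag_path):
        continue
    with open(tag_path, encoding='utf-8') as f:
        tags = [t.strip() for t in f.read().split(',')]
    print(name, tags)
```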
CyberHarem/asano_fuuka_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:45:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:23:01+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of asano\_fuuka/浅野風香 (THE iDOLM@STER: Cinderella Girls) =============================================================== This is the dataset of asano\_fuuka/浅野風香 (THE iDOLM@STER: Cinderella Girls), containing 44 images and their tags. The core tags of this character are 'brown\_eyes, glasses, black\_hair, breasts, twintails, large\_breasts, low\_twintails, short\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
94d6ca8da4db57a1edae583f820a98ce7b13e0c3
NetEval is a NetOps evaluation suite for foundation models, consisting of 5269 multiple-choice questions. Please check [our paper](https://arxiv.org/abs/2309.05557) for more details about NetEval. We hope NetEval can help developers track progress and analyze the NetOps capability of their models.

## Citation

Please cite our paper if you use our dataset.

```
@misc{miao2023empirical,
      title={An Empirical Study of NetOps Capability of Pre-Trained Large Language Models},
      author={Yukai Miao and Yu Bai and Li Chen and Dan Li and Haifeng Sun and Xizheng Wang and Ziqiu Luo and Dapeng Sun and Xiuting Xu and Qi Zhang and Chao Xiang and Xinchi Li},
      year={2023},
      eprint={2309.05557},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
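A quick way to get started is to pull the exam down with the `datasets` library and inspect its schema before building prompts. This is a sketch only, since the card does not document config or column names; the split and column access below is illustrative:

```python
from datasets import load_dataset

# Load NetEval from the Hub; a config name may be required depending on
# how the exam files are organized in the repository.
ds = load_dataset("NASP/neteval-exam")

# Inspect the available splits and columns before formatting questions
# into multiple-choice prompts for a model under evaluation.
print(ds)
for split_name, split in ds.items():
    print(split_name, split.column_names, len(split))
```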
NASP/neteval-exam
[ "task_categories:text-classification", "task_categories:question-answering", "task_categories:multiple-choice", "size_categories:10K<n<100K", "language:en", "language:zh", "license:cc-by-nc-sa-4.0", "arxiv:2309.05557", "region:us" ]
2023-09-16T00:55:01+00:00
{"language": ["en", "zh"], "license": "cc-by-nc-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "multiple-choice"], "pretty_name": "Netops"}
2023-09-22T01:56:47+00:00
[ "2309.05557" ]
[ "en", "zh" ]
TAGS #task_categories-text-classification #task_categories-question-answering #task_categories-multiple-choice #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-nc-sa-4.0 #arxiv-2309.05557 #region-us
NetEval is a NetOps evaluation suite for foundation models, consisting of 5269 multi-choice questions. Please check our paper for more details about NetEval. We hope NetEval could help developers track the progress and analyze the NetOps ability of their models. Please cite our paper if you use our dataset.
[]
[ "TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-multiple-choice #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-nc-sa-4.0 #arxiv-2309.05557 #region-us \n" ]
[ 83 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-multiple-choice #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-nc-sa-4.0 #arxiv-2309.05557 #region-us \n" ]
546e61ab6f9d5363563d4bebd1db622f596f5f00
# Dataset of ueda_suzuho/上田鈴帆 (THE iDOLM@STER: Cinderella Girls) This is the dataset of ueda_suzuho/上田鈴帆 (THE iDOLM@STER: Cinderella Girls), containing 50 images and their tags. The core tags of this character are `short_hair, brown_hair, brown_eyes, ahoge`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 50 | 33.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ueda_suzuho_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 50 | 28.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ueda_suzuho_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 96 | 48.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ueda_suzuho_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 50 | 32.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ueda_suzuho_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 96 | 54.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ueda_suzuho_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ueda_suzuho_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, simple_background, white_background, open_mouth, sweat, grin |
| 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, character_name, solo, card_(medium), sun_symbol, smile, costume, orange_background, star_(symbol), open_mouth, red_hair |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | simple_background | white_background | open_mouth | sweat | grin | character_name | card_(medium) | sun_symbol | smile | costume | orange_background | star_(symbol) | red_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------------|:-------------------|:-------------|:--------|:-------|:-----------------|:----------------|:-------------|:--------|:----------|:--------------------|:----------------|:-----------|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 17 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | X | | | X | X | X | X | X | X | X | X |
CyberHarem/ueda_suzuho_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T00:59:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T21:04:49+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ueda\_suzuho/上田鈴帆 (THE iDOLM@STER: Cinderella Girls) =============================================================== This is the dataset of ueda\_suzuho/上田鈴帆 (THE iDOLM@STER: Cinderella Girls), containing 50 images and their tags. The core tags of this character are 'short\_hair, brown\_hair, brown\_eyes, ahoge', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
dd973f921dd43fae7a9c555e99dd179bc5315547
# Dataset Card for "babylm-10M-wikipedia" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-wikipedia
[ "region:us" ]
2023-09-16T01:06:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6132092, "num_examples": 19636}, {"name": "valid", "num_bytes": 7089834, "num_examples": 23526}, {"name": "test", "num_bytes": 7569053, "num_examples": 26870}], "download_size": 12522412, "dataset_size": 20790979}}
2023-09-16T01:08:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-10M-wikipedia" More Information needed
[ "# Dataset Card for \"babylm-10M-wikipedia\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-10M-wikipedia\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-10M-wikipedia\"\n\nMore Information needed" ]
156227df5bafc554792577ec2fff414fbef3314a
# Dataset Card for "babylm-10M-qed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-qed
[ "region:us" ]
2023-09-16T01:07:10+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6083722, "num_examples": 99959}, {"name": "valid", "num_bytes": 5678320, "num_examples": 94976}, {"name": "test", "num_bytes": 7027994, "num_examples": 114964}], "download_size": 11484726, "dataset_size": 18790036}}
2023-09-16T01:07:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-10M-qed" More Information needed
[ "# Dataset Card for \"babylm-10M-qed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-10M-qed\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-10M-qed\"\n\nMore Information needed" ]
1efeff037e2ba2fd8182537b312d8a2131d5d4b3
# Dataset Card for "babylm-10M-bnc_spoken" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-bnc_spoken
[ "region:us" ]
2023-09-16T01:07:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4764585, "num_examples": 89932}, {"name": "valid", "num_bytes": 4721951, "num_examples": 89921}, {"name": "test", "num_bytes": 5165775, "num_examples": 99951}], "download_size": 8864201, "dataset_size": 14652311}}
2023-09-16T01:07:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-10M-bnc_spoken" More Information needed
[ "# Dataset Card for \"babylm-10M-bnc_spoken\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-10M-bnc_spoken\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-10M-bnc_spoken\"\n\nMore Information needed" ]
9b6d004cebbf229abec1a92e47736210cd1038bc
# Dataset Card for "babylm-10M-simple_wikipedia" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-simple_wikipedia
[ "region:us" ]
2023-09-16T01:09:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9270331, "num_examples": 56617}, {"name": "valid", "num_bytes": 9591764, "num_examples": 60977}, {"name": "test", "num_bytes": 11102812, "num_examples": 66392}], "download_size": 18016430, "dataset_size": 29964907}}
2023-09-16T01:09:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-10M-simple_wikipedia" More Information needed
[ "# Dataset Card for \"babylm-10M-simple_wikipedia\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-10M-simple_wikipedia\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-10M-simple_wikipedia\"\n\nMore Information needed" ]
3d2ef8053dccccd16ea096407f283f453cef1ac8
# Dataset Card for "cs323_densepred_seg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shariqfarooq/cs323_densepred_seg
[ "region:us" ]
2023-09-16T01:13:29+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 170701125.0, "num_examples": 1464}, {"name": "val", "num_bytes": 170428139.75, "num_examples": 1449}], "download_size": 341307796, "dataset_size": 341129264.75}}
2023-09-16T01:20:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cs323_densepred_seg" More Information needed
[ "# Dataset Card for \"cs323_densepred_seg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cs323_densepred_seg\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cs323_densepred_seg\"\n\nMore Information needed" ]
db594bff0fd36f639ac1967fe10fd4bd86468f96
# Dataset of rookie_trainer (THE iDOLM@STER: Cinderella Girls) This is the dataset of rookie_trainer (THE iDOLM@STER: Cinderella Girls), containing 67 images and their tags. The core tags of this character are `black_hair, hair_ornament, hairclip, long_hair, brown_eyes, ponytail, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 67 | 55.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 67 | 38.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 132 | 71.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 67 | 51.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 132 | 93.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rookie_trainer_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/rookie_trainer_idolmastercinderellagirls', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, shorts, smile, wristband, looking_at_viewer, blush, watch, black_eyes, bottle |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, navel, shirt_lift, solo, black_eyes, looking_at_viewer, panties, pants_pull, wristband, blush, on_back, open_mouth, shorts_pull, small_breasts, collarbone, lifted_by_self, nipples, sports_bra |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | shorts | smile | wristband | looking_at_viewer | blush | watch | black_eyes | bottle | navel | shirt_lift | panties | pants_pull | on_back | open_mouth | shorts_pull | small_breasts | collarbone | lifted_by_self | nipples | sports_bra |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------|:--------|:------------|:--------------------|:--------|:--------|:-------------|:---------|:--------|:-------------|:----------|:-------------|:----------|:-------------|:--------------|:----------------|:-------------|:-----------------|:----------|:-------------|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/rookie_trainer_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T01:16:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:47:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of rookie\_trainer (THE iDOLM@STER: Cinderella Girls) ============================================================= This is the dataset of rookie\_trainer (THE iDOLM@STER: Cinderella Girls), containing 67 images and their tags. The core tags of this character are 'black\_hair, hair\_ornament, hairclip, long\_hair, brown\_eyes, ponytail, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
4b66ffcf6b6f014eb9a4de1d43f953945f42977c
# Dataset of ibuki_tsubasa/伊吹翼/이부키츠바사 (THE iDOLM@STER: Million Live!)

This is the dataset of ibuki_tsubasa/伊吹翼/이부키츠바사 (THE iDOLM@STER: Million Live!), containing 483 images and their tags.

The core tags of this character are `short_hair, ahoge, blonde_hair, breasts, bangs, pink_eyes, red_eyes, medium_breasts, hair_between_eyes, brown_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download | Type       | Description |
|:-----------------|-------:|:-----------|:---------|:-----------|:------------|
| raw              | 483    | 614.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibuki_tsubasa_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 483    | 363.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibuki_tsubasa_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  | 1193   | 784.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibuki_tsubasa_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 483    | 552.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibuki_tsubasa_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1193   | 1.07 GiB   | [Download](https://huggingface.co/datasets/CyberHarem/ibuki_tsubasa_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ibuki_tsubasa_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, collarbone, looking_at_viewer, simple_background, solo, white_background, blush, completely_nude, navel, nipples, pussy, :d, arms_behind_back, closed_mouth, female_pubic_hair, open_mouth, standing, upper_body |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, looking_at_viewer, short_sleeves, solo, raglan_sleeves, simple_background, smile, upper_body, collarbone, navel, tongue_out, white_background, white_shirt, closed_mouth, midriff, yellow_shirt |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_sky, cloud, day, looking_at_viewer, outdoors, smile, blush, navel, solo, white_bikini, ocean, open_mouth, beach, cleavage, wet |
| 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, looking_at_viewer, solo, blush, cheerleader, pom_pom_(cheerleading), smile, midriff, pleated_skirt, thighhighs, navel, crop_top_overhang, miniskirt, open_mouth, short_sleeves, white_skirt, yellow_shirt, holding, jewelry, simple_background, star_(symbol), sweat, white_gloves |
| 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, looking_at_viewer, shirt_lift, solo, lifted_by_self, short_sleeves, upper_body, navel, simple_background, bra_lift, hair_flaps, large_breasts, nipples, sweatdrop, underboob, white_background, white_bra, white_shirt, yellow_bra, yellow_shirt |
| 5 | 13 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | fingerless_gloves, looking_at_viewer, navel, short_sleeves, 1girl, black_gloves, hair_flaps, pleated_skirt, serafuku, white_shirt, white_skirt, yellow_cape, sailor_collar, solo, midriff, miniskirt, blush, open_mouth, smile, yellow_neckerchief, black_thighhighs, v-shaped_eyebrows, sidelocks, stomach, white_cape |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, choker, hair_ornament, long_sleeves, looking_at_viewer, midriff, navel, open_jacket, solo, white_jacket, blush, fishnets, cleavage, collarbone, earrings, nail_polish, pleated_skirt, see-through, thighhighs, black_skirt, checkered_clothes, crop_top, fingerless_gloves, grin, hair_flaps, large_breasts, miniskirt, mismatched_legwear, open_mouth, white_background |
| 7 | 17 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1boy, 1girl, blush, hetero, solo_focus, open_mouth, sweat, nipples, penis, sex, smile, vaginal, girl_on_top, completely_nude, female_pubic_hair, looking_at_viewer, navel, spread_legs, bar_censor, cowgirl_position, cum_in_pussy, heart, mosaic_censoring |
| 8 | 7 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | blush, collarbone, large_breasts, looking_at_viewer, 1girl, cleavage, solo, upper_body, neck_bell, open_mouth, simple_background, smile, animal_ears, black_bikini, choker, cow_print, gloves, horns, navel |
| 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, looking_at_viewer, solo, blush, cleavage, fake_animal_ears, playboy_bunny, rabbit_ears, simple_background, white_background, detached_collar, rabbit_tail, bare_shoulders, strapless_leotard, wrist_cuffs, large_breasts, smile, cowboy_shot, fake_tail, black_leotard, covered_navel, hair_flaps, open_mouth, pantyhose |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | collarbone | looking_at_viewer | simple_background | solo | white_background | blush | completely_nude | navel | nipples | pussy | :d | arms_behind_back | closed_mouth | female_pubic_hair | open_mouth | standing | upper_body | short_sleeves | raglan_sleeves | smile | tongue_out | white_shirt | midriff | yellow_shirt | blue_sky | cloud | day | outdoors | white_bikini | ocean | beach | cleavage | wet | cheerleader | pom_pom_(cheerleading) | pleated_skirt | thighhighs | crop_top_overhang | miniskirt | white_skirt | holding | jewelry | star_(symbol) | sweat | white_gloves | shirt_lift | lifted_by_self | bra_lift | hair_flaps | large_breasts | sweatdrop | underboob | white_bra | yellow_bra | fingerless_gloves | black_gloves | serafuku | yellow_cape | sailor_collar | yellow_neckerchief | black_thighhighs | v-shaped_eyebrows | sidelocks | stomach | white_cape | choker | hair_ornament | long_sleeves | open_jacket | white_jacket | fishnets | earrings | nail_polish | see-through | black_skirt | checkered_clothes | crop_top | grin | mismatched_legwear | 1boy | hetero | solo_focus | penis | sex | vaginal | girl_on_top | spread_legs | bar_censor | cowgirl_position | cum_in_pussy | heart | mosaic_censoring | neck_bell | animal_ears | black_bikini | cow_print | gloves | horns | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | rabbit_tail | bare_shoulders | strapless_leotard | wrist_cuffs | cowboy_shot | fake_tail | black_leotard | covered_navel | pantyhose |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | | | | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | | X | | X | | X | | | | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | X | | X | | X | | | | | | | X | | | X | | X | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | X | X | | X | X | | | | | | | | X | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 13 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | X | | X | | X | | | | | | | X | | | X | | X | | X | X | | | | | | | | | | | | | X | | | X | X | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | X | X | X | | X | | | | | | | X | | | | | | | | X | | | | | | | | | X | | | | X | X | | X | | | | | | | | | | X | X | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 17 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | | | | X | X | X | X | | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | X | X | X | | X | | X | | | | | | | X | | X | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | |
| 9 | 10 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | X | X | X | X | X | | | | | | | | | X | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/ibuki_tsubasa_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T01:21:25+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T00:42:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ibuki\_tsubasa/伊吹翼/이부키츠바사 (THE iDOLM@STER: Million Live!) ==================================================================== This is the dataset of ibuki\_tsubasa/伊吹翼/이부키츠바사 (THE iDOLM@STER: Million Live!), containing 483 images and their tags. The core tags of this character are 'short\_hair, ahoge, blonde\_hair, breasts, bangs, pink\_eyes, red\_eyes, medium\_breasts, hair\_between\_eyes, brown\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
aea9f26b2b307ca2d300a26e32c10b3a516ed777
# Dataset of namba_eri (THE iDOLM@STER: Cinderella Girls)

This is the dataset of namba_eri (THE iDOLM@STER: Cinderella Girls), containing 32 images and their tags.

The core tags of this character are `brown_hair, short_hair, hair_ornament, hairclip, green_eyes, wavy_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size      | Download | Type       | Description |
|:-----------------|-------:|:----------|:---------|:-----------|:------------|
| raw              | 32     | 20.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/namba_eri_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 32     | 18.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/namba_eri_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  | 55     | 30.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/namba_eri_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 32     | 20.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/namba_eri_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 55     | 32.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/namba_eri_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/namba_eri_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, card_(medium), character_name, sun_symbol, skirt, open_mouth, orange_background, :d, looking_at_viewer, bow, breasts, grin |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | card_(medium) | character_name | sun_symbol | skirt | open_mouth | orange_background | :d | looking_at_viewer | bow | breasts | grin |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/namba_eri_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T01:24:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T22:42:59+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of namba\_eri (THE iDOLM@STER: Cinderella Girls) ======================================================== This is the dataset of namba\_eri (THE iDOLM@STER: Cinderella Girls), containing 32 images and their tags. The core tags of this character are 'brown\_hair, short\_hair, hair\_ornament, hairclip, green\_eyes, wavy\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
9ca7d8b9a1bccf59b32845441e05e0e9e576b305
# Dataset of mishiro/美城常務 (THE iDOLM@STER: Cinderella Girls)

This is the dataset of mishiro/美城常務 (THE iDOLM@STER: Cinderella Girls), containing 58 images and their tags.

The core tags of this character are `black_hair, long_hair, ponytail, earrings, breasts, large_breasts, green_eyes`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size      | Download | Type       | Description |
|:-----------------|-------:|:----------|:---------|:-----------|:------------|
| raw              | 58     | 42.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mishiro_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 58     | 29.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mishiro_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  | 108    | 52.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mishiro_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 58     | 39.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mishiro_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 108    | 67.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mishiro_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mishiro_idolmastercinderellagirls',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, formal, hair_pulled_back, solo, suit, necklace, makeup, cleavage |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | formal | hair_pulled_back | solo | suit | necklace | makeup | cleavage |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 15 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X |
CyberHarem/mishiro_idolmastercinderellagirls
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T01:44:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T20:54:24+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mishiro/美城常務 (THE iDOLM@STER: Cinderella Girls) ========================================================== This is the dataset of mishiro/美城常務 (THE iDOLM@STER: Cinderella Girls), containing 58 images and their tags. The core tags of this character are 'black\_hair, long\_hair, ponytail, earrings, breasts, large\_breasts, green\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
dc67e45116e7b361d8c858283d357040c15b0fc7
# Dataset Card for "babylm-10M-open-subtitles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-10M-open-subtitles
[ "region:us" ]
2023-09-16T01:45:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18031636, "num_examples": 527396}, {"name": "valid", "num_bytes": 17333152, "num_examples": 529410}, {"name": "test", "num_bytes": 16275666, "num_examples": 489448}], "download_size": 34964353, "dataset_size": 51640454}}
2023-09-16T01:45:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-10M-open-subtitles" More Information needed
[ "# Dataset Card for \"babylm-10M-open-subtitles\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-10M-open-subtitles\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-10M-open-subtitles\"\n\nMore Information needed" ]
f70c67c9632e3266495d2c98981f3a978459ab7c
# Dataset Card for "finreport-llama2-smallfull" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Arrivedercis/finreport-llama2-smallfull
[ "region:us" ]
2023-09-16T01:52:26+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42295794, "num_examples": 184327}], "download_size": 21073062, "dataset_size": 42295794}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T01:52:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "finreport-llama2-smallfull" More Information needed
[ "# Dataset Card for \"finreport-llama2-smallfull\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"finreport-llama2-smallfull\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"finreport-llama2-smallfull\"\n\nMore Information needed" ]
901f8d0c3080979851e5f6e46f406e81e5c4f4d5
# Dataset Card for "one-million-instructions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wentingzhao/one-million-instructions
[ "region:us" ]
2023-09-16T02:03:41+00:00
{"dataset_info": {"features": [{"name": "user", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327249922, "num_examples": 2332040}], "download_size": 172927838, "dataset_size": 327249922}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T02:03:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "one-million-instructions" More Information needed
[ "# Dataset Card for \"one-million-instructions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"one-million-instructions\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"one-million-instructions\"\n\nMore Information needed" ]
c0c0b662e481ac0d2220191f3da7907140e4a7e2
<p><h1>🐋 OpenOrca-Chinese Dataset! 🐋</h1></p>

Thanks to the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, which has brought a valuable resource to NLP researchers and developers at large!

This is a Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, produced with Google Translate as the translation engine. We hope it makes a small contribution to Chinese LLM research.

<br/>

# Dataset Summary

The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.

<a name="dataset-structure"></a>

# Dataset Structure

<a name="data-instances"></a>

## Data Instances

A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.

<a name="data-fields"></a>

## Data Fields

The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
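As a quick way to inspect the four fields above, here is a minimal sketch using the `datasets` library. It streams a few rows rather than downloading the full (~4.2M-row) train split up front; the repo id is the one this card is published under:

```python
from datasets import load_dataset

# stream the train split so nothing is fully downloaded up front
ds = load_dataset(
    'lchakkei/OpenOrca-Traditional-Chinese',
    split='train',
    streaming=True,
)

# print the 'id', 'system_prompt', 'question', and 'response' fields
for example in ds.take(3):
    print(example['id'], '|', example['system_prompt'][:80])
    print('Q:', example['question'][:200])
    print('A:', example['response'][:200])
    print('-' * 40)
```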
lchakkei/OpenOrca-Traditional-Chinese
[ "task_categories:conversational", "task_categories:text-classification", "task_categories:token-classification", "task_categories:table-question-answering", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:summarization", "task_categories:feature-extraction", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:10M<n<100M", "language:zh", "license:mit", "arxiv:2301.13688", "region:us" ]
2023-09-16T02:15:44+00:00
{"language": ["zh"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": "OpenOrca-Chinese", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6477736021, "num_examples": 4233915}], "download_size": 4104476393, "dataset_size": 6477736021}}
2023-10-11T07:29:08+00:00
[ "2301.13688" ]
[ "zh" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #region-us
<p><h1> OpenOrca-Chinese Dataset!</h1></p> Thanks to the release of the Open-Orca/OpenOrca dataset, which has brought a valuable resource to NLP researchers and developers at large! This is a Chinese translation of the Open-Orca/OpenOrca dataset, produced with Google Translate as the translation engine. We hope it makes a small contribution to Chinese LLM research. <br/> # Dataset Summary The OpenOrca dataset is a collection of augmented FLAN Collection data. Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope. The data is primarily used for training and evaluation in the field of natural language processing. <a name="dataset-structure"></a> # Dataset Structure <a name="data-instances"></a> ## Data Instances A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5. The response is then entered into the response field. <a name="data-fields"></a> ## Data Fields The fields are: 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from. 2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint 3) 'question', representing a question entry as provided by the FLAN Collection 4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
[ "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #region-us \n", "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4." ]
[ 155, 121, 19, 67, 140 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #region-us \n# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4." ]
fe794519536ffefc1000bababf42eda8252441db
# Dataset of momose_rio/百瀬莉緒/모모세리오 (THE iDOLM@STER: Million Live!)

This is the dataset of momose_rio/百瀬莉緒/모모세리오 (THE iDOLM@STER: Million Live!), containing 221 images and their tags.

The core tags of this character are `long_hair, breasts, blonde_hair, bangs, red_eyes, medium_breasts, brown_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images | Size       | Download | Type       | Description |
|:-----------------|-------:|:-----------|:---------|:-----------|:------------|
| raw              | 221    | 243.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momose_rio_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 221    | 157.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momose_rio_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800  | 501    | 314.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momose_rio_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 221    | 221.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momose_rio_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 501    | 424.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momose_rio_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/momose_rio_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:---|:---|:---|:---|:---|:---|
| 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, cleavage, necklace, navel, open_mouth, :d, bracelet, brown_eyes, earrings, midriff, purple_eyes |
| 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, looking_at_viewer, blush, necklace, smile, simple_background, white_background, cleavage, collarbone, upper_body, earrings, shirt, closed_mouth, large_breasts, one_eye_closed |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_sky, blush, cleavage, cloud, collarbone, day, large_breasts, looking_at_viewer, navel, outdoors, smile, solo, ocean, cowboy_shot, leaning_forward, open_mouth, side-tie_bikini_bottom, ;d, bare_shoulders, beach, blue_bikini, bracelet, brown_eyes, earrings, halterneck, horizon, lens_flare, necklace, off_shoulder, one_eye_closed, parted_bangs, water, wet |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | cleavage | necklace | navel | open_mouth | :d | bracelet | brown_eyes | earrings | midriff | purple_eyes | blush | smile | simple_background | white_background | collarbone | upper_body | shirt | closed_mouth | large_breasts | one_eye_closed | blue_sky | cloud | day | outdoors | ocean | cowboy_shot | leaning_forward | side-tie_bikini_bottom | ;d | bare_shoulders | beach | blue_bikini | halterneck | horizon | lens_flare | off_shoulder | parted_bangs | water | wet |
|----:|----------:|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 0 | 12 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | | X | X | X | | | X | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/momose_rio_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T02:28:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T01:30:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of momose\_rio/百瀬莉緒/모모세리오 (THE iDOLM@STER: Million Live!) ================================================================= This is the dataset of momose\_rio/百瀬莉緒/모모세리오 (THE iDOLM@STER: Million Live!), containing 221 images and their tags. The core tags of this character are 'long\_hair, breasts, blonde\_hair, bangs, red\_eyes, medium\_breasts, brown\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
d04917bb6ec45e495005c5f790fe4ffc00e221cc
# Dataset Card

Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).

# Citation

Please kindly cite our paper if you use our code, data, models or results:

```
@article{ku2023imagenhub,
  title={ImagenHub: Standardizing the evaluation of conditional image generation models},
  author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
  journal={arXiv preprint arXiv:2310.01596},
  year={2023}
}
```
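The card gives the citation but no loading snippet; a minimal sketch based on the split and feature names in this record's config metadata below (an assumption-driven example, not an official ImagenHub one):

```python
from datasets import load_dataset

# split names per the dataset config: dev / filtered / extra
ds = load_dataset("ImagenHub/Mask_Guided_Image_Editing", split="dev")

sample = ds[0]
print(sample["img_id"], sample["instruction"])
print(sample["target_local_caption"])

# image features decode to PIL.Image objects
sample["source_img"].save("source.png")
sample["mask_img"].save("mask.png")
```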
ImagenHub/Mask_Guided_Image_Editing
[ "arxiv:2310.01596", "region:us" ]
2023-09-16T02:32:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "dev", "path": "data/dev-*"}, {"split": "filtered", "path": "data/filtered-*"}, {"split": "extra", "path": "data/extra-*"}]}], "dataset_info": {"features": [{"name": "img_id", "dtype": "string"}, {"name": "turn_index", "dtype": "int32"}, {"name": "source_img", "dtype": "image"}, {"name": "mask_img", "dtype": "image"}, {"name": "instruction", "dtype": "string"}, {"name": "source_global_caption", "dtype": "string"}, {"name": "target_global_caption", "dtype": "string"}, {"name": "target_local_caption", "dtype": "string"}, {"name": "target_img", "dtype": "image"}], "splits": [{"name": "dev", "num_bytes": 1521276668.0, "num_examples": 528}, {"name": "filtered", "num_bytes": 504007147.0, "num_examples": 179}, {"name": "extra", "num_bytes": 709468665.0, "num_examples": 249}], "download_size": 2734685791, "dataset_size": 2734752480.0}}
2023-11-27T09:25:04+00:00
[ "2310.01596" ]
[]
TAGS #arxiv-2310.01596 #region-us
# Dataset Card

Dataset in ImagenHub.

Please kindly cite our paper if you use our code, data, models or results:
[ "# Dataset Card\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:" ]
[ "TAGS\n#arxiv-2310.01596 #region-us \n", "# Dataset Card\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:" ]
[ 15, 29 ]
[ "passage: TAGS\n#arxiv-2310.01596 #region-us \n# Dataset Card\n\nDataset in ImagenHub. \n\n\nPlease kindly cite our paper if you use our code, data, models or results:" ]
bbdad7f026db04300b24191348c1124f0d747984
# Dataset Card for "cord-ocr-text-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mychen76/cord-ocr-text-v2
[ "region:us" ]
2023-09-16T02:54:38+00:00
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "ocr_kie", "dtype": "string"}, {"name": "ocr_text", "dtype": "string"}, {"name": "ocr_box", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1796452, "num_examples": 800}], "download_size": 887206, "dataset_size": 1796452}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T02:54:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cord-ocr-text-v2" More Information needed
[ "# Dataset Card for \"cord-ocr-text-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cord-ocr-text-v2\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cord-ocr-text-v2\"\n\nMore Information needed" ]
fe420589fecdc9f2f6f575cc883b05165486ca55
# Dataset Card for "cord-ocr-text-in-image-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mychen76/cord-ocr-text-in-image-v2
[ "region:us" ]
2023-09-16T02:56:52+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119077451.0, "num_examples": 800}], "download_size": 117832551, "dataset_size": 119077451.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T02:57:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cord-ocr-text-in-image-v2" More Information needed
[ "# Dataset Card for \"cord-ocr-text-in-image-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cord-ocr-text-in-image-v2\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cord-ocr-text-in-image-v2\"\n\nMore Information needed" ]
a32b0a033003f799fc6d8395cd15832a3fde7579
# Dataset of yokoyama_nao/横山奈緒 (THE iDOLM@STER: Million Live!)

This is the dataset of yokoyama_nao/横山奈緒 (THE iDOLM@STER: Million Live!), containing 500 images and their tags.

The core tags of this character are `brown_hair, ahoge, purple_eyes, side_ponytail, bangs, drill_hair, side_drill, sidelocks, hair_ornament, medium_hair, breasts, scrunchie, hair_scrunchie`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 409.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yokoyama_nao_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 303.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yokoyama_nao_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1169 | 614.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yokoyama_nao_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with each crop area not less than 480x480 pixels. |
| 1200 | 500 | 387.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yokoyama_nao_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1169 | 748.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yokoyama_nao_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with each crop area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yokoyama_nao_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; some outfits may be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:-----|:-----|:-----|:-----|:-----|:-----|
| 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, maid_headdress, solo, puffy_short_sleeves, wrist_cuffs, blush, white_background, enmaided, medium_breasts, pink_bowtie, smile, waist_apron, white_shirt, collared_shirt, frilled_apron, frilled_cuffs, heart_hands, long_hair, pink_dress, skirt, upper_body, white_apron |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, solo, blush, tongue_out, long_hair, smile, food, white_background |
| 2 | 50 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_shirt, solo, blue_scrunchie, short_sleeves, star_print, blush, looking_at_viewer, t-shirt, smile, print_shirt, open_mouth, wrist_scrunchie, star_necklace, simple_background, upper_body |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blush, looking_at_viewer, solo, long_hair, medium_breasts, nipples, open_mouth, :d, completely_nude, barefoot, collarbone, navel, white_background |
| 4 | 16 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, looking_at_viewer, bare_shoulders, blush, earrings, necklace, smile, flower, upper_body, strapless_dress, cleavage, collarbone, medium_breasts, pink_dress, bracelet, open_mouth |
| 5 | 14 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, solo, looking_at_viewer, blush, medium_breasts, open_mouth, cleavage, collarbone, navel, smile, side-tie_bikini_bottom, cowboy_shot |
| 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1boy, 1girl, blush, hetero, penis, sex, solo_focus, sweat, vaginal, female_pubic_hair, open_mouth, completely_nude, mosaic_censoring, nipples, spread_legs, on_back, pov, bar_censor, cum_in_pussy, medium_breasts, missionary, navel |
| 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, kneehighs, looking_at_viewer, plaid_skirt, school_uniform, solo, wing_collar, holding, long_sleeves, miniskirt, pleated_skirt, red_skirt, white_shirt, black_socks, blue_scrunchie, blush, brown_footwear, dress_shirt, full_body, loafers, open_mouth, red_necktie, simple_background, standing, bag, blazer, grey_jacket, grey_sweater, grin, open_jacket, sitting, striped, v-neck, white_background, white_jacket, white_socks, wrist_scrunchie |
| 8 | 6 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, looking_at_viewer, school_uniform, short_sleeves, white_shirt, plaid_skirt, solo, wing_collar, blue_necktie, blush, collared_shirt, dress_shirt, hair_bow, smile, blue_skirt, blurry, closed_mouth, hair_ribbon, miniskirt, open_mouth |
| 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, black_choker, blue_shorts, blush, denim_shorts, heart-shaped_eyewear, long_sleeves, looking_at_viewer, midriff, navel, short_shorts, solo, standing, sunglasses, bracelet, crop_top, cutoffs, eyewear_on_head, necklace, simple_background, suspender_shorts, white_background, off-shoulder_shirt, single_thighhigh, star_(symbol), thigh_strap, white_thighhighs, wristband, yellow_jacket, black_footwear, blue_belt, boots, closed_mouth, cowboy_shot, cross-laced_footwear, full_body, garter_straps, grin, hair_bobbles, orange_shirt, purple_scrunchie, red-framed_eyewear, shoes, wrist_ribbon, wrist_scrunchie, yellow_shirt |
| 10 | 6 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, looking_at_viewer, red_bow, smile, solo, white_gloves, white_shirt, miniskirt, sleeveless_shirt, blue_skirt, open_mouth, pleated_skirt, red_neckerchief, standing, armpits, back_bow, blush, cowboy_shot, hair_bow, holding, idol, medium_breasts, white_sailor_collar, white_shorts |
| 11 | 6 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1girl, blush, china_dress, looking_at_viewer, print_dress, solo, floral_print, holding, medium_breasts, black_dress, black_ribbon, hair_ribbon, open_mouth, sleeveless_dress, standing, :d, bamboo_steamer, baozi, bracelet, double_bun, side_slit, simple_background, white_background |
| 12 | 7 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | 1girl, blush, looking_at_viewer, one_eye_closed, smile, solo, wrist_cuffs, ;d, necktie, open_mouth, short_sleeves, character_name, choker, cowboy_shot, hair_bow, holding_microphone, midriff, navel, pink_shorts, simple_background, white_background |
| 13 | 10 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | 1girl, detached_collar, looking_at_viewer, playboy_bunny, strapless_leotard, cleavage, fake_animal_ears, rabbit_ears, solo, bare_shoulders, black_bowtie, black_leotard, white_background, wrist_cuffs, medium_breasts, open_mouth, pantyhose, simple_background, smile, blush, white_collar, collarbone, covered_navel |

### Table Version

A wide binary matrix with one row per cluster and one column per tag, where `X` marks that the tag appears in that cluster; it carries the same per-cluster tag lists as the Raw Text Version above.
CyberHarem/yokoyama_nao_theidolmstermillionlive
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-09-16T03:07:56+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-16T00:41:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of yokoyama\_nao/横山奈緒 (THE iDOLM@STER: Million Live!)
=============================================================

This is the dataset of yokoyama\_nao/横山奈緒 (THE iDOLM@STER: Million Live!), containing 500 images and their tags.

The core tags of this character are 'brown\_hair, ahoge, purple\_eyes, side\_ponytail, bangs, drill\_hair, side\_drill, sidelocks, hair\_ornament, medium\_hair, breasts, scrunchie, hair\_scrunchie', which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).

List of Packages
----------------

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for waifuc loading. If you need it, run the loading code shown in the full dataset card above.

List of Clusters
----------------

List of tag clustering results; some outfits may be mined here.

### Raw Text Version

### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ 44, 61, 5, 4 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version" ]
095024b9bd0b3caee054dc2ef544ac876126d7b3
# Dataset Card for "bbooks-llama2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
argiacomi/bbooks-llama2
[ "region:us" ]
2023-09-16T03:22:45+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105206186.0, "num_examples": 10931}], "download_size": 61312639, "dataset_size": 105206186.0}}
2023-09-16T03:27:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bbooks-llama2" More Information needed
[ "# Dataset Card for \"bbooks-llama2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bbooks-llama2\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bbooks-llama2\"\n\nMore Information needed" ]
1538b127d5fd4cbaf9e91b0edd1478a41d819744
# Dataset Card for "hermes_labeled_bad" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vikp/hermes_labeled_bad
[ "region:us" ]
2023-09-16T03:26:15+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "rendered", "dtype": "string"}, {"name": "quality_prob", "dtype": "float64"}, {"name": "learning_prob", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 17934912.391210347, "num_examples": 6969}], "download_size": 5517616, "dataset_size": 17934912.391210347}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-16T03:27:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hermes_labeled_bad" More Information needed
[ "# Dataset Card for \"hermes_labeled_bad\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hermes_labeled_bad\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hermes_labeled_bad\"\n\nMore Information needed" ]
38c51a0afd30350af9f74773cba93664098af612
# Dataset Card for "babylm-100M-open-subtitles" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-100M-open-subtitles
[ "region:us" ]
2023-09-16T04:01:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 182785703, "num_examples": 5433939}, {"name": "valid", "num_bytes": 17333152, "num_examples": 529410}, {"name": "test", "num_bytes": 16275666, "num_examples": 489448}], "download_size": 145946881, "dataset_size": 216394521}}
2023-09-16T04:03:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-100M-open-subtitles" More Information needed
[ "# Dataset Card for \"babylm-100M-open-subtitles\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-100M-open-subtitles\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-100M-open-subtitles\"\n\nMore Information needed" ]
9b77b828d2038e83f3d8f3e33ef398f49f65c45e
# Dataset Card for "babylm-100M-gutenberg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-100M-gutenberg
[ "region:us" ]
2023-09-16T04:04:43+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57259434, "num_examples": 898293}, {"name": "valid", "num_bytes": 5158276, "num_examples": 80469}, {"name": "test", "num_bytes": 6995971, "num_examples": 106624}], "download_size": 44998710, "dataset_size": 69413681}}
2023-09-16T04:05:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-100M-gutenberg" More Information needed
[ "# Dataset Card for \"babylm-100M-gutenberg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-100M-gutenberg\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-100M-gutenberg\"\n\nMore Information needed" ]
e8f72557c0ce6172ddf399291405d0b55030674e
# Dataset Card for "babylm-100M-wikipedia" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deven367/babylm-100M-wikipedia
[ "region:us" ]
2023-09-16T04:06:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62201423, "num_examples": 203192}, {"name": "valid", "num_bytes": 7089834, "num_examples": 23526}, {"name": "test", "num_bytes": 7569053, "num_examples": 26870}], "download_size": 46375519, "dataset_size": 76860310}}
2023-09-16T04:06:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "babylm-100M-wikipedia" More Information needed
[ "# Dataset Card for \"babylm-100M-wikipedia\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"babylm-100M-wikipedia\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"babylm-100M-wikipedia\"\n\nMore Information needed" ]