| Column | Type | Min | Max |
|:----------------|:--------------|----:|------:|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | listlengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | listlengths | 0 | 25 |
| languages | listlengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | listlengths | 0 | 352 |
| processed_texts | listlengths | 1 | 353 |
| tokens_length | listlengths | 1 | 353 |
| input_texts | listlengths | 1 | 40 |
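The column statistics above can be verified directly once the dump is on disk. A minimal sketch with pandas, assuming the records were saved locally as a Parquet file; the file name is hypothetical:

```python
# Inspect the dump's columns and string-length ranges; the local path
# "dataset_cards.parquet" is a hypothetical example, not a published file.
import pandas as pd

df = pd.read_parquet("dataset_cards.parquet")
print(df.columns.tolist())  # sha, text, id, tags, created_at, metadata, ...
print(df["text"].str.len().min(), df["text"].str.len().max())  # length range, as in the table above
```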
54278a1352593e2fbf17e40e85ab645a387737e0
# Dataset of Kanzaki Kaori

This is the dataset of Kanzaki Kaori, containing 234 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 234 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 518 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 234 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 234 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 234 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 234 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 234 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 518 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 518 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 518 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
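The archives in the table are linked relative to the dataset repository, so they can presumably be fetched with huggingface_hub. A minimal sketch, assuming the zips sit at the repo root as the relative links suggest; the repo id is taken from this record and the filename from the table above:

```python
# Download one packaged archive from the dataset repo into the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="CyberHarem/kanzaki_kaori_toarumajutsunoindex",
    filename="dataset-512x512.zip",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded archive
```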
CyberHarem/kanzaki_kaori_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T19:43:06+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:06+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Kanzaki Kaori
========================

This is the dataset of Kanzaki Kaori, containing 234 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
fa379b17b540907d222124d13d5268bfb4426497
# Dataset of Uiharu Kazari

This is the dataset of Uiharu Kazari, containing 96 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 96 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 217 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 96 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 96 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 96 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 96 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 96 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 217 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 217 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 217 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/uiharu_kazari_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T19:58:33+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:08+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Uiharu Kazari
========================

This is the dataset of Uiharu Kazari, containing 96 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
b0c0da5d14f7784bbb179679e1665c411a2b8a34
# Dataset Card for "sample_mj_prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
satvikp/sample_mj_prompts
[ "region:us" ]
2023-08-15T19:59:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 83019, "num_examples": 289}], "download_size": 22242, "dataset_size": 83019}}
2023-08-15T19:59:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sample_mj_prompts" More Information needed
[ "# Dataset Card for \"sample_mj_prompts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sample_mj_prompts\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"sample_mj_prompts\"\n\nMore Information needed" ]
a0d09fc337b8661c7c22c54c78e16328fbd03e79
# Dataset of Itsuwa

This is the dataset of Itsuwa, containing 196 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 196 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 464 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 196 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 196 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 196 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 196 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 196 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 464 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 464 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 464 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/itsuwa_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T20:14:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:10+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Itsuwa
=================

This is the dataset of Itsuwa, containing 196 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
030f002efb1889a5077ffd51707fca2e7cb41f54
# Dataset of Fukiyose Seiri

This is the dataset of Fukiyose Seiri, containing 96 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 96 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 224 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 96 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 96 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 96 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 96 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 96 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 224 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 224 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 224 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/fukiyose_seiri_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T20:47:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:12+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Fukiyose Seiri
=========================

This is the dataset of Fukiyose Seiri, containing 96 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
d6090b170e83a1dcb3b57ba55289d42720755366
# Dataset of frenda_seivelun (To Aru Majutsu no Index)

This is the dataset of frenda_seivelun (To Aru Majutsu no Index), containing 137 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/frenda_seivelun_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T20:52:44+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:14+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of frenda_seivelun (To Aru Majutsu no Index)

This is the dataset of frenda_seivelun (To Aru Majutsu no Index), containing 137 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of frenda_seivelun (To Aru Majutsu no Index)\n\nThis is the dataset of frenda_seivelun (To Aru Majutsu no Index), containing 137 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of frenda_seivelun (To Aru Majutsu no Index)\n\nThis is the dataset of frenda_seivelun (To Aru Majutsu no Index), containing 137 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 95 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of frenda_seivelun (To Aru Majutsu no Index)\n\nThis is the dataset of frenda_seivelun (To Aru Majutsu no Index), containing 137 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
80cfffe63a517e9bb7120e423a38c05b17c8d781
# Dataset of kongou_mitsuko (To Aru Majutsu no Index)

This is the dataset of kongou_mitsuko (To Aru Majutsu no Index), containing 40 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/kongou_mitsuko_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T20:56:42+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:16+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of kongou_mitsuko (To Aru Majutsu no Index)

This is the dataset of kongou_mitsuko (To Aru Majutsu no Index), containing 40 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of kongou_mitsuko (To Aru Majutsu no Index)\n\nThis is the dataset of kongou_mitsuko (To Aru Majutsu no Index), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of kongou_mitsuko (To Aru Majutsu no Index)\n\nThis is the dataset of kongou_mitsuko (To Aru Majutsu no Index), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 95 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of kongou_mitsuko (To Aru Majutsu no Index)\n\nThis is the dataset of kongou_mitsuko (To Aru Majutsu no Index), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
6cfc81901cf8d6839b9ba77029c8eb7ea800e42e
# Stable Diffusion Dataset

# Description:
This dataset is in Jsonl format and is based on the MadVoyager/stable_diffusion_instructional_dataset.

# Overview:
The Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from the image finder of Stable Diffusion: "Lexica.art". The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.

# Key Features:

◉ Jsonl format for seamless integration with existing projects.
◉ High-quality prompts extracted from the Stable Diffusion image finder.
◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.
◉ Usage:
◉ Researchers and developers can utilize this dataset to:
Train and fine-tune language models like LLaMa2.
Conduct experiments in natural language processing and generation.
Enhance and expand AI capabilities in creative and interactive applications.

# Acknowledgments:
We acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset.
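Since the dataset is distributed as JSONL, each line holds one JSON record. A minimal sketch of iterating over it, assuming one record per line; the file name and field layout are hypothetical, as the card does not specify them:

```python
# Read an instruction-style JSONL file one record at a time.
import json

with open("stable_diffusion_prompts.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record)  # e.g. an instruction/response pair
        break          # show only the first record
```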
lusstta/stable_diffusion_instructional_dataset
[ "task_categories:text2text-generation", "task_categories:question-answering", "language:en", "stable diffussion", "llama", "llama2", "chatgpt", "prompt", "llm", "dataset", "finetune", "train", "qlora", "lora", "region:us" ]
2023-08-15T20:59:19+00:00
{"language": ["en"], "task_categories": ["text2text-generation", "question-answering"], "pretty_name": "Stable Difussion Instruct Dataset - AiresAI", "tags": ["stable diffussion", "llama", "llama2", "chatgpt", "prompt", "llm", "dataset", "finetune", "train", "qlora", "lora"]}
2023-08-15T21:11:10+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #task_categories-question-answering #language-English #stable diffussion #llama #llama2 #chatgpt #prompt #llm #dataset #finetune #train #qlora #lora #region-us
# Stable Diffusion Dataset

# Description:
This dataset is in Jsonl format and is based on the MadVoyager/stable_diffusion_instructional_dataset.

# Overview:
The Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from the image finder of Stable Diffusion: "URL". The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.

# Key Features:

◉ Jsonl format for seamless integration with existing projects.
◉ High-quality prompts extracted from the Stable Diffusion image finder.
◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.
◉ Usage:
◉ Researchers and developers can utilize this dataset to:
Train and fine-tune language models like LLaMa2.
Conduct experiments in natural language processing and generation.
Enhance and expand AI capabilities in creative and interactive applications.

# Acknowledgments:
We acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset.
[ "# Stable Diffusion Dataset", "# Description:\nThis dataset is in Jsonl format and is based on the MadVoyager/stable_diffusion_instructional_dataset.", "# Overview:\nThe Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from the image finder of Stable Diffusion: \"URL\". The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.", "# Key Features:\n\n◉ Jsonl format for seamless integration with existing projects.\n◉ High-quality prompts extracted from the Stable Diffusion image finder.\n◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.\n◉ Usage:\n◉ Researchers and developers can utilize this dataset to:\nTrain and fine-tune language models like LLaMa2.\nConduct experiments in natural language processing and generation.\nEnhance and expand AI capabilities in creative and interactive applications.", "# Acknowledgments:\nWe acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-question-answering #language-English #stable diffussion #llama #llama2 #chatgpt #prompt #llm #dataset #finetune #train #qlora #lora #region-us \n", "# Stable Diffusion Dataset", "# Description:\nThis dataset is in Jsonl format and is based on the MadVoyager/stable_diffusion_instructional_dataset.", "# Overview:\nThe Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from the image finder of Stable Diffusion: \"URL\". The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.", "# Key Features:\n\n◉ Jsonl format for seamless integration with existing projects.\n◉ High-quality prompts extracted from the Stable Diffusion image finder.\n◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.\n◉ Usage:\n◉ Researchers and developers can utilize this dataset to:\nTrain and fine-tune language models like LLaMa2.\nConduct experiments in natural language processing and generation.\nEnhance and expand AI capabilities in creative and interactive applications.", "# Acknowledgments:\nWe acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset." ]
[ 74, 8, 35, 64, 117, 43 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_categories-question-answering #language-English #stable diffussion #llama #llama2 #chatgpt #prompt #llm #dataset #finetune #train #qlora #lora #region-us \n# Stable Diffusion Dataset# Description:\nThis dataset is in Jsonl format and is based on the MadVoyager/stable_diffusion_instructional_dataset.# Overview:\nThe Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from the image finder of Stable Diffusion: \"URL\". The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.# Key Features:\n\n◉ Jsonl format for seamless integration with existing projects.\n◉ High-quality prompts extracted from the Stable Diffusion image finder.\n◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.\n◉ Usage:\n◉ Researchers and developers can utilize this dataset to:\nTrain and fine-tune language models like LLaMa2.\nConduct experiments in natural language processing and generation.\nEnhance and expand AI capabilities in creative and interactive applications.# Acknowledgments:\nWe acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset." ]
b0dcf37548acaa786c23b8dfefb8f591c0e20258
# Dataset Card for "autotree_snxor_l1_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yzhuang/autotree_snxor_l1_2
[ "region:us" ]
2023-08-15T21:01:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": {"sequence": "float64"}}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 155000000, "num_examples": 10000}, {"name": "validation", "num_bytes": 155000000, "num_examples": 10000}, {"name": "test", "num_bytes": 155000000, "num_examples": 10000}], "download_size": 266245267, "dataset_size": 465000000}}
2023-08-17T20:43:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotree_snxor_l1_2" More Information needed
[ "# Dataset Card for \"autotree_snxor_l1_2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotree_snxor_l1_2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snxor_l1_2\"\n\nMore Information needed" ]
7d911a350f31a3b63b06e14c59d4db99b56fc6e3
# Dataset of Lessar

This is the dataset of Lessar, containing 89 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 89 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 200 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 89 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 89 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 89 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 89 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 89 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 200 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 200 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 200 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/lessar_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T21:06:26+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:18+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Lessar
=================

This is the dataset of Lessar, containing 89 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
d5b189739b4218ec0eadae7729ec7f0b96ba2176
# Dataset of Takitsubo Rikou

This is the dataset of Takitsubo Rikou, containing 85 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 85 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 177 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 85 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 85 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 85 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 85 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 85 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 177 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 177 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 177 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/takitsubo_rikou_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T21:10:03+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:20+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Takitsubo Rikou
==========================

This is the dataset of Takitsubo Rikou, containing 85 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
a98b80aa43f0bebb2570417a62754d95c3f061f2
# Dataset of angelene (To Aru Majutsu no Index)

This is the dataset of angelene (To Aru Majutsu no Index), containing 10 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/angelene_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T21:12:31+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:22+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of angelene (To Aru Majutsu no Index)

This is the dataset of angelene (To Aru Majutsu no Index), containing 10 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of angelene (To Aru Majutsu no Index)\n\nThis is the dataset of angelene (To Aru Majutsu no Index), containing 10 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of angelene (To Aru Majutsu no Index)\n\nThis is the dataset of angelene (To Aru Majutsu no Index), containing 10 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 87 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of angelene (To Aru Majutsu no Index)\n\nThis is the dataset of angelene (To Aru Majutsu no Index), containing 10 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
f50b470965dd4ae5534bcedaa471dfc2c3414910
# Dataset of mugino_shizuri (To Aru Majutsu no Index)

This is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/mugino_shizuri_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T21:29:15+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:24+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of mugino_shizuri (To Aru Majutsu no Index)

This is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of mugino_shizuri (To Aru Majutsu no Index)\n\nThis is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of mugino_shizuri (To Aru Majutsu no Index)\n\nThis is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 95 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of mugino_shizuri (To Aru Majutsu no Index)\n\nThis is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
c3897172a26b178506854e3c3b6fb0df0871ccbe
# Dataset Card for "ar_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ad019el/ar_dataset
[ "region:us" ]
2023-08-15T21:51:52+00:00
{"dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40849769.0, "num_examples": 1500}, {"name": "test", "num_bytes": 15937708.0, "num_examples": 500}], "download_size": 55327587, "dataset_size": 56787477.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-08-15T22:34:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ar_dataset" More Information needed
[ "# Dataset Card for \"ar_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ar_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ar_dataset\"\n\nMore Information needed" ]
85703646d2aea743a859932e8c614b4a89479ac6
# Dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/ais_wallenstein_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T21:57:14+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:26+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 117 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ais_wallenstein (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
329e3d68d343b0e2edf7f069ef79af826fd4c819
# Dataset of othinus (To Aru Majutsu no Index)

This is the dataset of othinus (To Aru Majutsu no Index), containing 68 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/othinus_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:01:34+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:28+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of othinus (To Aru Majutsu no Index)

This is the dataset of othinus (To Aru Majutsu no Index), containing 68 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of othinus (To Aru Majutsu no Index)\n\nThis is the dataset of othinus (To Aru Majutsu no Index), containing 68 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of othinus (To Aru Majutsu no Index)\n\nThis is the dataset of othinus (To Aru Majutsu no Index), containing 68 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 89 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of othinus (To Aru Majutsu no Index)\n\nThis is the dataset of othinus (To Aru Majutsu no Index), containing 68 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
7ab5bdcf7db38176246b7e8af650ccf8be519722
# Dataset Card for "my_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Guilherme34/Jennifer_dataset
[ "region:us" ]
2023-08-15T22:02:19+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62489312.30556517, "num_examples": 52001}], "download_size": 25349894, "dataset_size": 62489312.30556517}}
2023-08-15T22:02:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "my_dataset" More Information needed
[ "# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"my_dataset\"\n\nMore Information needed" ]
a60e7e1c0eddaae74bc8d54fecf50b6dab88bedf
# XSUM - Translated Norwegian Bokmål

Sourced from https://huggingface.co/datasets/NbAiLab/norwegian-xsum. Loaded from the provided gzips and reuploaded due to errors accessing the original dataset through the dataset APIs.
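Since the reupload exists precisely so the data loads where the original did not, a minimal sketch of loading it with the datasets library, using the repo id from this record (splits and features as declared in the card metadata):

```python
# Load the reuploaded Norwegian XSUM and peek at the first training example.
from datasets import load_dataset

ds = load_dataset("tollefj/norwegian-xsum-nob")
print(ds)              # train / validation / test splits
print(ds["train"][0])  # {'document': ..., 'summary': ..., 'id': ...}
```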
tollefj/norwegian-xsum-nob
[ "task_categories:summarization", "size_categories:100K<n<1M", "language:nb", "language:no", "license:cc-by-sa-4.0", "region:us" ]
2023-08-15T22:06:35+00:00
{"language": ["nb", "no"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["summarization"], "pretty_name": "XSUM Norwegian Bokm\u00e5l", "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 23794328, "num_examples": 11334}, {"name": "train", "num_bytes": 426389147, "num_examples": 204045}, {"name": "validation", "num_bytes": 23422946, "num_examples": 11332}], "download_size": 301349675, "dataset_size": 473606421}}
2023-08-15T22:15:52+00:00
[]
[ "nb", "no" ]
TAGS #task_categories-summarization #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #license-cc-by-sa-4.0 #region-us
# XSUM - Translated Norwegian Bokmål

Sourced from URL. Loaded from the provided gzips and reuploaded due to errors accessing the original dataset through the dataset APIs.
[ "# XSUM - Translated Norwegian Bokmål\nSourced from URL Loaded from provided gzips and reuploaded due to errors accessing the original dataset through the dataset apis." ]
[ "TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #license-cc-by-sa-4.0 #region-us \n", "# XSUM - Translated Norwegian Bokmål\nSourced from URL Loaded from provided gzips and reuploaded due to errors accessing the original dataset through the dataset apis." ]
[ 53, 43 ]
[ "passage: TAGS\n#task_categories-summarization #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #license-cc-by-sa-4.0 #region-us \n# XSUM - Translated Norwegian Bokmål\nSourced from URL Loaded from provided gzips and reuploaded due to errors accessing the original dataset through the dataset apis." ]
80d695ca1fc6f3d89e8bbd4cafc08956981c3d15
# Dataset of Musujime Awaki

This is the dataset of Musujime Awaki, containing 158 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 158 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 328 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 158 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 158 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 158 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 158 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 158 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 328 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 328 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 328 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/musujime_awaki_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:09:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:30+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Musujime Awaki
=========================

This is the dataset of Musujime Awaki, containing 158 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
f63d5bd8845f403c46230a7dc3811239f22d8be2
This is the ultimate code training data, created to be lossless so the AI model does not lose any other abilities it had previously, such as logical skills, after training on this dataset.

The reason why this dataset is so large is to ensure that as the model learns to code, it continues to remember to follow regular instructions so as not to lose previously learned abilities. This is the result of all my work gathering data, testing AI models, and discovering what, why, and how coding models perform well or don't perform well.

The content of this dataset is roughly 50% coding instruction data and 50% non-coding instruction data, amounting to 1.5 million evol instruction-formatted lines of data. Keeping 50% non-coding instruction data in the dataset preserves the model's logic and reasoning skills while it trains on coding. The lack of such skills has been observed to be a major issue with coding models such as Wizardcoder-15b and NewHope, but training models on this dataset alleviates that issue while also giving similar levels of coding knowledge.

This dataset is a combination of the following datasets, along with additional deduping and uncensoring techniques:

Coding:
- https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k
- https://huggingface.co/datasets/rombodawg/Rombodawgs_commitpackft_Evolinstruct_Converted

Instruction following:
- https://huggingface.co/datasets/rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIST
- https://huggingface.co/datasets/garage-bAInd/Open-Platypus
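The deduping pipeline itself is not published; as one plausible form of it, a minimal sketch of exact-match deduplication over merged instruction records, where the file and field names are hypothetical:

```python
# Drop exact duplicates from a merged JSONL file by hashing each
# instruction/output pair; a sketch only, not the card's actual pipeline.
import hashlib
import json

seen = set()
with open("merged_training_data.jsonl", encoding="utf-8") as src, \
        open("deduped_training_data.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        record = json.loads(line)
        key = hashlib.sha256(
            (record.get("instruction", "") + "\x00" + record.get("output", ""))
            .encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            dst.write(line)
```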
rombodawg/LosslessMegaCodeTrainingV3_1.6m_Evol
[ "license:other", "region:us" ]
2023-08-15T22:22:36+00:00
{"license": "other"}
2023-10-19T15:57:47+00:00
[]
[]
TAGS #license-other #region-us
This is the ultimate code training data, created to be lossless so the AI model does not lose any other abilities it had previously, such as logical skills, after training on this dataset.

The reason why this dataset is so large is to ensure that as the model learns to code, it continues to remember to follow regular instructions so as not to lose previously learned abilities. This is the result of all my work gathering data, testing AI models, and discovering what, why, and how coding models perform well or don't perform well.

The content of this dataset is roughly 50% coding instruction data and 50% non-coding instruction data, amounting to 1.5 million evol instruction-formatted lines of data. Keeping 50% non-coding instruction data in the dataset preserves the model's logic and reasoning skills while it trains on coding. The lack of such skills has been observed to be a major issue with coding models such as Wizardcoder-15b and NewHope, but training models on this dataset alleviates that issue while also giving similar levels of coding knowledge.

This dataset is a combination of the following datasets, along with additional deduping and uncensoring techniques:

Coding:
- URL
- URL

Instruction following:
- URL
- URL
[]
[ "TAGS\n#license-other #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-other #region-us \n" ]
84e65616a326cf002259b994ebeb0ea9c6eaed4d
# Dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 128 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/liliruca_arde_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:26:39+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:32+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 128 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 128 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 128 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 117 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of liliruca_arde (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 128 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
443ac02a21f72bc430a93efd21bbdd7728fdcb20
# Dataset of Tsukuyomi Komoe

This is the dataset of Tsukuyomi Komoe, containing 258 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 258 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 578 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 258 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 258 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 258 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 258 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 258 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 578 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 578 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 578 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/tsukuyomi_komoe_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:27:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Tsukuyomi Komoe
==========================

This is the dataset of Tsukuyomi Komoe, containing 258 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
4af92b3582132b791bb793d87335d31e957a47ff
# Dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 18 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/syr_flover_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:30:05+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:36+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 18 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 18 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 18 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 115 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of syr_flover (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 18 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
269a743d5e8c057aba5489e58a1a7f149a4f180e
# XSUM - English to Norwegian Bokmål

An experiment in joint translation and summarization.

- English xsum: https://huggingface.co/datasets/xsum
- Norwegian (bokmål) translated xsum: https://huggingface.co/datasets/tollefj/norwegian-xsum-nob
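Because the dataset ships a default config with `train`/`test`/`validation` splits and keeps the original `document`/`id`/`summary` schema (see the metadata below), it should load directly with the `datasets` library; a minimal sketch:

```
# Minimal sketch: load the translated corpus and inspect one pair.
from datasets import load_dataset  # pip install datasets

ds = load_dataset("tollefj/xsum-en-no-transfer")

sample = ds["train"][0]
print(sample["summary"])         # one-sentence summary
print(sample["document"][:200])  # opening of the article
```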
tollefj/xsum-en-no-transfer
[ "task_categories:summarization", "task_categories:translation", "size_categories:100K<n<1M", "language:nb", "language:no", "language:en", "region:us" ]
2023-08-15T22:34:02+00:00
{"language": ["nb", "no", "en"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization", "translation"], "pretty_name": "xsum English to Norwegian", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 479736366, "num_examples": 204045}, {"name": "test", "num_bytes": 26786028, "num_examples": 11334}, {"name": "validation", "num_bytes": 26319794, "num_examples": 11332}], "download_size": 338414434, "dataset_size": 532842188}}
2023-08-15T22:38:49+00:00
[]
[ "nb", "no", "en" ]
TAGS #task_categories-summarization #task_categories-translation #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-English #region-us
# XSUM - English to Norwegian Bokmål An experiment in joint translation and summarization. - English xsum: URL - Norwegian (bokmål) translated xsum: URL
[ "# XSUM - English to Norwegian Bokmål\nAn experiment in joint translation and summarization.\n\n- English xsum: URL\n- Norwegian (bokmål) translated xsum: URL" ]
[ "TAGS\n#task_categories-summarization #task_categories-translation #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-English #region-us \n", "# XSUM - English to Norwegian Bokmål\nAn experiment in joint translation and summarization.\n\n- English xsum: URL\n- Norwegian (bokmål) translated xsum: URL" ]
[ 55, 38 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-translation #size_categories-100K<n<1M #language-Norwegian Bokmål #language-Norwegian #language-English #region-us \n# XSUM - English to Norwegian Bokmål\nAn experiment in joint translation and summarization.\n\n- English xsum: URL\n- Norwegian (bokmål) translated xsum: URL" ]
b9ae3cb9e43c5adf83804360be72dd700a68e3fb
# Dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/lefiya_viridis_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:36:18+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:38+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 121 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of lefiya_viridis (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
ccbe199ca7ed555b843fe4f08d085825cd15119c
# Dataset Card for "ar_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ad019el/ar_data
[ "region:us" ]
2023-08-15T22:36:23+00:00
{"dataset_info": {"features": [{"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40579164.0, "num_examples": 1500}, {"name": "test", "num_bytes": 15846990.0, "num_examples": 500}], "download_size": 55259208, "dataset_size": 56426154.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-08-15T22:36:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ar_data" More Information needed
[ "# Dataset Card for \"ar_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ar_data\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ar_data\"\n\nMore Information needed" ]
985dbfe71584fb5dacd66c5fbae8cc4094971cb4
# Dataset of Himegami Aisa

This is the dataset of Himegami Aisa, containing 167 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

| Name        |   Images | Download                            | Description                                                               |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw         |      167 | [Download](dataset-raw.zip)         | Raw data with meta information.                                           |
| raw-stage3  |      350 | [Download](dataset-raw-stage3.zip)  | 3-stage cropped raw data with meta information.                           |
| 384x512     |      167 | [Download](dataset-384x512.zip)     | 384x512 aligned dataset.                                                  |
| 512x512     |      167 | [Download](dataset-512x512.zip)     | 512x512 aligned dataset.                                                  |
| 512x704     |      167 | [Download](dataset-512x704.zip)     | 512x704 aligned dataset.                                                  |
| 640x640     |      167 | [Download](dataset-640x640.zip)     | 640x640 aligned dataset.                                                  |
| 640x880     |      167 | [Download](dataset-640x880.zip)     | 640x880 aligned dataset.                                                  |
| stage3-640  |      350 | [Download](dataset-stage3-640.zip)  | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.   |
| stage3-800  |      350 | [Download](dataset-stage3-800.zip)  | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.   |
| stage3-1200 |      350 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels.  |
CyberHarem/himegami_aisa_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:37:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:40+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Himegami Aisa ======================== This is the dataset of Himegami Aisa, containing 167 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
afb27106a273d8c303fe3f5012947b038203539a
# Dataset Card for "rfslesstrat" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thebigShawn/rfslesstrat
[ "task_categories:automatic-speech-recognition", "whisper", "whispering", "large", "region:us" ]
2023-08-15T22:39:09+00:00
{"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "null"}, {"name": "URL", "dtype": "null"}, {"name": "TITLE", "dtype": "null"}, {"name": "DESCRIPTION", "dtype": "null"}, {"name": "TRANSCRIPTION", "dtype": "null"}, {"name": "SEGMENTS", "dtype": "null"}, {"name": "__index_level_0__", "dtype": "null"}], "splits": [{"name": "train"}], "download_size": 1797, "dataset_size": 0}, "tags": ["whisper", "whispering", "large"]}
2023-08-15T22:39:11+00:00
[]
[]
TAGS #task_categories-automatic-speech-recognition #whisper #whispering #large #region-us
# Dataset Card for "rfslesstrat" More Information needed
[ "# Dataset Card for \"rfslesstrat\"\n\nMore Information needed" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #whisper #whispering #large #region-us \n", "# Dataset Card for \"rfslesstrat\"\n\nMore Information needed" ]
[ 34, 14 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #whisper #whispering #large #region-us \n# Dataset Card for \"rfslesstrat\"\n\nMore Information needed" ]
d69e55d085324e532f319d55958aaaf346fdc0ff
# Dataset of agnese_sanctis (To Aru Majutsu no Index)

This is the dataset of agnese_sanctis (To Aru Majutsu no Index), containing 15 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/agnese_sanctis_toarumajutsunoindex
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:43:22+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:42+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of agnese_sanctis (To Aru Majutsu no Index) This is the dataset of agnese_sanctis (To Aru Majutsu no Index), containing 15 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of agnese_sanctis (To Aru Majutsu no Index)\n\nThis is the dataset of agnese_sanctis (To Aru Majutsu no Index), containing 15 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of agnese_sanctis (To Aru Majutsu no Index)\n\nThis is the dataset of agnese_sanctis (To Aru Majutsu no Index), containing 15 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 93 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of agnese_sanctis (To Aru Majutsu no Index)\n\nThis is the dataset of agnese_sanctis (To Aru Majutsu no Index), containing 15 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
55d644bb1d7ef678eca95e76e5a487ccb79f7fda
# Dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 51 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/eina_tulle_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:46:32+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:44+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 51 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 51 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 51 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 115 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of eina_tulle (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 51 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
ac0315a3a0d637469403f65ce9a5b9bb8da78458
# Dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 22 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/loki_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-15T22:53:25+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:46+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 22 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 22 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 22 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 111 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of loki (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 22 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
1443fa885819fc3b4d09a60f5cb3866f8796929b
# Dataset Card for "next-step-zaps" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sheparddw/next-step-zaps
[ "region:us" ]
2023-08-15T22:55:21+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1192541, "num_examples": 4369}], "download_size": 160523, "dataset_size": 1192541}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-17T22:02:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "next-step-zaps" More Information needed
[ "# Dataset Card for \"next-step-zaps\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"next-step-zaps\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"next-step-zaps\"\n\nMore Information needed" ]
df5351f45d0a886c6efa497cdafd9bc9f717a067
# Dataset Card for "gaps-qm9-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hassanharb/gaps-qm9-1k
[ "region:us" ]
2023-08-15T23:03:28+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80249, "num_examples": 1000}], "download_size": 14255, "dataset_size": 80249}}
2023-08-15T23:09:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gaps-qm9-1k" More Information needed
[ "# Dataset Card for \"gaps-qm9-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gaps-qm9-1k\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gaps-qm9-1k\"\n\nMore Information needed" ]
2a747926650fa6b493d1584e112dc6508c3109db
# Dataset Card for "amazing_logos_v4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
iamkaikai/amazing_logos_v4
[ "region:us" ]
2023-08-15T23:28:33+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6783845671.01, "num_examples": 397251}], "download_size": 9292502034, "dataset_size": 6783845671.01}}
2023-08-16T13:56:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "amazing_logos_v4" More Information needed
[ "# Dataset Card for \"amazing_logos_v4\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"amazing_logos_v4\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"amazing_logos_v4\"\n\nMore Information needed" ]
c4df680e69e03764e84b232bdb25bb15ea67bbd4
# Dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/hestia_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T00:03:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:50+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 111 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of hestia (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
c1805e6218409f23d4003f0cd77087be86fe6e65
# Dataset Card for "essayforum_writing_prompts_6k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dim/essayforum_writing_prompts_6k
[ "region:us" ]
2023-08-16T00:03:40+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21696702, "num_examples": 6361}], "download_size": 11796178, "dataset_size": 21696702}}
2023-08-16T19:37:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "essayforum_writing_prompts_6k" More Information needed
[ "# Dataset Card for \"essayforum_writing_prompts_6k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"essayforum_writing_prompts_6k\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"essayforum_writing_prompts_6k\"\n\nMore Information needed" ]
e7b322c848d90759cc0c5db57dc2e0497357ed41
# Public Domain GitHub Repositories Dataset

This dataset contains metadata and source code of 9,000 public domain (cc0 or unlicense) licensed GitHub repositories that have more than 25 stars.
The dataset was created by scraping the GitHub API and downloading the repositories, so long as they are under 100 MB.
The dataset can be used for various natural language processing and software engineering tasks, such as code summarization, code generation, code search, code analysis, etc.

## Dataset Summary

- **Number of repositories:** 9,000
- **Size:** 2.4 GB (compressed)
- **Languages:** Python, JavaScript, Java, C#, C++, Ruby, PHP, Go, Swift, and Rust
- **License:** Public Domain (cc0 or unlicense)

## Dataset License

This dataset is released under the Public Domain (cc0 or unlicense) license. The original repositories are also licensed under the Public Domain (cc0 or unlicense) license. You can use this dataset for any purpose without any restrictions.

## Reproducing this dataset

This dataset was produced by modifying the "github-downloader" from EleutherAI. You can access our fork [on our GitHub page](https://github.com/KoalaAI-Research/github-downloader). Replication steps are included in its readme there.
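The card does not spell out the exact query, but the selection criteria it describes — a cc0 or unlicense license and more than 25 stars — map directly onto the GitHub search API. A hypothetical sketch of that filter (the real dataset was built with the modified github-downloader, so this only illustrates the criteria):

```
# Hypothetical sketch of the repository filter described above.
import requests

def public_domain_repos(license_key, page=1):
    # license:cc0-1.0 / license:unlicense and stars:>25 are standard
    # GitHub search qualifiers.
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": f"license:{license_key} stars:>25",
                "per_page": 100, "page": page},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]

candidates = public_domain_repos("cc0-1.0") + public_domain_repos("unlicense")
print(len(candidates), "candidate repositories on the first pages")
```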
KoalaAI/GitHub-CC0
[ "task_categories:text-generation", "task_categories:text-classification", "size_categories:1K<n<10K", "license:cc0-1.0", "github", "programming", "code", "public domain", "cc0", "region:us" ]
2023-08-16T00:21:14+00:00
{"license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "text-classification"], "pretty_name": "Public Domain GitHub Repositories", "tags": ["github", "programming", "code", "public domain", "cc0"]}
2023-08-21T13:49:52+00:00
[]
[]
TAGS #task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #license-cc0-1.0 #github #programming #code #public domain #cc0 #region-us
# Public Domain GitHub Repositories Dataset This dataset contains metadata and source code of 9,000 public domain (cc0 or unlicense) licensed GitHub repositories that have more than 25 stars. The dataset was created by scraping the GitHub API and downloading the repositories, so long as they are under 100 MB. The dataset can be used for various natural language processing and software engineering tasks, such as code summarization, code generation, code search, code analysis, etc. ## Dataset Summary - Number of repositories: 9,000 - Size: 2.4 GB (compressed) - Languages: Python, JavaScript, Java, C#, C++, Ruby, PHP, Go, Swift, and Rust - License: Public Domain (cc0 or unlicense) ## Dataset License This dataset is released under the Public Domain (cc0 or unlicense) license. The original repositories are also licensed under the Public Domain (cc0 or unlicense) license. You can use this dataset for any purpose without any restrictions. ## Reproducing this dataset This dataset was produced by modifying the "github-downloader" from EleutherAI. You can access our fork on our GitHub page. Replication steps are included in its readme there.
[ "# Public Domain GitHub Repositories Dataset\n\nThis dataset contains metadata and source code of 9,000 public domain (cc0 or unlicense) licensed GitHub repositories that have more than 25 stars. \nThe dataset was created by scraping the GitHub API and downloading the repositories, so long as they are under 100mb.\nThe dataset can be used for various natural language processing and software engineering tasks, such as code summarization, code generation, code search, code analysis, etc.", "## Dataset Summary\n\n- Number of repositories: 9,000\n- Size: 2.4 GB (compressed)\n- Languages: Python, JavaScript, Java, C#, C++, Ruby, PHP, Go, Swift, and Rust\n- License: Public Domain (cc0 or unlicense)", "## Dataset License\nThis dataset is released under the Public Domain (cc0 or unlicense) license. The original repositories are also licensed under the Public Domain (cc0 or unlicense) license. You can use this dataset for any purpose without any restrictions.", "## Reproducing this dataset\nThis dataset was produced by modifiying the \"github-downloader\" from EleutherAI. You can access our fork on our GitHub page\nReplication steps are included in it's readme there." ]
[ "TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #license-cc0-1.0 #github #programming #code #public domain #cc0 #region-us \n", "# Public Domain GitHub Repositories Dataset\n\nThis dataset contains metadata and source code of 9,000 public domain (cc0 or unlicense) licensed GitHub repositories that have more than 25 stars. \nThe dataset was created by scraping the GitHub API and downloading the repositories, so long as they are under 100mb.\nThe dataset can be used for various natural language processing and software engineering tasks, such as code summarization, code generation, code search, code analysis, etc.", "## Dataset Summary\n\n- Number of repositories: 9,000\n- Size: 2.4 GB (compressed)\n- Languages: Python, JavaScript, Java, C#, C++, Ruby, PHP, Go, Swift, and Rust\n- License: Public Domain (cc0 or unlicense)", "## Dataset License\nThis dataset is released under the Public Domain (cc0 or unlicense) license. The original repositories are also licensed under the Public Domain (cc0 or unlicense) license. You can use this dataset for any purpose without any restrictions.", "## Reproducing this dataset\nThis dataset was produced by modifiying the \"github-downloader\" from EleutherAI. You can access our fork on our GitHub page\nReplication steps are included in it's readme there." ]
[ 62, 116, 63, 60, 53 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-1K<n<10K #license-cc0-1.0 #github #programming #code #public domain #cc0 #region-us \n# Public Domain GitHub Repositories Dataset\n\nThis dataset contains metadata and source code of 9,000 public domain (cc0 or unlicense) licensed GitHub repositories that have more than 25 stars. \nThe dataset was created by scraping the GitHub API and downloading the repositories, so long as they are under 100mb.\nThe dataset can be used for various natural language processing and software engineering tasks, such as code summarization, code generation, code search, code analysis, etc.## Dataset Summary\n\n- Number of repositories: 9,000\n- Size: 2.4 GB (compressed)\n- Languages: Python, JavaScript, Java, C#, C++, Ruby, PHP, Go, Swift, and Rust\n- License: Public Domain (cc0 or unlicense)## Dataset License\nThis dataset is released under the Public Domain (cc0 or unlicense) license. The original repositories are also licensed under the Public Domain (cc0 or unlicense) license. You can use this dataset for any purpose without any restrictions.## Reproducing this dataset\nThis dataset was produced by modifiying the \"github-downloader\" from EleutherAI. You can access our fork on our GitHub page\nReplication steps are included in it's readme there." ]
cf5106113f148030333c4630b18b4de87fd288d7
# Dataset Card for "HWD_test_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Baiheng/HWD_test_dataset
[ "region:us" ]
2023-08-16T00:42:56+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 96768944.55, "num_examples": 104510}], "download_size": 140564518, "dataset_size": 96768944.55}}
2023-08-16T00:45:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "HWD_test_dataset" More Information needed
[ "# Dataset Card for \"HWD_test_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"HWD_test_dataset\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"HWD_test_dataset\"\n\nMore Information needed" ]
82e8bf85141e53b48030adc4c29015b4d1881b70
# Dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!)

This is the dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/yunyun_konosuba
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T00:49:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:52+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!) This is the dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of yunyun (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
a685984ad8d8d7cbac7ed315c88556679cfa6570
# Dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!)

This is the dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/chris_konosuba
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T01:02:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!) This is the dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of chris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
417d573f7b8caf6a0f3b4d70a01d4e243dbb114a
# Dataset Card for "irisTest_pubmed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zxvix/irisTest_pubmed
[ "region:us" ]
2023-08-16T01:06:34+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D"}}}}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 147552, "num_examples": 100}, {"name": "validation", "num_bytes": 1505111, "num_examples": 1000}], "download_size": 961769, "dataset_size": 1652663}}
2023-08-19T08:48:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "irisTest_pubmed" More Information needed
[ "# Dataset Card for \"irisTest_pubmed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"irisTest_pubmed\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"irisTest_pubmed\"\n\nMore Information needed" ]
c3f2fe2f5fbfb87f4216db816471ebcfda4f3220
# Sayoko Speech Corpus

## How to download

A zip archive of the dataset is available on [gdrive](https://drive.google.com/file/d/1StMFlDH_RcBAaAyRwEIOWuTQFetVaMUU/view?usp=sharing).

It can also be downloaded from the huggingface hub with the following script.

```
# pip install --upgrade huggingface_hub
from huggingface_hub import snapshot_download
snapshot_download(repo_id="bandad/sayoko-tts-corpus", repo_type="dataset", revision="main", local_dir="./sayoko-tts-corpus")
```

## Overview

This is a speech corpus of an 81-year-old woman.

The `wav_noise` directory contains the audio as actually recorded. Since the speaker is elderly, recording took place at her home, so the files contain background noise such as bell crickets, as well as frequent lip noise. The `wav` directory contains the audio files with as much noise removed as possible. For tasks such as speech synthesis, use the audio files under the `wav` directory.

Labels containing phonemes plus prosody symbols are stored in the `phoneme` directory as `<audio file name>.txt`.

Labels containing kana (converted from the phonemes) plus prosody symbols are stored in the `kana` directory as `<audio file name>.txt`.

The accent annotations are mostly uncorrected and therefore contain errors, but we have confirmed that Text to Speech works with them.

The phonemes have been corrected manually; please contact us if you find any errors.

Contact: [email protected]

github: https://github.com/k-washi

x: https://twitter.com/kwashizzz

# Terms of use

- Free for both non-commercial and commercial use.
- Please credit "Fusic サヨ子音声コーパス" (Fusic Sayoko Speech Corpus) together with the [URL](https://huggingface.co/datasets/bandad/sayoko-tts-corpus) of this repository. For media where links cannot be placed, such as videos or printed material, omitting the link is OK.
  Example credit: `[Fusic/サヨ子音声コーパス](https://huggingface.co/datasets/bandad/sayoko-tts-corpus)`
- Do not use this corpus for erotic or grotesque works. Horror is OK. If you are unsure, please ask.
- When redistributing, distribute this README.md as well, and let us know via the contact address or DM (after the fact is fine). If anything about redistribution is unclear, please ask.
- Direct links to the audio files or other corpus contents are prohibited. To use them, download the files, upload them to your own server, and have your application reference them there.
- Feel free to use this corpus for speech-related research. No prior application is needed, even for conference presentations and the like.

# Details

## Attributes

- Female
- 81 years old

## Miscellaneous

The prosody symbols are as follows.

| Label | Description |
| --- | --- |
| ^ | start of sentence |
| $ | end of sentence |
| _ | pause |
| # | accent phrase boundary |
| [ | pitch rise |
| ] | accent nucleus |
| ? | question |

List of phonemes

```
[
 'a', 'i', 'u', 'e', 'o', 'k', 's', 't', 'n', 'h', 'm', 'y', 'r', 'w',
 'g', 'z', 'd', 'p', 'b', 'ky', 'gy', 'sh', 'j', 'ch', 'ny', 'dy', 'f',
 'hy', 'py', 'by', 'v', 'my', 'ry', 'cl', 'ty', 'N', 'ts',
]
```
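As a small illustration of the label format, here is a sketch that separates the prosody symbols defined above from the phoneme content of a label. It assumes labels are whitespace-separated token sequences, and the sample string is invented:

```
# Sketch: split a phoneme+prosody label into prosody marks and phonemes.
# The symbol set comes from the table above; the sample label is invented.
PROSODY = set("^$_#[]?")

def split_label(label):
    tokens = label.split()
    prosody = [t for t in tokens if t in PROSODY]
    phonemes = [t for t in tokens if t not in PROSODY]
    return prosody, phonemes

prosody, phonemes = split_label("^ k o [ N n i ch i w a $")
print(prosody)   # ['^', '[', '$']
print(phonemes)  # ['k', 'o', 'N', 'n', 'i', 'ch', 'i', 'w', 'a']
```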
bandad/sayoko-tts-corpus
[ "task_categories:text-to-speech", "language:ja", "license:cc-by-4.0", "region:us" ]
2023-08-16T01:11:29+00:00
{"language": ["ja"], "license": "cc-by-4.0", "task_categories": ["text-to-speech"]}
2023-08-16T04:42:35+00:00
[]
[ "ja" ]
TAGS #task_categories-text-to-speech #language-Japanese #license-cc-by-4.0 #region-us
Sayoko Speech Corpus ========== How to download -------- A zip archive of the dataset is available on gdrive. It can also be downloaded from the huggingface hub with the following script. Overview -- This is a speech corpus of an 81-year-old woman. The 'wav\_noise' directory contains the audio as actually recorded; since the speaker is elderly, it was recorded at her home and contains background noise such as bell crickets, as well as frequent lip noise. The 'wav' directory contains the audio files with as much noise removed as possible. For tasks such as speech synthesis, use the audio files under the 'wav' directory. Labels containing phonemes plus prosody symbols are stored in the 'phoneme' directory as '<audio file name>.txt'. Labels containing kana (converted from the phonemes) plus prosody symbols are stored in the 'kana' directory as '<audio file name>.txt'. The accent annotations are mostly uncorrected and therefore contain errors, but we have confirmed that Text to Speech works with them. The phonemes have been corrected manually; please contact us if you find any errors. Contact: URL@URL github: URL x: URL Terms of use ==== * Free for both non-commercial and commercial use. * Please credit "Fusic サヨ子音声コーパス" (Fusic Sayoko Speech Corpus) together with the URL of this repository. For media where links cannot be placed, such as videos or printed material, omitting the link is OK. Example credit: 'Fusic/サヨ子音声コーパス' * Do not use this corpus for erotic or grotesque works. Horror is OK. If you are unsure, please ask. * When redistributing, distribute this README.md as well, and let us know via the contact address or DM (after the fact is fine). If anything about redistribution is unclear, please ask. * Direct links to the audio files or other corpus contents are prohibited. To use them, download the files, upload them to your own server, and have your application reference them there. * Feel free to use this corpus for speech-related research. No prior application is needed, even for conference presentations and the like. Details == Attributes -- * Female * 81 years old Miscellaneous --- The prosody symbols are as follows. List of phonemes
[]
[ "TAGS\n#task_categories-text-to-speech #language-Japanese #license-cc-by-4.0 #region-us \n" ]
[ 34 ]
[ "passage: TAGS\n#task_categories-text-to-speech #language-Japanese #license-cc-by-4.0 #region-us \n" ]
9c7a7dcea25dfcfc22ce161003e19dc7cfaf1539
# Dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!)

This is the dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!), containing 40 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/iris_konosuba
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T01:15:03+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:57+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!) This is the dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!), containing 40 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 97 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of iris (Kono Subarashii Sekai ni Shukufuku wo!), containing 40 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
b30243615d905efd5b13b1db16b7ed57f2f9d4e6
# Dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)

This is the dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 174 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/ryu_lion_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T01:29:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:08:59+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 174 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 174 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 174 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 115 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of ryu_lion (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 174 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
149ed660c6dc9ff906574799a94166cf19d6a01b
# Dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!)

This is the dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!), containing 59 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/komekko_konosuba
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T01:36:43+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:01+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!) This is the dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!), containing 59 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!), containing 59 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!), containing 59 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of komekko (Kono Subarashii Sekai ni Shukufuku wo!), containing 59 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
c6362db561ec13c08c61b141cdd0db8e1af09b77
# Dataset Card for "test_mechanic" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
will33am/test_mechanic
[ "region:us" ]
2023-08-16T01:44:43+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "filepath", "dtype": "string"}, {"name": "race", "dtype": {"class_label": {"names": {"0": "asian", "1": "black", "2": "caucasian", "3": "indian"}}}}, {"name": "id", "dtype": "int64"}, {"name": "occupation", "dtype": {"class_label": {"names": {"0": "aerospace engineer", "1": "automobile engineer", "2": "civil engineer", "3": "electrical engineer", "4": "industrial engineer", "5": "mechanic", "6": "mechanical engineer", "7": "petroleum engineer"}}}}, {"name": "clip_tags_LAION_ViT_L_14_2B_ensemble_specific", "dtype": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_simple_specific", "dtype": "string"}, {"name": "clip_tags_LAION_ViT_B_32_2B_simple_specific", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 462980204.0, "num_examples": 4800}], "download_size": 462626268, "dataset_size": 462980204.0}}
2023-08-16T02:16:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_mechanic" More Information needed
[ "# Dataset Card for \"test_mechanic\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_mechanic\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_mechanic\"\n\nMore Information needed" ]
84d86d5d947e6a9b969f36446b1abec7925afa82
Removed whitespace and converted the files to UTF-8.

`chatdoctor200k-stripped-dolph.json` is the same as `chatdoctor200k-stripped.json` except its instruction is replaced to say that the AI *IS* a doctor; "**You are a doctor. Answer the medical questions based on the patient's description.**" instead of "**If you are a doctor, please answer the medical questions based on the patient's description.**"
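The -dolph variant is a mechanical rewrite of the instruction field. A sketch of that transformation, assuming the files are JSON lists of Alpaca-style `{"instruction", "input", "output"}` records:

```
# Sketch: derive the "dolph" variant by swapping the instruction string.
import json

OLD = ("If you are a doctor, please answer the medical questions "
       "based on the patient's description.")
NEW = ("You are a doctor. Answer the medical questions "
       "based on the patient's description.")

with open("chatdoctor200k-stripped.json", encoding="utf-8") as f:
    records = json.load(f)

for rec in records:
    if rec["instruction"] == OLD:
        rec["instruction"] = NEW

with open("chatdoctor200k-stripped-dolph.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False)
```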
xzuyn/chatdoctor-200k-stripped
[ "size_categories:100K<n<1M", "region:us" ]
2023-08-16T01:45:04+00:00
{"size_categories": ["100K<n<1M"]}
2023-08-16T02:03:56+00:00
[]
[]
TAGS #size_categories-100K<n<1M #region-us
Removed whitespace and converted the files to UTF-8. ' URL' is the same as ' URL' except its instruction is replaced to say that the AI *IS* a doctor; "You are a doctor. Answer the medical questions based on the patient's description." instead of "If you are a doctor, please answer the medical questions based on the patient's description."
[]
[ "TAGS\n#size_categories-100K<n<1M #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #region-us \n" ]
a2d4dd245deaf60210e61252bab5faf1312b0043
# Math Textbook

635k tiny math textbooks [Not ready yet!]

## Topics included

- Algebra
- Calculus
- Geometry
- Logic and Deduction
- Mathematical Thinking
- Probability
- Statistics
nampdn-ai/tiny-math-textbooks
[ "region:us" ]
2023-08-16T01:45:11+00:00
{}
2024-01-27T21:59:03+00:00
[]
[]
TAGS #region-us
# Math Textbook 635k of tiny math textbooks [Not ready yet!] ## Topics included - Algebra - Calculus - Geometry - Logic and Deduction - Mathematical Thinking - Probability - Statistics
[ "# Math Textbook\n\n635k of tiny math textbooks [Not ready yet!]", "## Topics included\n- Algebra\n- Calculus\n- Geometry\n- Logic and Deduction\n- Mathematical Thinking\n- Probability\n- Statistics" ]
[ "TAGS\n#region-us \n", "# Math Textbook\n\n635k of tiny math textbooks [Not ready yet!]", "## Topics included\n- Algebra\n- Calculus\n- Geometry\n- Logic and Deduction\n- Mathematical Thinking\n- Probability\n- Statistics" ]
[ 6, 19, 33 ]
[ "passage: TAGS\n#region-us \n# Math Textbook\n\n635k of tiny math textbooks [Not ready yet!]## Topics included\n- Algebra\n- Calculus\n- Geometry\n- Logic and Deduction\n- Mathematical Thinking\n- Probability\n- Statistics" ]
a7047a96648d4b045c5d62a4c5968871309468c3
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-0-10000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rounak28/bengaliAI-preprocessed-whisper-medium-0-10000
[ "region:us" ]
2023-08-16T01:49:11+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9613174127, "num_examples": 10000}], "download_size": 1375361402, "dataset_size": 9613174127}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-16T01:50:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-0-10000" More Information needed
[ "# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-10000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-10000\"\n\nMore Information needed" ]
[ 6, 27 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-10000\"\n\nMore Information needed" ]
e3fef22bf9bcf382b94632a06c10e1f69f2dcd72
# Dataset Card for "api_single_4k_truncateright" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
venkatsrini/api_single_4k_truncateright
[ "region:us" ]
2023-08-16T01:59:05+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": {"sequence": "int32"}}, {"name": "token_type_ids", "sequence": {"sequence": "int32"}}, {"name": "attention_mask", "sequence": {"sequence": {"sequence": "bool"}}}, {"name": "labels", "sequence": {"sequence": "int32"}}], "splits": [{"name": "train", "num_bytes": 5298664000, "num_examples": 2450}], "download_size": 276339066, "dataset_size": 5298664000}}
2023-08-16T02:08:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "api_single_4k_truncateright" More Information needed
[ "# Dataset Card for \"api_single_4k_truncateright\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"api_single_4k_truncateright\"\n\nMore Information needed" ]
[ 6, 22 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"api_single_4k_truncateright\"\n\nMore Information needed" ]
93984d877fa4fd9c60de5ab6e583131a176cb836
# Dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/emilia_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T02:21:03+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:03+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of emilia (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
979aae73dce1caa71c6f1f79f15d769f6fcf8bd3
# Dataset Card for Music-Audio-Pseudo Captions

`Pseudo Music and Audio Captions` from **[LP-MusicCaps](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD)**, **[Music Negation/Temporal Ordering](https://huggingface.co/datasets/mulab/diagnostic_eval_musdb)**, and **[WavCaps](https://huggingface.co/datasets/cvssp/WavCaps/tree/main/json_files)**

## Dataset Summary

Compared to other domains, the music and audio domains cannot obtain well-written web caption data, and caption annotation is expensive. Therefore, we take the music (LP-MusicCaps, Music Negation/Temporal Ordering) and audio (WavCaps) datasets created with ChatGPT and re-organize them in the form of `instructions`, `input`, and `output` (the same format as Alpaca). This dataset was created for the purpose of fine-tuning LLMs.

Updates coming soon.
seungheondoh/music-audio-pseudo-captions
[ "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:en", "license:mit", "music", "audio", "caption", "region:us" ]
2023-08-16T02:27:17+00:00
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text2text-generation"], "tags": ["music", "audio", "caption"]}
2023-08-16T02:29:49+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #size_categories-100K<n<1M #language-English #license-mit #music #audio #caption #region-us
# Dataset Card for Music-Audio-Pseudo Captions

'Pseudo Music and Audio Captions' from LP-MusicCaps, Music Negation/Temporal Ordering, and WavCaps

## Dataset Summary

Compared to other domains, the music and audio domains cannot obtain well-written web caption data, and caption annotation is expensive. Therefore, we take the music (LP-MusicCaps, Music Negation/Temporal Ordering) and audio (WavCaps) datasets created with ChatGPT and re-organize them in the form of 'instructions', 'input', and 'output' (the same format as Alpaca). This dataset was created for the purpose of fine-tuning LLMs.


Updates coming soon.
[ "# Dataset Card for Music-Audio-Pseudo Captions\n\n'Pseudo Music and Audio Captions' from LP-MusicCaps, Music Negation/Temporal Ordering, and WavCaps", "## Dataset Summary\n\nCompared to other domains, the music and audio domains cannot obtain well-written web caption data, and caption annotation is expensive. \nTherefore, we take the music (LP-MusicCaps, Music Negation/Temporal Ordering) and audio (WavCaps) datasets created with ChatGPT and re-organize them in the form of 'instructions', 'input', and 'output' (the same format as Alpaca).\n\nThis dataset was created for the purpose of fine-tuning LLMs.\n\n\nUpdates coming soon." ]
[ "TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-English #license-mit #music #audio #caption #region-us \n", "# Dataset Card for Music-Audio-Pseudo Captions\n\n'Pseudo Music and Audio Captions' from LP-MusicCaps, Music Negation/Temporal Ordering, and WavCaps", "## Dataset Summary\n\nCompared to other domains, the music and audio domains cannot obtain well-written web caption data, and caption annotation is expensive. \nTherefore, we take the music (LP-MusicCaps, Music Negation/Temporal Ordering) and audio (WavCaps) datasets created with ChatGPT and re-organize them in the form of 'instructions', 'input', and 'output' (the same format as Alpaca).\n\nThis dataset was created for the purpose of fine-tuning LLMs.\n\n\nUpdates coming soon." ]
[ 48, 44, 124 ]
[ "passage: TAGS\n#task_categories-text2text-generation #size_categories-100K<n<1M #language-English #license-mit #music #audio #caption #region-us \n# Dataset Card for Music-Audio-Pseudo Captions\n\n'Pseudo Music and Audio Captions' from LP-MusicCaps, Music Negation/Temporal Ordering, and WavCaps## Dataset Summary\n\nCompared to other domains, the music and audio domains cannot obtain well-written web caption data, and caption annotation is expensive. \nTherefore, we take the music (LP-MusicCaps, Music Negation/Temporal Ordering) and audio (WavCaps) datasets created with ChatGPT and re-organize them in the form of 'instructions', 'input', and 'output' (the same format as Alpaca).\n\nThis dataset was created for the purpose of fine-tuning LLMs.\n\n\nUpdates coming soon." ]
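The card above describes wrapping each ChatGPT-written pseudo caption in Alpaca-style `instructions` / `input` / `output` fields. A minimal sketch of what one such record could look like follows; the helper name, instruction wording, track identifier, and caption text are all invented for illustration and are not taken from the dataset:

```python
def make_record(instruction: str, source: str, caption: str) -> dict:
    """Wrap one pseudo caption in the Alpaca-style field layout the card names."""
    return {
        "instructions": instruction,  # task description given to the LLM
        "input": source,              # e.g. a track identifier or tag list
        "output": caption,            # the ChatGPT-written pseudo caption
    }

# Hypothetical example record:
example = make_record(
    "Write a one-sentence caption describing this music clip.",
    "track_00042",
    "An upbeat pop track with bright synths and a steady dance beat.",
)
print(example)
```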
879139e2bc417d2779af592eecd66b22689a8b24
# Dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/sanjouno_haruhime_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T02:53:10+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:05+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 121 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of sanjouno_haruhime (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
b0c49721c7fc829f048ed02765d88d0c97d9a0db
# Dataset Card for "canto_full_7" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thisiskeithkwan/canto_full_7
[ "region:us" ]
2023-08-16T03:01:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10781838762.132, "num_examples": 27269}], "download_size": 1417911450, "dataset_size": 10781838762.132}}
2023-08-16T05:30:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "canto_full_7" More Information needed
[ "# Dataset Card for \"canto_full_7\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"canto_full_7\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"canto_full_7\"\n\nMore Information needed" ]
2a62ed54c0568409b05eac515a19b680e2285876
# Dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 29 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/freya_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:06:23+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:07+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 29 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 29 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 29 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 111 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of freya (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 29 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
ded02e581cc4eab4ea20ce1a8b7f122c5377c359
# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/echidna_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:15:49+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:09+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 101 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of echidna (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
697da1e6714aabfc6fd7d29454d27c430d0b68f6
# Dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 28 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/yamato_mikoto_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:17:51+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:11+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 28 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 28 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 28 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 117 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of yamato_mikoto (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 28 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
f192276ca5ed8344e293749a1f15a26f4dc27f27
# Dataset Card for "DiffusionPerturbations_sample" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DiffusionPerturbations/DiffusionPerturbations_sample
[ "region:us" ]
2023-08-16T03:19:05+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "base", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "filepath", "dtype": "string"}, {"name": "occupation", "dtype": {"class_label": {"names": {"0": "chef", "1": "doctor", "2": "firefighter", "3": "mechanic", "4": "pilot"}}}}, {"name": "perceived_race", "dtype": {"class_label": {"names": {"0": "asian", "1": "black", "2": "caucasian", "3": "indian"}}}}], "splits": [{"name": "test", "num_bytes": 206463135.66666666, "num_examples": 1000}], "download_size": 111265228, "dataset_size": 206463135.66666666}}
2023-08-16T03:38:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "DiffusionPerturbations_sample" More Information needed
[ "# Dataset Card for \"DiffusionPerturbations_sample\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"DiffusionPerturbations_sample\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"DiffusionPerturbations_sample\"\n\nMore Information needed" ]
623ba3d674b8d15f66bb2c0a508bcade837d9ccf
# Dataset Card for "clean-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Back-up/clean-data
[ "region:us" ]
2023-08-16T03:23:08+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "update", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6179314, "num_examples": 24000}], "download_size": 3465652, "dataset_size": 6179314}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-17T02:49:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "clean-data" More Information needed
[ "# Dataset Card for \"clean-data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"clean-data\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"clean-data\"\n\nMore Information needed" ]
db2d5ca22e8b82cab86f5bbd9ec4eddf1e889843
# Dataset Card for "airo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TinyPixel/airoboros_llama2
[ "region:us" ]
2023-08-16T03:23:26+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 213067797, "num_examples": 59277}], "download_size": 111592267, "dataset_size": 213067797}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-15T05:01:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "airo" More Information needed
[ "# Dataset Card for \"airo\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"airo\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"airo\"\n\nMore Information needed" ]
3b50d90f7f48c23bb41d4b4c91f654d80ea5546d
# Dataset Card for "airo-1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TinyPixel/airo-1
[ "region:us" ]
2023-08-16T03:28:48+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "question_id", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 57737476, "num_examples": 34204}], "download_size": 30991700, "dataset_size": 57737476}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-02T09:26:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "airo-1" More Information needed
[ "# Dataset Card for \"airo-1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"airo-1\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"airo-1\"\n\nMore Information needed" ]
15303f3f957f9185f8820bdbe9a4615b2bde64c2
# Dataset Card for "toxicContenData" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Back-up/toxicContenData
[ "region:us" ]
2023-08-16T03:29:03+00:00
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "update", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 174657, "num_examples": 626}], "download_size": 93236, "dataset_size": 174657}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-16T03:29:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "toxicContenData" More Information needed
[ "# Dataset Card for \"toxicContenData\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"toxicContenData\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"toxicContenData\"\n\nMore Information needed" ]
152474a9a313e76acc11da756ef44d9429e30cbe
# Dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/riveria_ljos_alf_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:29:25+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 123 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of riveria_ljos_alf (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 26 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
900f273090724648657595c167d7966999a1529c
# Dataset Card for "airo-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TinyPixel/airo-2
[ "region:us" ]
2023-08-16T03:29:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59526311, "num_examples": 34204}], "download_size": 30238636, "dataset_size": 59526311}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-09-02T09:41:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "airo-2" More Information needed
[ "# Dataset Card for \"airo-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"airo-2\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"airo-2\"\n\nMore Information needed" ]
781b0baef212d28e993c3c231a5b86bd23a058f3
# Dataset Card for "apitext_multiple" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
havens2/apitext_multiple
[ "region:us" ]
2023-08-16T03:33:31+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 697585, "num_examples": 1055}], "download_size": 322725, "dataset_size": 697585}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-16T04:04:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "apitext_multiple" More Information needed
[ "# Dataset Card for \"apitext_multiple\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"apitext_multiple\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"apitext_multiple\"\n\nMore Information needed" ]
9a55b0f6eafa41548565bfe37f328caa73d0b2bc
# Dataset Card for "train-classification-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Back-up/train-classification-1k
[ "region:us" ]
2023-08-16T03:35:51+00:00
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "update", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13575089.0, "num_examples": 48009}], "download_size": 7797354, "dataset_size": 13575089.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-17T02:50:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "train-classification-1k" More Information needed
[ "# Dataset Card for \"train-classification-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"train-classification-1k\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"train-classification-1k\"\n\nMore Information needed" ]
879d09c88cd88ff547673bc7f899c825ae656b7b
# Dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 85 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/felt_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:38:05+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:15+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 85 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 85 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 85 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 97 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of felt (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 85 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
89e0ec2cc83dd907d2ba5daf646c7e63c8af8a67
# Dataset Card for "cat_kingdom" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Andyrasika/cat_kingdom
[ "region:us" ]
2023-08-16T03:41:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 224581.0, "num_examples": 9}], "download_size": 230721, "dataset_size": 224581.0}}
2023-09-18T15:14:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cat_kingdom" More Information needed
[ "# Dataset Card for \"cat_kingdom\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cat_kingdom\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cat_kingdom\"\n\nMore Information needed" ]
7f342331c64932b861209f7ceeb31d64b0b0c73d
# Dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 34 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/tiona_hiryute_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:46:04+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:17+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 34 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 34 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 34 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 119 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of tiona_hiryute (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 34 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
ec846a68b101f8acb787ae29fc812ec4642c7a55
# Dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/satella_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:53:42+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:19+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of satella (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
da9dd7d1686f7578bf3078487b10cee72b3316d3
# Dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 23 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/wiene_isitwrongtotrytopickupgirlsinadungeon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T03:58:37+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:21+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka) This is the dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 23 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 111 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka)\n\nThis is the dataset of wiene (Dungeon ni Deai wo Motomeru no wa Machigatteiru no Darou ka), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
fbf78a9d02b66e975539faf86554da2fc6b69c3c
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AndyLiu0104/Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816
[ "region:us" ]
2023-08-16T04:04:45+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14151528.625, "num_examples": 10475}], "download_size": 9077914, "dataset_size": 14151528.625}}
2023-08-16T04:04:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816" More Information needed
[ "# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816\"\n\nMore Information needed" ]
[ 6, 31 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816\"\n\nMore Information needed" ]
d600436830d1a60bc356a5a60c5c9c895a72c62b
# Dataset Card for "office-llm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TallalUsman/office-llm
[ "region:us" ]
2023-08-16T04:04:57+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 1876884.0, "num_examples": 229}, {"name": "test", "num_bytes": 213096.0, "num_examples": 26}], "download_size": 991661, "dataset_size": 2089980.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-08-16T04:05:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "office-llm" More Information needed
[ "# Dataset Card for \"office-llm\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"office-llm\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"office-llm\"\n\nMore Information needed" ]
e321283d41aa44e1fa1e2c88e70d18741059f000
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AndyLiu0104/Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2
[ "region:us" ]
2023-08-16T04:06:37+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17735426.875, "num_examples": 10481}], "download_size": 11534138, "dataset_size": 17735426.875}}
2023-08-16T04:06:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2" More Information needed
[ "# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2\"\n\nMore Information needed" ]
[ 6, 32 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-Tiny-More-Data-aug-appearance-hole-0816-2\"\n\nMore Information needed" ]
d1e1c91387a2fcd3f987e125c23f2c3895aea549
# Dataset Card for "CEBaB" This is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from [this paper](https://arxiv.org/abs/2205.14140). The most important difference from the original dataset is that the `rating` column corresponds to the _median_ rating provided by the Mechanical Turkers, rather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s, two 2s, and one 3), the original dataset used a `"no majority"` placeholder whereas we are able to provide an aggregate rating for all reviews. The exact code used to process the original dataset is provided below: ```py from ast import literal_eval from datasets import DatasetDict, Value, load_dataset def compute_median(x: str): """Compute the median rating given a multiset of ratings.""" # Decode the dictionary from string format dist = literal_eval(x) # Should be a dictionary whose keys are string-encoded integer ratings # and whose values are the number of times that the rating was observed assert isinstance(dist, dict) assert sum(dist.values()) % 2 == 1, "Number of ratings should be odd" ratings = [] for rating, count in dist.items(): ratings.extend([int(rating)] * count) ratings.sort() return ratings[len(ratings) // 2] cebab = load_dataset('CEBaB/CEBaB') assert isinstance(cebab, DatasetDict) # Remove redundant splits cebab['train'] = cebab.pop('train_inclusive') del cebab['train_exclusive'] del cebab['train_observational'] cebab = cebab.cast_column( 'original_id', Value('int32') ).map( lambda x: { # New column with inverted label for counterfactuals 'counterfactual': not x['is_original'], # Reduce the rating multiset into a single median rating 'rating': compute_median(x['review_label_distribution']) } ).map( # Replace the empty string and 'None' with Apache Arrow nulls lambda x: { k: v if v not in ('', 'no majority', 'None') else None for k, v in x.items() } ) # Sanity check that all the splits have the same columns cols = next(iter(cebab.values())).column_names assert all(split.column_names == cols for split in cebab.values()) # Clean up the names a bit cebab = cebab.rename_columns({ col: col.removesuffix('_majority').removesuffix('_aspect') for col in cols if col.endswith('_majority') }).rename_column( 'description', 'text' ) # Drop the unimportant columns cebab = cebab.remove_columns([ col for col in cols if col.endswith('_distribution') or col.endswith('_workers') ] + [ 'edit_id', 'edit_worker', 'id', 'is_original', 'opentable_metadata', 'review' ]).sort([ # Make sure counterfactual reviews come immediately after each original review 'original_id', 'counterfactual' ]) ```
EleutherAI/CEBaB
[ "task_categories:text-classification", "language:en", "license:cc-by-4.0", "arxiv:2205.14140", "region:us" ]
2023-08-16T04:13:51+00:00
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "original_id", "dtype": "int32"}, {"name": "edit_goal", "dtype": "string"}, {"name": "edit_type", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "food", "dtype": "string"}, {"name": "ambiance", "dtype": "string"}, {"name": "service", "dtype": "string"}, {"name": "noise", "dtype": "string"}, {"name": "counterfactual", "dtype": "bool"}, {"name": "rating", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 306529, "num_examples": 1673}, {"name": "test", "num_bytes": 309751, "num_examples": 1689}, {"name": "train", "num_bytes": 2282439, "num_examples": 11728}], "download_size": 628886, "dataset_size": 2898719}}
2023-08-16T22:09:21+00:00
[ "2205.14140" ]
[ "en" ]
TAGS #task_categories-text-classification #language-English #license-cc-by-4.0 #arxiv-2205.14140 #region-us
# Dataset Card for "CEBaB" This is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from this paper. The most important difference from the original dataset is that the 'rating' column corresponds to the _median_ rating provided by the Mechanical Turkers, rather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s, two 2s, and one 3), the original dataset used a '"no majority"' placeholder whereas we are able to provide an aggregate rating for all reviews. The exact code used to process the original dataset is provided below:
[ "# Dataset Card for \"CEBaB\"\n\nThis is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from this paper.\nThe most important difference from the original dataset is that the 'rating' column corresponds to the _median_ rating provided by the Mechanical Turkers,\nrather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s,\ntwo 2s, and one 3), the original dataset used a '\"no majority\"' placeholder whereas we are able to provide an aggregate rating for all reviews.\n\nThe exact code used to process the original dataset is provided below:" ]
[ "TAGS\n#task_categories-text-classification #language-English #license-cc-by-4.0 #arxiv-2205.14140 #region-us \n", "# Dataset Card for \"CEBaB\"\n\nThis is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from this paper.\nThe most important difference from the original dataset is that the 'rating' column corresponds to the _median_ rating provided by the Mechanical Turkers,\nrather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s,\ntwo 2s, and one 3), the original dataset used a '\"no majority\"' placeholder whereas we are able to provide an aggregate rating for all reviews.\n\nThe exact code used to process the original dataset is provided below:" ]
[ 39, 161 ]
[ "passage: TAGS\n#task_categories-text-classification #language-English #license-cc-by-4.0 #arxiv-2205.14140 #region-us \n# Dataset Card for \"CEBaB\"\n\nThis is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from this paper.\nThe most important difference from the original dataset is that the 'rating' column corresponds to the _median_ rating provided by the Mechanical Turkers,\nrather than the majority rating. These are the same whenever a majority rating exists, but when there is no majority rating (e.g. because there were two 1s,\ntwo 2s, and one 3), the original dataset used a '\"no majority\"' placeholder whereas we are able to provide an aggregate rating for all reviews.\n\nThe exact code used to process the original dataset is provided below:" ]
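As a quick sanity check of the no-majority example mentioned in the card above (two 1s, two 2s, and one 3), the `compute_median` helper from the card's own snippet can be exercised in isolation; the string-encoded rating multiset below is constructed to match that example:

```python
from ast import literal_eval

def compute_median(x: str):
    """Median rating from a string-encoded {rating: count} multiset."""
    dist = literal_eval(x)
    ratings = []
    for rating, count in dist.items():
        ratings.extend([int(rating)] * count)
    ratings.sort()
    return ratings[len(ratings) // 2]

# Two 1s, two 2s, and one 3: no majority rating exists,
# but the median is well defined.
print(compute_median("{'1': 2, '2': 2, '3': 1}"))  # prints 2
```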
5b04ce2936c808732dea6735d6407c44d742ea3c
# Dataset Card for "vivos-fleurs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JRHuy/vivos-fleurs
[ "region:us" ]
2023-08-16T04:14:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3812520903.0, "num_examples": 14654}, {"name": "test", "num_bytes": 778309245.448, "num_examples": 1617}, {"name": "validation", "num_bytes": 275255625.0, "num_examples": 361}], "download_size": 4811668493, "dataset_size": 4866085773.448}}
2023-08-16T04:18:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vivos-fleurs" More Information needed
[ "# Dataset Card for \"vivos-fleurs\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"vivos-fleurs\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"vivos-fleurs\"\n\nMore Information needed" ]
27730921709a88afd081874b43ccfaedf35621c3
# Dataset Card for "stocks_one_nvda_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
achang/stocks_one_nvda_v2
[ "region:us" ]
2023-08-16T04:16:13+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2469069, "num_examples": 1538}], "download_size": 148179, "dataset_size": 2469069}}
2023-08-16T05:50:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "stocks_one_nvda_v2" More Information needed
[ "# Dataset Card for \"stocks_one_nvda_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"stocks_one_nvda_v2\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"stocks_one_nvda_v2\"\n\nMore Information needed" ]
c63848b727dca99792b3b2cb3707ab2650413bff
# Dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 99 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/priscilla_barielle_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T04:18:26+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:23+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 99 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 99 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 99 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 105 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of priscilla_barielle (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 99 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
e0de49d272162411d471ad4185dd578174a2a247
# Dataset Card for "cntt2-fleurs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JRHuy/cntt2-fleurs
[ "region:us" ]
2023-08-16T04:23:05+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7950015165.473, "num_examples": 5211}, {"name": "test", "num_bytes": 1345940254.064, "num_examples": 1104}, {"name": "validation", "num_bytes": 275255625.0, "num_examples": 361}], "download_size": 9266054751, "dataset_size": 9571211044.536999}}
2023-08-16T04:31:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cntt2-fleurs" More Information needed
[ "# Dataset Card for \"cntt2-fleurs\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cntt2-fleurs\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cntt2-fleurs\"\n\nMore Information needed" ]
0f9d8a2973dd0edbe3bc27e7e6c13186fa645251
# Dataset Card for "Open-Platypus-Chat" This is the [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset converted to sharegpt format, with a handful of potential refusals removed. All credit to the OpenPlatypus team and the original authors of the various component datasets.
chargoddard/Open-Platypus-Chat
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:other", "region:us" ]
2023-08-16T04:23:15+00:00
{"language": ["en"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-generation"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30710879, "num_examples": 24887}], "download_size": 15122740, "dataset_size": 30710879}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-16T04:23:17+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-other #region-us
# Dataset Card for "Open-Platypus-Chat" This is the Open-Platypus dataset converted to sharegpt format, with a handful of potential refusals removed. All credit to the OpenPlatypus team and the original authors of the various component datasets.
[ "# Dataset Card for \"Open-Platypus-Chat\"\n\nThis is the Open-Platypus dataset converted to sharegpt format, with a handful of potential refusals removed. All credit to the OpenPlatypus team and the original authors of the various component datasets." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-other #region-us \n", "# Dataset Card for \"Open-Platypus-Chat\"\n\nThis is the Open-Platypus dataset converted to sharegpt format, with a handful of potential refusals removed. All credit to the OpenPlatypus team and the original authors of the various component datasets." ]
[ 50, 64 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-other #region-us \n# Dataset Card for \"Open-Platypus-Chat\"\n\nThis is the Open-Platypus dataset converted to sharegpt format, with a handful of potential refusals removed. All credit to the OpenPlatypus team and the original authors of the various component datasets." ]
66aad2e68731030de4c4fb483389ca34e1e0d9bf
# Dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 52 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/petra_leyte_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T04:31:32+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:25+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 52 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 105 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of petra_leyte (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
1519f1d788163968e645fea17957c147c25e41a0
# Dataset Card for "movie_discriminator" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
satvikp/movie_discriminator
[ "region:us" ]
2023-08-16T04:35:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18844, "num_examples": 101}], "download_size": 8543, "dataset_size": 18844}}
2023-08-16T04:38:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "movie_discriminator" More Information needed
[ "# Dataset Card for \"movie_discriminator\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"movie_discriminator\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"movie_discriminator\"\n\nMore Information needed" ]
7109e93fa356848975e3a5d0762479e0dbf7d6aa
# Dataset Card for "shrutilipi-whisper" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
makaveli10/shrutilipi-whisper
[ "region:us" ]
2023-08-16T04:37:39+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 157535021960.908, "num_examples": 231324}, {"name": "validation", "num_bytes": 47062936319.286, "num_examples": 68882}], "download_size": 506481590, "dataset_size": 204597958280.194}}
2023-08-16T14:18:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "shrutilipi-whisper" More Information needed
[ "# Dataset Card for \"shrutilipi-whisper\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"shrutilipi-whisper\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"shrutilipi-whisper\"\n\nMore Information needed" ]
ab947b0e64248c38843cc39976c915f8bac04f04
# Dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/theresia_van_astrea_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T04:43:15+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:27+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 109 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of theresia_van_astrea (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 49 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
54639da1f2fcfdedb6ecbd784bcf9beb1852ee8b
# Dataset Card for "ScienceQA_Test_IMG" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
leonardPKU/ScienceQA_Test_IMG
[ "region:us" ]
2023-08-16T04:55:52+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "hint", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "grade", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "skill", "dtype": "string"}, {"name": "lecture", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 135202551.875, "num_examples": 2017}], "download_size": 130142579, "dataset_size": 135202551.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-08-16T04:56:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ScienceQA_Test_IMG" More Information needed
[ "# Dataset Card for \"ScienceQA_Test_IMG\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ScienceQA_Test_IMG\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ScienceQA_Test_IMG\"\n\nMore Information needed" ]
b2f16189ccb882ab00a00c3a0124ee089f68da3b
# Dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/shaula_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T04:59:08+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:29+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 99 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of shaula (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
7cc2467bc5f64eceb2f2f7a6f9a4792a3810f0e5
# Dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 23 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/meili_portroute_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T05:05:37+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:31+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 23 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 105 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of meili_portroute (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 23 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
89bf4aba65a1a69bedf46ed23cd40fddafa8c4fc
# Dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu)

This is the dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 20 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
CyberHarem/mimi_pearlbaton_rezero
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T05:11:55+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:33+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
# Dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu) This is the dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 20 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[ "# Dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 20 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "# Dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 20 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
[ 44, 109 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of mimi_pearlbaton (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 20 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)." ]
97db5d875c5ad984ac0ab29a9603f5330e9933e3
# Dataset Card for DataComp_medium_pool_BLIP2_captions

## Dataset Description

- **Paper: https://arxiv.org/abs/2307.10350**
- **Leaderboard: https://www.datacomp.ai/leaderboard.html**
- **Point of Contact: Thao Nguyen ([email protected])**

### Dataset Summary

### Supported Tasks and Leaderboards

We have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp. Refer to the DataComp leaderboard (https://www.datacomp.ai/leaderboard.html) for the top baselines uncovered in our work.

### Languages

Primarily English.

## Dataset Structure

### Data Instances

Each instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption generated with different softmax temperatures (see the filename for the temperature value used).

### Data Fields

uid: SHA256 hash of image, provided as metadata by the DataComp team.

blip2-cap: corresponding caption generated by BLIP2.

### Data Splits

Data was not split. The dataset is intended for pre-training multimodal models.

## Dataset Creation

### Curation Rationale

Web-crawled image-text data can contain a lot of noise, i.e. the caption may not reflect the content of the respective image. Filtering out noisy web data, however, can hurt the diversity of the training set. To address both of these issues, we use image captioning models to increase the number of useful training samples from the initial pool, by ensuring the captions are more relevant to the images. Our work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data, in the context of CLIP pre-training.

### Source Data

#### Initial Data Collection and Normalization

The original 128M image-text pairs were collected by the DataComp team from Common Crawl. Minimal filtering was performed on the initial data pool (face blurring, NSFW removal, train-test deduplication). We then replaced the original web-crawled captions with synthetic captions generated by BLIP2.

#### Who are the source language producers?

Common Crawl is the source for images. BLIP2 is the source of the text data.

### Annotations

#### Annotation process

The dataset was built in a fully automated process: captions are generated by the BLIP2 captioning model.

#### Who are the annotators?

No human annotators are involved.

### Personal and Sensitive Information

The images, which we inherit from the DataComp benchmark, already underwent face detection and face blurring. While the DataComp team made an attempt to remove NSFW instances, it is possible that such content may still exist (to a small degree) in this dataset. Due to the large-scale nature of this dataset, the content has not been manually verified to be completely safe. Therefore, it is strongly recommended that this dataset be used only for research purposes.

## Considerations for Using the Data

### Social Impact of Dataset

The publication contains some preliminary analyses of the fairness implications of training on this dataset, when evaluating on Fairface.

### Discussion of Biases

Refer to the publication for more details.

### Other Known Limitations

Refer to the publication for more details.

## Additional Information

### Citation Information

```bibtex
@article{nguyen2023improving,
  title={Improving Multimodal Datasets with Image Captioning},
  author={Nguyen, Thao and Gadre, Samir Yitzhak and Ilharco, Gabriel and Oh, Sewoong and Schmidt, Ludwig},
  journal={arXiv preprint arXiv:2307.10350},
  year={2023}
}
```
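As a rough illustration of the captioning process described above, the sketch below samples one BLIP2 caption with the `transformers` library. The checkpoint size and the temperature value are assumptions; the card only states that BLIP2 was run with different softmax temperatures (recorded in the filename).

```python
# Temperature-sampled BLIP2 captioning sketch; checkpoint and temperature
# are illustrative assumptions, not the authors' exact configuration.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "Salesforce/blip2-opt-2.7b"  # assumed checkpoint
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg").convert("RGB")  # any local image
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

generated = model.generate(
    **inputs,
    do_sample=True,    # softmax sampling, as the card describes
    temperature=0.75,  # assumed value; the real one is encoded in the filename
    max_new_tokens=40,
)
print(processor.batch_decode(generated, skip_special_tokens=True)[0].strip())
```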
thaottn/DataComp_medium_pool_BLIP2_captions
[ "task_categories:image-to-text", "task_categories:zero-shot-classification", "size_categories:100M<n<1B", "license:cc-by-4.0", "arxiv:2307.10350", "region:us" ]
2023-08-16T05:25:07+00:00
{"license": "cc-by-4.0", "size_categories": ["100M<n<1B"], "task_categories": ["image-to-text", "zero-shot-classification"]}
2023-09-01T00:03:49+00:00
[ "2307.10350" ]
[]
TAGS #task_categories-image-to-text #task_categories-zero-shot-classification #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2307.10350 #region-us
# Dataset Card for DataComp_medium_pool_BLIP2_captions ## Dataset Description - Paper: URL - Leaderboard: URL - Point of Contact: Thao Nguyen (thaottn@URL) ### Dataset Summary ### Supported Tasks and Leaderboards We have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp. Refer to the DataComp leaderboard (URL for the top baselines uncovered in our work. ### Languages Primarily English. ## Dataset Structure ### Data Instances Each instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption generated with different softmax temperatures (see the filename for the temperature value used). ### Data Fields uid: SHA256 hash of image, provided as metadata by the DataComp team. blip2-cap: corresponding caption generated by BLIP2. ### Data Splits Data was not split. The dataset is intended for pre-training multimodal models. ## Dataset Creation ### Curation Rationale Web-crawled image-text data can contain a lot of noise, i.e. the caption may not reflect the content of the respective image. Filtering out noisy web data, however, can hurt the diversity of the training set. To address both of these issues, we use image captioning models to increase the number of useful training samples from the initial pool, by ensuring the captions are more relevant to the images. Our work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data, in the context of CLIP pre-training. ### Source Data #### Initial Data Collection and Normalization The original 128M image-text pairs were collected by the DataComp team from Common Crawl. Minimal filtering was performed on the initial data pool (face blurring, NSFW removal, train-test deduplication). We then replaced the original web-crawled captions with synthetic captions generated by BLIP2. #### Who are the source language producers? Common Crawl is the source for images. BLIP2 is the source of the text data. ### Annotations #### Annotation process The dataset was built in a fully automated process: captions are generated by the BLIP2 captioning model. #### Who are the annotators? No human annotators are involved. ### Personal and Sensitive Information The images, which we inherit from the DataComp benchmark, already underwent face detection and face blurring. While the DataComp team made an attempt to remove NSFW instances, it is possible that such content may still exist (to a small degree) in this dataset. Due to the large scale nature of this dataset, the content has not been manually verified to be completely safe. Therefore, it is strongly recommended that this dataset be used only for research purposes. ## Considerations for Using the Data ### Social Impact of Dataset The publication contains some preliminary analyses of the fairness implication of training on this dataset, when evaluating on Fairface. ### Discussion of Biases Refer to the publication for more details. ### Other Known Limitations Refer to the publication for more details. ## Additional Information
[ "# Dataset Card for DataComp_medium_pool_BLIP2_captions", "## Dataset Description\n\n- Paper: URL\n- Leaderboard: URL \n- Point of Contact: Thao Nguyen (thaottn@URL)", "### Dataset Summary", "### Supported Tasks and Leaderboards\n\nWe have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp.\nRefer to the DataComp leaderboard (URL for the top baselines uncovered in our work.", "### Languages\n\nPrimarily English.", "## Dataset Structure", "### Data Instances\n\nEach instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption generated with different softmax temperatures (see the filename for the temperature value used).", "### Data Fields\n\nuid: SHA256 hash of image, provided as metadata by the DataComp team.\n\nblip2-cap: corresponding caption generated by BLIP2.", "### Data Splits\n\nData was not split. The dataset is intended for pre-training multimodal models.", "## Dataset Creation", "### Curation Rationale\n\nWeb-crawled image-text data can contain a lot of noise, i.e. the caption may not reflect the content of the respective image. Filtering out noisy web data, however, can hurt the diversity of the training set.\nTo address both of these issues, we use image captioning models to increase the number of useful training samples from the initial pool, by ensuring the captions are more relevant to the images. \nOur work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data, in the context of CLIP pre-training.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original 128M image-text pairs were collected by the DataComp team from Common Crawl. Minimal filtering was performed on the initial data pool (face blurring, NSFW removal, train-test deduplication).\nWe then replaced the original web-crawled captions with synthetic captions generated by BLIP2.", "#### Who are the source language producers?\n\nCommon Crawl is the source for images. BLIP2 is the source of the text data.", "### Annotations", "#### Annotation process\n\nThe dataset was built in a fully automated process: captions are generated by the BLIP2 captioning model.", "#### Who are the annotators?\n\nNo human annotators are involved.", "### Personal and Sensitive Information\n\nThe images, which we inherit from the DataComp benchmark, already underwent face detection and face blurring. While the DataComp team made an attempt to remove NSFW instances, it is possible that such content may still exist (to a small degree) in this dataset. \nDue to the large scale nature of this dataset, the content has not been manually verified to be completely safe. Therefore, it is strongly recommended that this dataset be used only for research purposes.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe publication contains some preliminary analyses of the fairness implication of training on this dataset, when evaluating on Fairface.", "### Discussion of Biases\n\nRefer to the publication for more details.", "### Other Known Limitations\n\nRefer to the publication for more details.", "## Additional Information" ]
[ "TAGS\n#task_categories-image-to-text #task_categories-zero-shot-classification #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2307.10350 #region-us \n", "# Dataset Card for DataComp_medium_pool_BLIP2_captions", "## Dataset Description\n\n- Paper: URL\n- Leaderboard: URL \n- Point of Contact: Thao Nguyen (thaottn@URL)", "### Dataset Summary", "### Supported Tasks and Leaderboards\n\nWe have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp.\nRefer to the DataComp leaderboard (URL for the top baselines uncovered in our work.", "### Languages\n\nPrimarily English.", "## Dataset Structure", "### Data Instances\n\nEach instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption generated with different softmax temperatures (see the filename for the temperature value used).", "### Data Fields\n\nuid: SHA256 hash of image, provided as metadata by the DataComp team.\n\nblip2-cap: corresponding caption generated by BLIP2.", "### Data Splits\n\nData was not split. The dataset is intended for pre-training multimodal models.", "## Dataset Creation", "### Curation Rationale\n\nWeb-crawled image-text data can contain a lot of noise, i.e. the caption may not reflect the content of the respective image. Filtering out noisy web data, however, can hurt the diversity of the training set.\nTo address both of these issues, we use image captioning models to increase the number of useful training samples from the initial pool, by ensuring the captions are more relevant to the images. \nOur work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data, in the context of CLIP pre-training.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original 128M image-text pairs were collected by the DataComp team from Common Crawl. Minimal filtering was performed on the initial data pool (face blurring, NSFW removal, train-test deduplication).\nWe then replaced the original web-crawled captions with synthetic captions generated by BLIP2.", "#### Who are the source language producers?\n\nCommon Crawl is the source for images. BLIP2 is the source of the text data.", "### Annotations", "#### Annotation process\n\nThe dataset was built in a fully automated process: captions are generated by the BLIP2 captioning model.", "#### Who are the annotators?\n\nNo human annotators are involved.", "### Personal and Sensitive Information\n\nThe images, which we inherit from the DataComp benchmark, already underwent face detection and face blurring. While the DataComp team made an attempt to remove NSFW instances, it is possible that such content may still exist (to a small degree) in this dataset. \nDue to the large scale nature of this dataset, the content has not been manually verified to be completely safe. Therefore, it is strongly recommended that this dataset be used only for research purposes.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe publication contains some preliminary analyses of the fairness implication of training on this dataset, when evaluating on Fairface.", "### Discussion of Biases\n\nRefer to the publication for more details.", "### Other Known Limitations\n\nRefer to the publication for more details.", "## Additional Information" ]
[ 60, 19, 28, 6, 77, 9, 6, 46, 39, 24, 5, 137, 4, 86, 30, 5, 31, 17, 114, 8, 36, 16, 15, 5 ]
[ "passage: TAGS\n#task_categories-image-to-text #task_categories-zero-shot-classification #size_categories-100M<n<1B #license-cc-by-4.0 #arxiv-2307.10350 #region-us \n# Dataset Card for DataComp_medium_pool_BLIP2_captions## Dataset Description\n\n- Paper: URL\n- Leaderboard: URL \n- Point of Contact: Thao Nguyen (thaottn@URL)### Dataset Summary### Supported Tasks and Leaderboards\n\nWe have used this dataset for pre-training CLIP models and found that it rivals or outperforms models trained on raw web captions on average across the 38 evaluation tasks proposed by DataComp.\nRefer to the DataComp leaderboard (URL for the top baselines uncovered in our work.### Languages\n\nPrimarily English.## Dataset Structure### Data Instances\n\nEach instance maps a unique image identifier from DataComp to the corresponding BLIP2 caption generated with different softmax temperatures (see the filename for the temperature value used).### Data Fields\n\nuid: SHA256 hash of image, provided as metadata by the DataComp team.\n\nblip2-cap: corresponding caption generated by BLIP2.### Data Splits\n\nData was not split. The dataset is intended for pre-training multimodal models.## Dataset Creation### Curation Rationale\n\nWeb-crawled image-text data can contain a lot of noise, i.e. the caption may not reflect the content of the respective image. Filtering out noisy web data, however, can hurt the diversity of the training set.\nTo address both of these issues, we use image captioning models to increase the number of useful training samples from the initial pool, by ensuring the captions are more relevant to the images. \nOur work systematically explores the effectiveness of using these synthetic captions to replace or complement the raw text data, in the context of CLIP pre-training.### Source Data" ]
aaff3487e9bfd614c00564697061e5e3af6fac12
# Language Models for Taiwanese Culture <p align="center"> ✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a> • 🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a> • 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a> <br/><br/> <img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/> <a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE"> <img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a> <a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE"> <img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a> <br/> </p> ## Overview Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications. **Taiwan-LLaMa v1.0** pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations both in traditional mandarin. ## Demo A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2). ## Key Features 1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications. 2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses. 3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture. 4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo. ## Work in progress - [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance for better Taiwanese culture. - [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k. ## Taiwanese Culture Examples | Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat | |:-|:-|:-|:-|:-| | 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 | | 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 | | NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。| | 甚麼是22K? 
| "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。| | 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 | ## Model We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2: | **Model** | **13B** | |--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------| | **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> | | Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> | | Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> | ## Data Here are some quick links to the datasets that we used to train the models: | **Dataset** | **Link** | |---------------------------------|-------------------------------------------------------------------------------------------------------------------------------| | **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> | | Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> | ## Architecture Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16. It includes: * Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin. * Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses. ## Generic Capabilities on Vicuna Benchmark The data is translated into traditional mandarin for evaluating the general capability. <img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700"> The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT. | Language Model | Relative Score (%) | |-------------------------------------|--------------------| | GPT-4 | 102.59% | | ChatGPT | 100.00% | | **Taiwan-LLaMa v1.0** | 76.76% | | Claude-Instant-1.2 | 74.04% | | Llama2_Traditional_Chinese_13b_Chat | 56.21% | ## How to deploy the model on my own machine? We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). 
Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations. ```bash bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN ``` Prompt format follows vicuna-v1.1 template: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT: ``` ## Setup development environment ```bash conda create -n taiwan-llama python=3.10 -y conda activate taiwan-llama pip install -r requirements.txt ``` ## Citations If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry: ```bibtex @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.nlp4convai-1.5", pages = "47--58" } @misc{taiwanllama, author={Lin, Yen-Ting and Chen, Yun-Nung}, title={Taiwanese-Aligned Language Models based on Meta-Llama2}, year={2023}, url={https://github.com/adamlin120/Taiwan-LLaMa}, note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa}, } ``` ## Collaborate With Us If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors. ## License The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details. The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details. ## OpenAI Data Acknowledgment The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations. ## Acknowledgements We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
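As a rough illustration of the deployment flow above, the sketch below builds a vicuna-v1.1 prompt and queries a locally hosted Text Generation Inference server over its `/generate` endpoint. The host, port, and sampling parameters are assumptions for illustration.

```python
# Sketch of querying a TGI server hosting the model, using the vicuna-v1.1
# template shown above. Endpoint and parameters are assumed, not prescribed.
import requests

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(user_message: str) -> str:
    # vicuna-v1.1 layout: system text, then USER/ASSISTANT turns.
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

response = requests.post(
    "http://localhost:8080/generate",  # assumed PORT from the launch script
    json={
        "inputs": build_prompt("NTU 在哪?"),
        "parameters": {"max_new_tokens": 256, "temperature": 0.7},
    },
    timeout=120,
)
print(response.json()["generated_text"])
```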
botp/yentinglin-zh_TW_c4
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:zh", "license:odc-by", "arxiv:2305.13711", "arxiv:2104.09864", "region:us" ]
2023-08-16T05:55:04+00:00
{"language": ["zh"], "license": "odc-by", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "Traditional Chinese C4", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "zh_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15132113806.438198, "num_examples": 5179546}, {"name": "validation", "num_bytes": 14820238.368907347, "num_examples": 5099}], "download_size": 5569491261, "dataset_size": 15146934044.807106}, "duplicated_from": "yentinglin/zh_TW_c4"}
2023-08-16T05:55:04+00:00
[ "2305.13711", "2104.09864" ]
[ "zh" ]
TAGS #task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #license-odc-by #arxiv-2305.13711 #arxiv-2104.09864 #region-us
Language Models for Taiwanese Culture ===================================== ️ [Online Demo](URL target=) • [HF Repo](URL target=) • [Twitter](URL target=) • [[Paper Coming Soon]](URL target=) • ️ [Yen-Ting Lin](URL target=) ![](URL width=) Overview -------- Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications. Taiwan-LLaMa v1.0 pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations both in traditional mandarin. Demo ---- A live demonstration of the model can be accessed at Hugging Face Spaces. Key Features ------------ 1. Traditional Mandarin Support: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications. 2. Instruction-Tuned: Further fine-tuned on conversational data to offer context-aware and instruction-following responses. 3. Performance on Vicuna Benchmark: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture. 4. Flexible Customization: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo. Work in progress ---------------- * [ ] Improved pretraining: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance for better Taiwanese culture. * [ ] Extend max length: Utilizing the Rope mechanism as described in the paper, the model's length will be extended from 4k to 8k. Taiwanese Culture Examples -------------------------- Model ----- We provide a number of model checkpoints that we trained. Please find them on Hugging Face here. Here are some quick links to the checkpoints that are finetuned from LLaMa 2: Data ---- Here are some quick links to the datasets that we used to train the models: Architecture ------------ Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="URL target="\_blank">flash attention 2, and bfloat16. It includes: * Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin. * Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses. Generic Capabilities on Vicuna Benchmark ---------------------------------------- The data is translated into traditional mandarin for evaluating the general capability. ![](./images/zhtw_vicuna_bench_chatgptbaseline.png) The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT. How to deploy the model on my own machine? ------------------------------------------ We recommend hosting models with Text Generation Inference. Please see their license for details on usage and limitations. Prompt format follows vicuna-v1.1 template: Setup development environment ----------------------------- s If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry: Collaborate With Us ------------------- If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. 
We welcome collaborations from academia, industry, and individual contributors. License ------- The code in this project is licensed under the Apache 2.0 License - see the LICENSE file for details. The models included in this project are licensed under the LLAMA 2 Community License. See the LLAMA2 License for full details. OpenAI Data Acknowledgment -------------------------- The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review OpenAI's Terms of Use for details on usage and limitations. Acknowledgements ---------------- We thank Meta LLaMA team and Vicuna team for their open-source efforts in democratizing large language models.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #license-odc-by #arxiv-2305.13711 #arxiv-2104.09864 #region-us \n" ]
[ 59 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #license-odc-by #arxiv-2305.13711 #arxiv-2104.09864 #region-us \n" ]
e3bd70f7405a5adfc9dd8e7ed456e9012ebcab32
# Language Models for Taiwanese Culture <p align="center"> ✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a> • 🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a> • 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a> <br/><br/> <img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/> <a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE"> <img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a> <a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE"> <img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a> <br/> </p> ## Overview Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications. **Taiwan-LLaMa v1.0** pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations both in traditional mandarin. ## Demo A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2). ## Key Features 1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications. 2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses. 3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture. 4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo. ## Work in progress - [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance for better Taiwanese culture. - [ ] **Extend max length**: Utilizing the Rope mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's length will be extended from 4k to 8k. ## Taiwanese Culture Examples | Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat | |:-|:-|:-|:-|:-| | 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 | | 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 | | NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。| | 甚麼是22K? 
| "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。| | 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 | ## Model We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2: | **Model** | **13B** | |--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------| | **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> | | Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> | | Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> | ## Data Here are some quick links to the datasets that we used to train the models: | **Dataset** | **Link** | |---------------------------------|-------------------------------------------------------------------------------------------------------------------------------| | **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> | | Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> | ## Architecture Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16. It includes: * Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin. * Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses. ## Generic Capabilities on Vicuna Benchmark The data is translated into traditional mandarin for evaluating the general capability. <img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700"> The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT. | Language Model | Relative Score (%) | |-------------------------------------|--------------------| | GPT-4 | 102.59% | | ChatGPT | 100.00% | | **Taiwan-LLaMa v1.0** | 76.76% | | Claude-Instant-1.2 | 74.04% | | Llama2_Traditional_Chinese_13b_Chat | 56.21% | ## How to deploy the model on my own machine? We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). 
Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations. ```bash bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN ``` Prompt format follows vicuna-v1.1 template: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT: ``` ## Setup development environment ```bash conda create -n taiwan-llama python=3.10 -y conda activate taiwan-llama pip install -r requirements.txt ``` ## Citations If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry: ```bibtex @inproceedings{lin-chen-2023-llm, title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models", author = "Lin, Yen-Ting and Chen, Yun-Nung", booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.nlp4convai-1.5", pages = "47--58" } @misc{taiwanllama, author={Lin, Yen-Ting and Chen, Yun-Nung}, title={Taiwanese-Aligned Language Models based on Meta-Llama2}, year={2023}, url={https://github.com/adamlin120/Taiwan-LLaMa}, note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa}, } ``` ## Collaborate With Us If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors. ## License The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details. The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details. ## OpenAI Data Acknowledgment The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations. ## Acknowledgements We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
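Since this repository mirrors the instruction-tuning set linked above, a minimal sketch for pulling it down and inspecting the schema is shown below; the split and column names are not documented in the card, so the code prints them instead of assuming them.

```python
# Load-and-inspect sketch; no field names are assumed beyond what datasets reports.
from datasets import load_dataset

ds = load_dataset("yentinglin/traditional_mandarin_instructions")
print(ds)                # available splits and row counts
first = next(iter(ds.values()))
print(first.features)    # column schema
print(first[0])          # one raw example
```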
botp/yentinglin-traditional_mandarin_instructions
[ "task_categories:conversational", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:zh", "license:cc-by-nc-4.0", "arxiv:2305.13711", "arxiv:2104.09864", "region:us" ]
2023-08-16T05:59:16+00:00
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["conversational", "text-generation", "text2text-generation"], "pretty_name": "Traditional Chinese Instruction-tuning Set", "duplicated_from": "yentinglin/traditional_mandarin_instructions"}
2023-08-16T05:59:16+00:00
[ "2305.13711", "2104.09864" ]
[ "zh" ]
TAGS #task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #license-cc-by-nc-4.0 #arxiv-2305.13711 #arxiv-2104.09864 #region-us
Language Models for Taiwanese Culture ===================================== ️ [Online Demo](URL target=) • [HF Repo](URL target=) • [Twitter](URL target=) • [[Paper Coming Soon]](URL target=) • ️ [Yen-Ting Lin](URL target=) ![](URL width=) Overview -------- Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications. Taiwan-LLaMa v1.0 pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations both in traditional mandarin. Demo ---- A live demonstration of the model can be accessed at Hugging Face Spaces. Key Features ------------ 1. Traditional Mandarin Support: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications. 2. Instruction-Tuned: Further fine-tuned on conversational data to offer context-aware and instruction-following responses. 3. Performance on Vicuna Benchmark: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture. 4. Flexible Customization: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo. Work in progress ---------------- * [ ] Improved pretraining: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance for better Taiwanese culture. * [ ] Extend max length: Utilizing the Rope mechanism as described in the paper, the model's length will be extended from 4k to 8k. Taiwanese Culture Examples -------------------------- Model ----- We provide a number of model checkpoints that we trained. Please find them on Hugging Face here. Here are some quick links to the checkpoints that are finetuned from LLaMa 2: Data ---- Here are some quick links to the datasets that we used to train the models: Architecture ------------ Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="URL target="\_blank">flash attention 2, and bfloat16. It includes: * Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from common crawl in Traditional Mandarin. * Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses. Generic Capabilities on Vicuna Benchmark ---------------------------------------- The data is translated into traditional mandarin for evaluating the general capability. ![](./images/zhtw_vicuna_bench_chatgptbaseline.png) The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT. How to deploy the model on my own machine? ------------------------------------------ We recommend hosting models with Text Generation Inference. Please see their license for details on usage and limitations. Prompt format follows vicuna-v1.1 template: Setup development environment ----------------------------- s If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry: Collaborate With Us ------------------- If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. 
We welcome collaborations from academia, industry, and individual contributors. License ------- The code in this project is licensed under the Apache 2.0 License - see the LICENSE file for details. The models included in this project are licensed under the LLAMA 2 Community License. See the LLAMA2 License for full details. OpenAI Data Acknowledgment -------------------------- The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review OpenAI's Terms of Use for details on usage and limitations. Acknowledgements ---------------- We thank Meta LLaMA team and Vicuna team for their open-source efforts in democratizing large language models.
[]
[ "TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #license-cc-by-nc-4.0 #arxiv-2305.13711 #arxiv-2104.09864 #region-us \n" ]
[ 85 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #license-cc-by-nc-4.0 #arxiv-2305.13711 #arxiv-2104.09864 #region-us \n" ]
555fcd2aca72c052098d637f5aa8f5f1942d2271
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
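Although the template above has not been filled in, the repository metadata (reproduced below) declares a COCO-style object-detection schema with `train`/`test`/`val` splits. The following loading sketch is based solely on that declared schema; the field names, split names, and category list are taken from the metadata and should be treated as assumptions until verified:

```python
# Tentative sketch: load one annotated example from this detection dataset.
# Field/split names and the category list come from the declared schema.
from datasets import load_dataset

# Class-label order as declared in the dataset metadata.
CATEGORIES = ["hop_dln", "hop_jn", "hop_vtg", "hop_ytv", "lo_kids",
              "lo_ytv", "loc_dln", "loc_jn", "loc_kids", "loc_ytv"]

ds = load_dataset("chanelcolgate/yenthienviet", split="val")
ex = ds[0]
print(ex["image_id"], ex["width"], ex["height"])

# "objects" holds parallel per-detection lists: id, area, bbox, category.
for label, bbox in zip(ex["objects"]["category"], ex["objects"]["bbox"]):
    print(CATEGORIES[label], bbox)  # bbox is a 4-float sequence
```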
chanelcolgate/yenthienviet
[ "task_categories:object-detection", "region:us" ]
2023-08-16T06:00:08+00:00
{"task_categories": ["object-detection"], "pretty_name": "YENTHIENVIET", "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": {"class_label": {"names": {"0": "hop_dln", "1": "hop_jn", "2": "hop_vtg", "3": "hop_ytv", "4": "lo_kids", "5": "lo_ytv", "6": "loc_dln", "7": "loc_jn", "8": "loc_kids", "9": "loc_ytv"}}}}]}], "splits": [{"name": "train", "num_bytes": 649109264, "num_examples": 1755}, {"name": "test", "num_bytes": 51031547, "num_examples": 152}, {"name": "val", "num_bytes": 53894489, "num_examples": 159}], "download_size": 727024589, "dataset_size": 754035300}}
2023-12-29T04:10:06+00:00
[]
[]
TAGS #task_categories-object-detection #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-object-detection #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 17, 8, 24, 32, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-object-detection #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
e9b96a6f1c60b809e4677103bbcf53d70f362e4c
# HypothesesParadise

- We openly invite the community to submit, via git, their n-best hypothesis lists to this open resource for public use.
- If you find this work related or useful to your research, please consider citing the NeurIPS 2023 paper below. Thank you.

```bib
@inproceedings{chen2023hyporadise,
  title={HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models},
  author={CHEN, CHEN and Hu, Yuchen and Yang, Chao-Han Huck and Siniscalchi, Sabato Marco and Chen, Pin-Yu and Chng, Ensiong},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023}
}
```
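To make the intended use of the n-best lists concrete, here is an illustrative, unofficial sketch of the hypotheses-to-transcription setup the paper studies: an ASR n-best list is formatted into a prompt asking a language model to infer the true transcription. The prompt wording below is our own illustration, not the exact template from the paper:

```python
# Illustrative only: format an ASR n-best list into a generative
# error-correction prompt (hypotheses-to-transcription).
def nbest_to_prompt(hypotheses: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hypotheses))
    return (
        "Below are the n-best hypotheses produced by a speech recognizer "
        "for one utterance:\n"
        f"{numbered}\n"
        "Report the single most likely true transcription."
    )

print(nbest_to_prompt([
    "i scream for ice cream",
    "eye scream for ice cream",
    "i scream for eyes cream",
]))
```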
PeacefulData/HyPoradise-v0
[ "task_categories:text-generation", "language_creators:expert-generated", "size_categories:10M<n<100M", "license:mit", "code", "region:us" ]
2023-08-16T06:05:33+00:00
{"language_creators": ["expert-generated"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "pretty_name": "Hypotheses Paradise", "tags": ["code"]}
2024-02-06T06:03:07+00:00
[]
[]
TAGS #task_categories-text-generation #language_creators-expert-generated #size_categories-10M<n<100M #license-mit #code #region-us
# HypothesesParadise

- We openly invite the community to submit, via git, their n-best hypothesis lists to this open resource for public use.
- If you find this work related or useful to your research, please consider citing the NeurIPS 2023 paper. Thank you.
[ "# HypothesesParadise\n\n- Open request to public git submission on open resource their n-best to public usage.\n- If you consider this work would be related or useful for your research, please consider to cite the work in NeurIPS 2023. Thank you." ]
[ "TAGS\n#task_categories-text-generation #language_creators-expert-generated #size_categories-10M<n<100M #license-mit #code #region-us \n", "# HypothesesParadise\n\n- Open request to public git submission on open resource their n-best to public usage.\n- If you consider this work would be related or useful for your research, please consider to cite the work in NeurIPS 2023. Thank you." ]
[ 47, 57 ]
[ "passage: TAGS\n#task_categories-text-generation #language_creators-expert-generated #size_categories-10M<n<100M #license-mit #code #region-us \n# HypothesesParadise\n\n- Open request to public git submission on open resource their n-best to public usage.\n- If you consider this work would be related or useful for your research, please consider to cite the work in NeurIPS 2023. Thank you." ]
d17cb54fda61bc203aae2c42ac76c115bad2da37
# Dataset Card for "logits-english-512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
amitness/logits-english-512
[ "region:us" ]
2023-08-16T06:09:55+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}, {"name": "teacher_logits", "sequence": {"sequence": "float64"}}, {"name": "teacher_indices", "sequence": {"sequence": "int64"}}, {"name": "teacher_mask_indices", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 156799366264, "num_examples": 8620310}], "download_size": 0, "dataset_size": 156799366264}}
2023-09-24T15:46:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "logits-english-512" More Information needed
[ "# Dataset Card for \"logits-english-512\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"logits-english-512\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"logits-english-512\"\n\nMore Information needed" ]
ba5faed068a1b0ab325e114562a9344108bd262a
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). --- ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
VedCodes/my_files
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "medical", "region:us" ]
2023-08-16T06:10:06+00:00
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "pretty_file", "tags": ["medical"]}
2023-08-16T06:11:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. --- ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.\n---", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.\n---", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 34, 8, 24, 33, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.\n---### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
a40f71c7d43760fa3a7a347d70b9f44e0aa05039
# Dataset of Yor Briar This is the dataset of Yor Briar, containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 448 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 448 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 448 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 448 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/yor_briar_spyxfamily
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T06:19:16+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:38+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Yor Briar ==================== This is the dataset of Yor Briar, containing 200 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
fcded6b73027891607568d3d204b2f29cb443dc8
This dataset was used in the project [Firefly(流萤): 中文对话式大语言模型](https://github.com/yangjianxin1/Firefly); training on it produced the model [firefly-1b4](https://huggingface.co/YeungNLP/firefly-1b4).

If you find this dataset helpful, please like it and star us on the GitHub project.

We collected 23 common Chinese datasets and, for each task, hand-wrote several instruction templates to ensure high quality and diversity, for a total of 1.15 million examples. The task distribution is shown below:

![task_distribution](task_distribution.png)

Each record has the following format, containing the task type, input, and target output:

```json
{
    "kind": "ClassicalChinese",
    "input": "将下面句子翻译成现代文:\n石中央又生一树,高百余尺,条干偃阴为五色,翠叶如盘,花径尺余,色深碧,蕊深红,异香成烟,著物霏霏。",
    "target": "大石的中央长着一棵树,一百多尺高,枝干是彩色的,树叶有盘子那样大,花的直径有一尺宽,花瓣深蓝色,花中飘出奇异的香气笼罩着周围,如烟似雾。"
}
```

The token-length distribution of the training set is shown below; the vast majority of examples are shorter than 600 tokens:

![len_distribution.png](len_distribution.png)
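As a small illustration, one record of this format could be mapped to a supervised training pair as follows; the `Instruction/Response` wrapper is our own example, not the template actually used to train firefly-1b4:

```python
# Illustrative: map one {kind, input, target} record to a (prompt, target)
# training pair. The wrapper text is an example, not Firefly's template.
def to_training_pair(record: dict) -> tuple[str, str]:
    prompt = f"Instruction ({record['kind']}):\n{record['input']}\nResponse:"
    return prompt, record["target"]
```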
botp/firefly-train-1.1M
[ "region:us" ]
2023-08-16T06:39:47+00:00
{"duplicated_from": "YeungNLP/firefly-train-1.1M"}
2023-08-16T06:39:47+00:00
[]
[]
TAGS #region-us
This dataset was used in the project Firefly(流萤): 中文对话式大语言模型; training on it produced the model firefly-1b4. If you find this dataset helpful, please like it and star us on the GitHub project. We collected 23 common Chinese datasets and, for each task, hand-wrote several instruction templates to ensure high quality and diversity, for a total of 1.15 million examples. The task distribution is shown below: !task_distribution Each record has the following format, containing the task type, input, and target output: The token-length distribution of the training set is shown below; the vast majority of examples are shorter than 600 tokens: !len_distribution.png
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
ebf9488aea6894ea4ca28fce5ff28525b69eae64
# Dataset of Fiona Frost This is the dataset of Fiona Frost, containing 69 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------| | raw | 69 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 135 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | 384x512 | 69 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x512 | 69 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. | | 512x704 | 69 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x640 | 69 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. | | 640x880 | 69 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 135 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 135 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-1200 | 135 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
CyberHarem/fiona_frost_spyxfamily
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-08-16T06:48:27+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-09-17T16:09:40+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Fiona Frost ====================== This is the dataset of Fiona Frost, containing 69 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
f07a42fb7d0be0c0010ebca7c0add3edb1addc75
# Dataset Card for "fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CVasNLPExperiments/fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800
[ "region:us" ]
2023-08-16T06:55:23+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "scores", "sequence": "float64"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices", "num_bytes": 2513915, "num_examples": 4800}], "download_size": 238232, "dataset_size": 2513915}}
2023-08-18T06:06:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800" More Information needed
[ "# Dataset Card for \"fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
[ 6, 39 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fairness_chef_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_4800\"\n\nMore Information needed" ]
df0cf6ac7730820b4f7a3e5538abec40fa88bf65
# AutoTrain Dataset for project: pubmed ## Dataset Description This dataset has been automatically processed by AutoTrain for project pubmed. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "tokens": [ "Pd", "has", "been", "regarded", "as", "one", "of", "the", "alternatives", "to", "Pt", "as", "a", "promising", "hydrogen", "evolution", "reaction", "(HER)", "catalyst.", "Strategies", "including", "Pd-metal", "alloys", "(Pd-M)", "and", "Pd", "hydrides", "(PdH<sub><i>x</i></sub>)", "have", "been", "proposed", "to", "boost", "HER", "performances.", "However,", "the", "stability", "issues,", "e.g.,", "the", "dissolution", "in", "Pd-M", "and", "the", "hydrogen", "releasing", "in", "PdH<sub><i>x</i></sub>,", "restrict", "the", "industrial", "application", "of", "Pd-based", "HER", "catalysts.", "We", "here", "design", "and", "synthesize", "a", "stable", "Pd-Cu", "hydride", "(", "PdCu<sub>0.2</sub>H<sub>0.43</sub>", ")", "catalyst,", "combining", "the", "advantages", "of", "both", "Pd-M", "and", "PdH<sub><i>x</i></sub>", "structures", "and", "improving", "the", "HER", "durability", "simultaneously.", "The", "hydrogen", "intercalation", "is", "realized", "under", "atmospheric", "pressure", "(1.0", "atm)", "following", "our", "synthetic", "approach", "that", "imparts", "high", "stability", "to", "the", "Pd-Cu", "hydride", "structure.", "The", "obtained", "PdCu<sub>0.2</sub>H<sub>0.43</sub>", "catalyst", "exhibits", "a", "small", "overpotential", "of", "28", "mV", "at", "10", "mA/cm<sup>2</sup>", ",", "a", "low", "Tafel", "slope", "of", "23", "mV/dec", ",", "and", "excellent", "HER", "durability", "due", "to", "its", "appropriate", "hydrogen", "adsorption", "free", "energy", "and", "alleviated", "metal", "dissolution", "rate.", "</p>", "<p>" ], "tags": [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 4, 2, 5, 5, 2, 5, 5, 2, 2, 2, 4, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ] }, { "tokens": [ "A", "critical", "challenge", "in", "energy", "research", "is", "the", "development", "of", "earth", "abundant", "and", "cost-effective", "materials", "that", "catalyze", "the", "electrochemical", "splitting", "of", "water", "into", "hydrogen", "and", "oxygen", "at", "high", "rates", "and", "low", "overpotentials.", "Key", "to", "addressing", "this", "issue", "lies", "not", "only", "in", "the", "synthesis", "of", "new", "materials,", "but", "also", "in", "the", "elucidation", "of", "their", "active", "sites,", "their", "structure", "under", "operating", "conditions", "and", "ultimately,", "extraction", "of", "the", "structure-function", "relationships", "used", "to", "spearhead", "the", "next", "generation", "of", "catalyst", "development.", "In", "this", "work,", "we", "present", "a", "complete", "cycle", "of", "synthesis,", "operando", "characterization,", "and", "redesign", "of", "an", "amorphous", "cobalt", "phosphide", "(", "CoP", "<sub><i>x</i></sub>", ")", "bifunctional", "catalyst.", "The", "research", "was", "driven", "by", "integrated", "electrochemical", "analysis,", "Raman", "spectroscopy", "and", "gravimetric", "measurements", "utilizing", "a", "novel", "quartz", 
"crystal", "microbalance", "spectroelectrochemical", "cell", "to", "uncover", "the", "catalytically", "active", "species", "of", "amorphous", "CoP", "<sub><i>x</i></sub>", "and", "subsequently", "modify", "the", "material", "to", "enhance", "the", "activity", "of", "the", "elucidated", "catalytic", "phases.", "Illustrating", "the", "power", "of", "our", "approach,", "the", "second", "generation", "cobalt-iron", "phosphide", "(", "CoFeP<sub>x</sub>", ")", "catalyst,", "developed", "through", "an", "iteration", "of", "the", "operando", "measurement", "directed", "optimization", "cycle,", "is", "superior", "in", "both", "hydrogen", "and", "oxygen", "evolution", "reactivity", "over", "the", "previous", "material", "and", "is", "capable", "of", "overall", "water", "electrolysis", "at", "a", "current", "density", "of", "10", "mA", "cm<sup>-2</sup>", "with", "1.5", "V", "applied", "bias", "in", "1", "M", "KOH", "electrolyte", "solution.", "</p>", "<p>" ], "tags": [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 2, 5, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ] } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)", "tags": "Sequence(feature=ClassLabel(names=['CATALYST', 'CO-CATALYST', 'O', 'Other', 'PROPERTY_NAME', 'PROPERTY_VALUE'], id=None), length=-1, id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 166 | | valid | 44 |
QNN/autotrain-data-pubmed
[ "task_categories:token-classification", "region:us" ]
2023-08-16T07:09:48+00:00
{"task_categories": ["token-classification"]}
2023-08-16T07:10:29+00:00
[]
[]
TAGS #task_categories-token-classification #region-us
AutoTrain Dataset for project: pubmed ===================================== Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project pubmed. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-token-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ 18, 27, 17, 23, 27 ]
[ "passage: TAGS\n#task_categories-token-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]