sha (string, len 40) | text (string, len 1–13.4M) | id (string, len 2–117) | tags (list, len 1–7.91k) | created_at (string, len 25) | metadata (string, len 2–875k) | last_modified (string, len 25) | arxiv (list, len 0–25) | languages (list, len 0–7.91k) | tags_str (string, len 17–159k) | text_str (string, len 1–447k) | text_lists (list, len 0–352) | processed_texts (list, len 1–353) | tokens_length (list, len 1–353) | input_texts (list, len 1–40)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
36fa823db29626bb083c1a96b60ce0234dd8b94d
|
# Dataset Card for "calc-qa-augment-sft-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
GoshaLetov/calc-qa-augment-sft
|
[
"region:us"
] |
2023-08-16T07:32:17+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2576713, "num_examples": 4166}], "download_size": 108634, "dataset_size": 2576713}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T07:39:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "calc-qa-augment-sft-3"
More Information needed
|
[
"# Dataset Card for \"calc-qa-augment-sft-3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"calc-qa-augment-sft-3\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"calc-qa-augment-sft-3\"\n\nMore Information needed"
] |
a8b7a7eaef34b8facb118e285cd59ba5e89fab78
|
# Dataset Card for "v1.1_id0.2_context_instruction_tuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/v1.1_id0.2_context_instruction_tuning
|
[
"region:us"
] |
2023-08-16T07:54:00+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "task_source", "dtype": "string"}, {"name": "task_name", "dtype": "string"}, {"name": "template_type", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "template_used", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1154915040.1878934, "num_examples": 437288}, {"name": "eval_context", "num_bytes": 38006832.85245361, "num_examples": 13944}, {"name": "eval_id_context", "num_bytes": 10843981, "num_examples": 5976}], "download_size": 237906027, "dataset_size": 1203765854.040347}}
|
2023-08-16T10:38:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "v1.1_id0.2_context_instruction_tuning"
More Information needed
|
[
"# Dataset Card for \"v1.1_id0.2_context_instruction_tuning\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"v1.1_id0.2_context_instruction_tuning\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"v1.1_id0.2_context_instruction_tuning\"\n\nMore Information needed"
] |
e3f1ac09e8e7dc4eb538bcfe2f5b639a4d1c86b4
|
👉 Dataset source: https://www.muftiwp.gov.my/
|
Ammar-Azman/crawl-mufti_wilayah
|
[
"license:mit",
"region:us"
] |
2023-08-16T07:57:24+00:00
|
{"license": "mit"}
|
2023-08-19T09:24:23+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
Dataset source: URL
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
53ef1460be8f24c7120d0aca595ad75e535a5dc0
|
# Dataset of alice_schuberg (Sword Art Online)
This is the dataset of alice_schuberg (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
CyberHarem/alice_schuberg_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T08:16:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:44+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of alice_schuberg (Sword Art Online)
This is the dataset of alice_schuberg (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of alice_schuberg (Sword Art Online)\n\nThis is the dataset of alice_schuberg (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of alice_schuberg (Sword Art Online)\n\nThis is the dataset of alice_schuberg (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
87
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of alice_schuberg (Sword Art Online)\n\nThis is the dataset of alice_schuberg (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
367e906837221a933d098fc2bd63f8cc02d4765b
|
# Dataset Card for "voxpopuli-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MikhailT/voxpopuli-en
|
[
"region:us"
] |
2023-08-16T08:16:37+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": {"sequence": "float32"}}, {"name": "speaker_embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 2388645494.4157987, "num_examples": 11871}, {"name": "test", "num_bytes": 265606271.8076703, "num_examples": 1320}], "download_size": 1938036247, "dataset_size": 2654251766.223469}}
|
2023-08-16T09:10:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "voxpopuli-en"
More Information needed
|
[
"# Dataset Card for \"voxpopuli-en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"voxpopuli-en\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"voxpopuli-en\"\n\nMore Information needed"
] |
f03a6dbc6a020e9a4d93604f54e1c006ab010d5f
|
# Dataset Card for AGM Dataset
## Dataset Summary
The AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.
## Supported Tasks
Image classification: plant phenotyping
## Languages
The dataset consists primarily of image data and does not involve language content. The annotations and labels are in English, but language is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the training set consists of the following:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=120x120 at 0x29CEAD71780>,
'crop_type': 'by'
}
```
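For convenience, here is a minimal loading sketch (not part of the original card), assuming the standard Hugging Face `datasets` API and the repo id `deep-plants/AGM` given below; note the repo metadata names the label column `label`, while the prose above calls it `crop_type`.
```python
# Hedged example: load the AGM dataset and inspect one sample.
# Assumes the `datasets` library and the repo id "deep-plants/AGM".
from datasets import load_dataset

ds = load_dataset("deep-plants/AGM", split="train")
sample = ds[0]
print(sample["image"].size)  # expected (120, 120) per the summary above
print(sample["label"])       # crop-type string, e.g. 'by'
```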
### Data Fields
The dataset's data instances have the following fields:
- `image`: A PIL.Image.Image object representing the image.
- `crop_type`: A string representation of the crop type in the image.
### Data Splits
- **Training Set**:
- Number of Examples: 972,858
## Dataset Creation
### Curation Rationale
The creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.
### Source Data
#### Initial Data Collection and Normalization
The images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.
### Annotations
#### Annotation Process
Agronomists and domain experts were involved in the annotation process. They annotated each image to identify the crops present and assign them to specific categories or species. This annotation process involved labeling each image with one of 18 distinct crop categories, which include individual plant species and mixtures of species.
### Who Are the Annotators?
The annotators are agronomists employed by Agricola Moderna.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It primarily consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM Dataset has potential social impact in modern agriculture and related domains. It can advance agriculture by aiding the development of innovative technologies for crop monitoring, disease detection, and yield prediction, fostering sustainable farming practices, contributing to food security and ensuring higher agricultural productivity and affordability. The dataset supports research for environmentally sustainable agriculture, optimizing resource use and reducing environmental impact.
### Discussion of Biases and Known Limitations
The dataset primarily involves images from a single vertical farm setting; therefore, while massive, it includes relatively little variation in crop types. The dataset's contents and annotations may reflect regional agricultural practices and preferences. Business preferences also play a substantial role in determining the types of crops grown in vertical farms. These preferences, often influenced by market demand and profitability, can significantly differ from conventional open-air field agriculture. Therefore, the dataset may inherently reflect these business-driven crop choices, potentially affecting its representativeness of broader agricultural scenarios.
## Additional Information
### Dataset Curators
The dataset is curated by DeepPlants and AgricolaModerna. You can contact us for further information at:
[email protected]
[email protected]
### Licensing Information
### Citation Information
If you use the AGM dataset in your work, please consider citing the following publication:
```bibtex
@InProceedings{Sama_2023_ICCV,
author = {Sama, Nico and David, Etienne and Rossetti, Simone and Antona, Alessandro and Franchetti, Benjamin and Pirri, Fiora},
title = {A new Large Dataset and a Transfer Learning Methodology for Plant Phenotyping in Vertical Farms},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {540-551}
}
```
|
deep-plants/AGM
|
[
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:cc",
"region:us"
] |
2023-08-16T08:37:26+00:00
|
{"license": "cc", "size_categories": ["100K<n<1M"], "task_categories": ["image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3208126820.734, "num_examples": 972858}], "download_size": 3245813213, "dataset_size": 3208126820.734}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-04T10:06:53+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #size_categories-100K<n<1M #license-cc #region-us
|
# Dataset Card for AGM Dataset
## Dataset Summary
The AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.
## Supported Tasks
Image classification: plant phenotyping
## Languages
The dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the training set consists of the following:
### Data Fields
The dataset's data instances have the following fields:
- 'image': A PIL.Image.Image object representing the image.
- 'crop_type': An string representation of the crop type in the image
### Data Splits
- Training Set:
- Number of Examples: 972,858
## Dataset Creation
### Curation Rationale
The creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.
### Source Data
#### Initial Data Collection and Normalization
The images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.
### Annotations
#### Annotation Process
Agronomists and domain experts were involved in the annotation process. They annotated each image to identify the crops present and assign them to specific categories or species. This annotation process involved labeling each image with one of 18 distinct crop categories, which include individual plant species and mixtures of species.
### Who Are the Annotators?
The annotators are agronomists employed by Agricola Moderna.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It primarily consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM Dataset has potential social impact in modern agriculture and related domains. It can advance agriculture by aiding the development of innovative technologies for crop monitoring, disease detection, and yield prediction, fostering sustainable farming practices, contributing to food security and ensuring higher agricultural productivity and affordability. The dataset supports research for environmentally sustainable agriculture, optimizing resource use and reducing environmental impact.
### Discussion of Biases and Known Limitations
The dataset primarily involves images from a single vertical farm setting therefore, while massive, includes relatively little variation in crop types. The dataset's contents and annotations may reflect regional agricultural practices and preferences. Business preferences also play a substantial role in determining the types of crops grown in vertical farms. These preferences, often influenced by market demand and profitability, can significantly differ from conventional open-air field agriculture. Therefore, the dataset may inherently reflect these business-driven crop choices, potentially affecting its representativeness of broader agricultural scenarios.
## Additional Information
### Dataset Curators
The dataset is curate by DeepPlants and AgricolaModerna. You can contact us for further informations at
nico@URL
URL@URL
### Licensing Information
If you use the AGM dataset in your work, please consider citing the following publication:
|
[
"# Dataset Card for AGM Dataset",
"## Dataset Summary\nThe AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.",
"## Supported Tasks\nImage classification: plant phenotyping",
"## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.",
"## Dataset Structure",
"### Data Instances\nA typical data instance from the training set consists of the following:",
"### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'crop_type': An string representation of the crop type in the image",
"### Data Splits\n- Training Set:\n - Number of Examples: 972,858",
"## Dataset Creation",
"### Curation Rationale\nThe creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.",
"### Source Data",
"#### Initial Data Collection and Normalization\nThe images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.",
"### Annotations",
"#### Annotation Process\nAgronomists and domain experts were involved in the annotation process. They annotated each image to identify the crops present and assign them to specific categories or species. This annotation process involved labeling each image with one of 18 distinct crop categories, which include individual plant species and mixtures of species.",
"### Who Are the Annotators?\nThe annotators are agronomists employed by Agricola Moderna.",
"## Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information about individuals. It primarily consists of images of plants.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe AGM Dataset has potential social impact in modern agriculture and related domains. It can advance agriculture by aiding the development of innovative technologies for crop monitoring, disease detection, and yield prediction, fostering sustainable farming practices, contributing to food security and ensuring higher agricultural productivity and affordability. The dataset supports research for environmentally sustainable agriculture, optimizing resource use and reducing environmental impact.",
"### Discussion of Biases and Known Limitations\nThe dataset primarily involves images from a single vertical farm setting therefore, while massive, includes relatively little variation in crop types. The dataset's contents and annotations may reflect regional agricultural practices and preferences. Business preferences also play a substantial role in determining the types of crops grown in vertical farms. These preferences, often influenced by market demand and profitability, can significantly differ from conventional open-air field agriculture. Therefore, the dataset may inherently reflect these business-driven crop choices, potentially affecting its representativeness of broader agricultural scenarios.",
"## Additional Information",
"### Dataset Curators\nThe dataset is curate by DeepPlants and AgricolaModerna. You can contact us for further informations at\nnico@URL\nURL@URL",
"### Licensing Information\n\n\n\nIf you use the AGM dataset in your work, please consider citing the following publication:"
] |
[
"TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #license-cc #region-us \n",
"# Dataset Card for AGM Dataset",
"## Dataset Summary\nThe AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.",
"## Supported Tasks\nImage classification: plant phenotyping",
"## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.",
"## Dataset Structure",
"### Data Instances\nA typical data instance from the training set consists of the following:",
"### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'crop_type': An string representation of the crop type in the image",
"### Data Splits\n- Training Set:\n - Number of Examples: 972,858",
"## Dataset Creation",
"### Curation Rationale\nThe creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.",
"### Source Data",
"#### Initial Data Collection and Normalization\nThe images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.",
"### Annotations",
"#### Annotation Process\nAgronomists and domain experts were involved in the annotation process. They annotated each image to identify the crops present and assign them to specific categories or species. This annotation process involved labeling each image with one of 18 distinct crop categories, which include individual plant species and mixtures of species.",
"### Who Are the Annotators?\nThe annotators are agronomists employed by Agricola Moderna.",
"## Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information about individuals. It primarily consists of images of plants.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe AGM Dataset has potential social impact in modern agriculture and related domains. It can advance agriculture by aiding the development of innovative technologies for crop monitoring, disease detection, and yield prediction, fostering sustainable farming practices, contributing to food security and ensuring higher agricultural productivity and affordability. The dataset supports research for environmentally sustainable agriculture, optimizing resource use and reducing environmental impact.",
"### Discussion of Biases and Known Limitations\nThe dataset primarily involves images from a single vertical farm setting therefore, while massive, includes relatively little variation in crop types. The dataset's contents and annotations may reflect regional agricultural practices and preferences. Business preferences also play a substantial role in determining the types of crops grown in vertical farms. These preferences, often influenced by market demand and profitability, can significantly differ from conventional open-air field agriculture. Therefore, the dataset may inherently reflect these business-driven crop choices, potentially affecting its representativeness of broader agricultural scenarios.",
"## Additional Information",
"### Dataset Curators\nThe dataset is curate by DeepPlants and AgricolaModerna. You can contact us for further informations at\nnico@URL\nURL@URL",
"### Licensing Information\n\n\n\nIf you use the AGM dataset in your work, please consider citing the following publication:"
] |
[
34,
9,
97,
14,
42,
6,
20,
57,
18,
5,
54,
4,
107,
5,
71,
25,
31,
8,
103,
149,
5,
37,
26
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #license-cc #region-us \n# Dataset Card for AGM Dataset## Dataset Summary\nThe AGM (AGricolaModerna) Dataset is a comprehensive collection of high-resolution RGB images capturing harvest-ready plants in a vertical farm setting. This dataset consists of 972,858 images, each with a resolution of 120x120 pixels, covering 18 different plant crops. In the context of this dataset, a crop refers to a plant species or a mix of plant species.## Supported Tasks\nImage classification: plant phenotyping## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.## Dataset Structure### Data Instances\nA typical data instance from the training set consists of the following:### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'crop_type': An string representation of the crop type in the image### Data Splits\n- Training Set:\n - Number of Examples: 972,858## Dataset Creation### Curation Rationale\nThe creation of the AGM Dataset was motivated by the need for a large and diverse dataset that captures various aspects of modern agriculture, including plant species diversity, stress detection, and crop health assessment.### Source Data#### Initial Data Collection and Normalization\nThe images were captured using a high-resolution camera positioned above a moving table in an agricultural setting. The camera captured images of the entire table, which was filled with trays of harvested crops. The image capture process spanned from May 2022 to December 2022. The original images had a resolution of $1073{\\times}650$ pixels. Each pixel in the images corresponds to a physical size of $0.5$ millimeters.### Annotations"
] |
09818cacad2e86c723231459f07bf5445351f2c6
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
VedCodes/llama2_project
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"medical",
"region:us"
] |
2023-08-16T08:44:22+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "boy_hi", "tags": ["medical"]}
|
2023-08-16T08:52:02+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nnt is empty. Use the Ed",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nnt is empty. Use the Ed",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
34,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
13,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #medical #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process\n\n\nnt is empty. Use the Ed#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fcdbe4532b73593e1dce8ee8d37a779dc887b451
|
# 08/16/2023
lt2_08162023_test_1j used to fine-tune llama-2-7b-chat-tagalog-v0.1. Experiment just to see how much a small dataset can influence the model.
"Taga-llama:
* Noting that traces of Tagalog may be included in pretrained LM's data, touching on how to make use of/invoke whatever the LM has learned from these traces: may also apply to other languages, when dealing with primarily English-trained LMs.
* Acknowledging that fine-tuning, even with bigger datasets cannot 'teach' pretrained models new info such as languages, but can allow us to observe how much a LM is capable of in the target language based on what it may have learned from its data."
|
922-Narra/lt_08162023_test_1j
|
[
"license:openrail",
"region:us"
] |
2023-08-16T08:47:06+00:00
|
{"license": "openrail"}
|
2023-08-18T05:53:13+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
# 08/16/2023
lt2_08162023_test_1j used to fine-tune llama-2-7b-chat-tagalog-v0.1. Experiment just to see how much a small dataset can influence the model.
"Taga-llama:
* Noting that traces of Tagalog may be included in pretrained LM's data, touching on how to make use of/invoke whatever the LM has learned from these traces: may also apply to other languages, when dealing with primarily English-trained LMs.
* Acknowledging that fine-tuning, even with bigger datasets cannot 'teach' pretrained models new info such as languages, but can allow us to observe how much a LM is capable of in the target language based on what it may have learned from its data."
|
[
"# 08/16/2023\nlt2_08162023_test_1j used to fine-tune llama-2-7b-chat-tagalog-v0.1. Experiment just to see how much a small dataset can influence the model.\n\n\"Taga-llama:\n* Noting that traces of Tagalog may be included in pretrained LM's data, touching on how to make use of/invoke whatever the LM has learned from these traces: may also apply to other languages, when dealing with primarily English-trained LMs.\n* Acknowledging that fine-tuning, even with bigger datasets cannot 'teach' pretrained models new info such as languages, but can allow us to observe how much a LM is capable of in the target language based on what it may have learned from its data.\""
] |
[
"TAGS\n#license-openrail #region-us \n",
"# 08/16/2023\nlt2_08162023_test_1j used to fine-tune llama-2-7b-chat-tagalog-v0.1. Experiment just to see how much a small dataset can influence the model.\n\n\"Taga-llama:\n* Noting that traces of Tagalog may be included in pretrained LM's data, touching on how to make use of/invoke whatever the LM has learned from these traces: may also apply to other languages, when dealing with primarily English-trained LMs.\n* Acknowledging that fine-tuning, even with bigger datasets cannot 'teach' pretrained models new info such as languages, but can allow us to observe how much a LM is capable of in the target language based on what it may have learned from its data.\""
] |
[
12,
185
] |
[
"passage: TAGS\n#license-openrail #region-us \n# 08/16/2023\nlt2_08162023_test_1j used to fine-tune llama-2-7b-chat-tagalog-v0.1. Experiment just to see how much a small dataset can influence the model.\n\n\"Taga-llama:\n* Noting that traces of Tagalog may be included in pretrained LM's data, touching on how to make use of/invoke whatever the LM has learned from these traces: may also apply to other languages, when dealing with primarily English-trained LMs.\n* Acknowledging that fine-tuning, even with bigger datasets cannot 'teach' pretrained models new info such as languages, but can allow us to observe how much a LM is capable of in the target language based on what it may have learned from its data.\""
] |
94e3b17fbb208692c365373774135f9ee210b1ea
|
# Dataset Card for AGM_HS Dataset
## Dataset Summary
The AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.
## Supported Tasks
Image classification: Healthy-stressed classification
Image segmentation: detection and localization of plant stress in top-view images.
## Languages
The dataset consists primarily of image data and does not involve language content. The annotations and labels are in English, but language is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=120x120 at 0x29CEAD71780>,
'labels': 'stressed',
'crop_type': 'by',
'mask': <PIL.PngImagePlugin.PngImageFile image mode=L size=120x120 at 0x29CEAD71780>
}
```
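As an illustration (not part of the original card), here is a hedged sketch of loading AGM<sub>HS</sub> and using the mask, assuming the standard `datasets` API and the feature names from the repo metadata (`image`, `mask`, `crop_type`, `label`):
```python
# Hedged example: load AGM_HS and compute the fraction of stressed pixels
# in one sample's segmentation mask.
import numpy as np
from datasets import load_dataset

ds = load_dataset("deep-plants/AGM_HS", split="train")
sample = ds[0]
mask = np.array(sample["mask"])        # L-mode 120x120 mask as a uint8 array
stressed_fraction = (mask > 0).mean()  # assumption: nonzero pixels mark stressed areas
print(sample["label"], sample["crop_type"], round(float(stressed_fraction), 3))
```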
### Data Fields
The dataset's data instances have the following fields:
- `image`: A PIL.Image.Image object representing the image.
- `labels`: A string representation indicating whether the image is "healthy" or "stressed."
- `crop_type`: A string representation of the crop type in the image.
- `mask`: A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.
### Data Splits
- **Training Set**:
- Number of Examples: 6,127
- Healthy Samples: 3,798
- Stressed Samples: 2,329
## Dataset Creation
### Curation Rationale
The AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.
### Source Data
#### Initial Data Collection and Normalization
The images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.
### Annotations
#### Annotation Process
The AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the "Segment Anything" technique (Kirillov et al., 2023). Each image is annotated with a classification label ("healthy" or "stressed") and a corresponding segmentation mask.
### Who Are the Annotators?
The annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.
### Discussion of Biases and Known Limitations
While AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.
## Additional Information
### Dataset Curators
The AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:
- [email protected]
- [email protected]
### Licensing Information
### Citation Information
If you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:
```bibtex
@InProceedings{Sama_2023_ICCV,
author = {Sama, Nico and David, Etienne and Rossetti, Simone and Antona, Alessandro and Franchetti, Benjamin and Pirri, Fiora},
title = {A new Large Dataset and a Transfer Learning Methodology for Plant Phenotyping in Vertical Farms},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {540-551}
}
```
|
deep-plants/AGM_HS
|
[
"license:cc",
"region:us"
] |
2023-08-16T09:04:19+00:00
|
{"license": "cc", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "crop_type", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22900031.321, "num_examples": 6127}], "download_size": 22010079, "dataset_size": 22900031.321}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-04T10:07:25+00:00
|
[] |
[] |
TAGS
#license-cc #region-us
|
# Dataset Card for AGM_HS Dataset
## Dataset Summary
The AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.
## Supported Tasks
Image classification: Healthy-stressed classification
Image segmentation: detection and localization of plant stress in top-view images.
## Languages
The dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.
## Dataset Structure
### Data Instances
A typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:
### Data Fields
The dataset's data instances have the following fields:
- 'image': A PIL.Image.Image object representing the image.
- 'labels': A string representation indicating whether the image is "healthy" or "stressed."
- 'crop_type': An string representation of the crop type in the image
- 'mask': A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.
### Data Splits
- Training Set:
- Number of Examples: 6,127
- Healthy Samples: 3,798
- Stressed Samples: 2,329
## Dataset Creation
### Curation Rationale
The AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.
### Source Data
#### Initial Data Collection and Normalization
The images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.
### Annotations
#### Annotation Process
The AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the "Segment Anything" technique \cite{kirillov2023segment}. Each image is annotated with a classification label ("healthy" or "stressed") and a corresponding segmentation mask.
### Who Are the Annotators?
The annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.
## Personal and Sensitive Information
The dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.
## Considerations for Using the Data
### Social Impact of Dataset
The AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.
### Discussion of Biases and Known Limitations
While AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.
## Additional Information
### Dataset Curators
The AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:
- nico@URL
- URL@URL
### Licensing Information
If you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:
|
[
"# Dataset Card for AGM_HS Dataset",
"## Dataset Summary\nThe AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.",
"## Supported Tasks\nImage classification: Healthy-stressed classification\nImage segmentation: detection and localization of plant stress in top-view images.",
"## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.",
"## Dataset Structure",
"### Data Instances\nA typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:",
"### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'labels': A string representation indicating whether the image is \"healthy\" or \"stressed.\"\n- 'crop_type': An string representation of the crop type in the image\n- 'mask': A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.",
"### Data Splits\n- Training Set:\n - Number of Examples: 6,127\n - Healthy Samples: 3,798\n - Stressed Samples: 2,329",
"## Dataset Creation",
"### Curation Rationale\nThe AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.",
"### Source Data",
"#### Initial Data Collection and Normalization\nThe images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.",
"### Annotations",
"#### Annotation Process\nThe AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the \"Segment Anything\" technique \\cite{kirillov2023segment}. Each image is annotated with a classification label (\"healthy\" or \"stressed\") and a corresponding segmentation mask.",
"### Who Are the Annotators?\nThe annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.",
"## Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.",
"### Discussion of Biases and Known Limitations\nWhile AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.",
"## Additional Information",
"### Dataset Curators\nThe AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:\n- nico@URL\n- URL@URL",
"### Licensing Information\n\n\nIf you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:"
] |
[
"TAGS\n#license-cc #region-us \n",
"# Dataset Card for AGM_HS Dataset",
"## Dataset Summary\nThe AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.",
"## Supported Tasks\nImage classification: Healthy-stressed classification\nImage segmentation: detection and localization of plant stress in top-view images.",
"## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.",
"## Dataset Structure",
"### Data Instances\nA typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:",
"### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'labels': A string representation indicating whether the image is \"healthy\" or \"stressed.\"\n- 'crop_type': An string representation of the crop type in the image\n- 'mask': A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.",
"### Data Splits\n- Training Set:\n - Number of Examples: 6,127\n - Healthy Samples: 3,798\n - Stressed Samples: 2,329",
"## Dataset Creation",
"### Curation Rationale\nThe AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.",
"### Source Data",
"#### Initial Data Collection and Normalization\nThe images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.",
"### Annotations",
"#### Annotation Process\nThe AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the \"Segment Anything\" technique \\cite{kirillov2023segment}. Each image is annotated with a classification label (\"healthy\" or \"stressed\") and a corresponding segmentation mask.",
"### Who Are the Annotators?\nThe annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.",
"## Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security.",
"### Discussion of Biases and Known Limitations\nWhile AGM<sub>HS</sub> is a valuable dataset, it inherits some limitations from the original AGM Dataset. It primarily involves images from a single vertical farm setting, potentially limiting the representativeness of broader agricultural scenarios. Additionally, the dataset's composition may reflect regional agricultural practices and business-driven crop preferences specific to vertical farming. Researchers should be aware of these potential biases when utilizing AGM<sub>HS</sub> for their work.",
"## Additional Information",
"### Dataset Curators\nThe AGM<sub>HS</sub> Dataset is curated by DeepPlants and AgricolaModerna. For further information, please contact us at:\n- nico@URL\n- URL@URL",
"### Licensing Information\n\n\nIf you use the AGM<sub>HS</sub> dataset in your work, please consider citing the following publication:"
] |
[
11,
11,
179,
34,
42,
6,
29,
118,
35,
5,
77,
4,
81,
5,
121,
36,
31,
8,
100,
133,
5,
49,
33
] |
[
"passage: TAGS\n#license-cc #region-us \n# Dataset Card for AGM_HS Dataset## Dataset Summary\nThe AGM<sub>HS</sub> (AGricolaModerna Healthy-Stress) Dataset is an extension of the AGM Dataset, specifically curated to address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset comprises 6,127 high-resolution RGB images, each with a resolution of 120x120 pixels, selected from the original AGM Dataset. The images in AGM<sub>HS</sub> are divided into two categories: healthy samples (3,798 images) and stressed samples (2,329 images) representing 14 of the 18 classes present in AGM. Alongside the healthy/stressed classification labels, the dataset also provides segmentation masks for the stressed areas.## Supported Tasks\nImage classification: Healthy-stressed classification\nImage segmentation: detection and localization of plant stress in top-view images.## Languages\nThe dataset primarily consists of image data and does not involve language content. Therefore, the primary language is English, but it is not relevant to the dataset's core content.## Dataset Structure### Data Instances\nA typical data instance from the AGM<sub>HS</sub> Dataset consists of the following:### Data Fields\nThe dataset's data instances have the following fields:\n\n- 'image': A PIL.Image.Image object representing the image.\n- 'labels': A string representation indicating whether the image is \"healthy\" or \"stressed.\"\n- 'crop_type': An string representation of the crop type in the image\n- 'mask': A PIL.Image.Image object representing the segmentation mask of stressed areas in the image, stored as a PNG image.### Data Splits\n- Training Set:\n - Number of Examples: 6,127\n - Healthy Samples: 3,798\n - Stressed Samples: 2,329## Dataset Creation",
"passage: ### Curation Rationale\nThe AGM<sub>HS</sub> Dataset was created as an extension of the AGM Dataset to specifically address the challenge of detecting and localizing plant stress in top-view images of harvested crops. This dataset is essential for the development and evaluation of advanced segmentation models tailored for this task.### Source Data#### Initial Data Collection and Normalization\nThe images in AGM<sub>HS</sub> were extracted from the original AGM Dataset. During the extraction process, labelers selected images showing clear signs of either good health or high stress. These sub-images were resized to 120x120 pixels to create AGM<sub>HS</sub>.### Annotations#### Annotation Process\nThe AGM<sub>HS</sub> Dataset underwent a secondary stage of annotation. Labelers manually collected images by clicking on points corresponding to stressed areas on the leaves. These clicked points served as prompts for the semi-automatic generation of segmentation masks using the \"Segment Anything\" technique \\cite{kirillov2023segment}. Each image is annotated with a classification label (\"healthy\" or \"stressed\") and a corresponding segmentation mask.### Who Are the Annotators?\nThe annotators for AGM<sub>HS</sub> are domain experts with knowledge of plant health and stress detection.## Personal and Sensitive Information\nThe dataset does not contain personal or sensitive information about individuals. It exclusively consists of images of plants.## Considerations for Using the Data### Social Impact of Dataset\nThe AGM<sub>HS</sub> Dataset plays a crucial role in advancing research and technologies for plant stress detection and localization in the context of modern agriculture. By providing a diverse set of top-view crop images with associated segmentation masks, this dataset can facilitate the development of innovative solutions for sustainable agriculture, contributing to increased crop health, yield prediction, and overall food security."
] |
53b06745cd725298e670329b23bc59c755a7d3af
|
This dataset is used to train a multilingual ingredient list detection model. The goal is to automate the extraction of ingredient lists from food packaging images. See [this issue](https://github.com/openfoodfacts/openfoodfacts-ai/issues/242) for a broader context about ingredient list extraction.
## Dataset generation
Raw unannotated texts are OCR results obtained with Google Cloud Vision. It only contains images marked as ingredient images on Open Food Facts.
The dataset was generated using ChatGPT-3.5: we asked ChatGPT to extract ingredients using the following prompt:
Prompt:
```
Extract ingredient lists from the following texts. The ingredient list should start with the first ingredient and end with the last ingredient. It should not include allergy, label or origin information.
The output format must be a single JSON list containing one element per ingredient list. If there are ingredients in several languages, the output JSON list should contain as many elements as detected languages. Each element should have two fields:
- a "text" field containing the detected ingredient list. The text should be a substring of the original text, you must not alter the original text.
- a "lang" field containing the detected language of the ingredient list.
Don't output anything else than the expected JSON list.
```
System prompt:
```
You are ChatGPT, a large language model trained by OpenAI. Only generate responses in JSON format. The output JSON must be minified.
```
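For illustration only, here is a minimal sketch of how such an extraction call could look with the legacy `openai` Python client (v0.x); the model variant, client version, and helper name are assumptions, not part of the original pipeline:

```
import openai  # assumption: legacy 0.x OpenAI client

SYSTEM_PROMPT = (
    "You are ChatGPT, a large language model trained by OpenAI. "
    "Only generate responses in JSON format. The output JSON must be minified."
)

def extract_ingredient_lists(ocr_text: str, user_prompt: str) -> str:
    # user_prompt is the instruction prompt quoted above; the OCR text
    # is appended after it as the texts to process.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption: exact model variant not stated
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{user_prompt}\n\n{ocr_text}"},
        ],
        temperature=0,  # keep the extraction as deterministic as possible
    )
    # The model is expected to answer with a minified JSON list of
    # {"text": ..., "lang": ...} objects, one per detected ingredient list.
    return response["choices"][0]["message"]["content"]
```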
A first cleaning step was performed automatically: we removed responses with:
- invalid JSON
- JSON with missing fields
- JSON where the detected ingredient list is not a substring of the original text
A first NER model was trained on this dataset. The model's prediction errors on this dataset were inspected, which allowed us to spot the different kinds of annotation errors made by ChatGPT. Then, using a semi-automatic approach, we manually corrected samples that were likely to contain the errors spotted during the inspection phase. For example, we noticed that the prefix "Ingredients:" was sometimes included in the ingredient text span. We looked for every sample where "Ingredients" (and its translations in other languages) was part of the ingredient text, and corrected these samples manually. This approach allowed us to focus on problematic samples, instead of having to check the full train set.
These detection rules were mostly implemented using regex. The cleaning script with all rules [can be found here](https://github.com/openfoodfacts/openfoodfacts-ai/blob/149447bdbcd19cb7c15127405d9112bc9bfe3685/ingredient_extraction/clean_dataset.py#L23).
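As an illustration of what such a rule can look like, here is a hypothetical prefix check in the same spirit (not taken from the linked script; the prefix list is deliberately incomplete):

```
import re

# A few prefix variants; the real cleaning script covers many more languages.
INGREDIENT_PREFIX = re.compile(
    r"^\s*(ingredients?|ingr[ée]dients?|zutaten)\s*:?\s*",
    re.IGNORECASE,
)

def has_prefix_error(span_text: str) -> bool:
    # Flags annotated spans that still start with an "Ingredients:"-style
    # prefix, which the guidelines say must be excluded from the span.
    return INGREDIENT_PREFIX.match(span_text) is not None
```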
Once the detected errors were fixed using this approach, a new dataset alpha version was released, and we trained the model on this new dataset.
The dataset was split into train (90%) and test (10%) sets. Train and test splits were kept consistent at each alpha release. Only the test dataset was fully reviewed and corrected manually.
We tokenized the text using huggingface pre-tokenizer with the `[WhitespaceSplit(), Punctuation()]` sequence. The dataset generation script [can be found here](https://github.com/openfoodfacts/openfoodfacts-ai/blob/149447bdbcd19cb7c15127405d9112bc9bfe3685/ingredient_extraction/generate_dataset.py).
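This pre-tokenization can be reproduced with the `tokenizers` library; a minimal sketch:

```
from tokenizers.pre_tokenizers import Punctuation, Sequence, WhitespaceSplit

pre_tokenizer = Sequence([WhitespaceSplit(), Punctuation()])

# Returns (token, (start, end)) pairs with character offsets, which is
# what the per-token NER tags are aligned against.
print(pre_tokenizer.pre_tokenize_str("Water, sugar (5%), salt."))
```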
This dataset is exactly the same as `ingredient-detection-alpha-v6` used during model trainings.
## Annotation guidelines
Annotation guidelines were updated continuously during dataset refinement and model training, but here are the final guidelines:
1. ingredient lists in all languages must be annotated.
2. ingredient lists should start with the first ingredient, without an `ingredient` prefix ("Ingredients:", "Zutaten", "Ingrédients: ") or a `language` prefix ("EN:", "FR - ", ...)
3. ingredient lists containing a single ingredient without any `ingredient` or `language` prefix should not be annotated. Otherwise, it's very difficult to know whether the mention is the ingredient list or just a random mention of an ingredient on the packaging.
4. We take a very restrictive approach to where the ingredient list ends: we don't include any extra information (allergen, origin, trace, or organic mentions) at the end of the ingredient list. The only exception is when this information is in brackets after the ingredient. This rule is in place to make it easier for the detector to know what is an ingredient list and what is not. Additional information can be added afterwards as a post-processing step.
## Dataset schema
The dataset is made of 2 JSONL files:
- `ingredient_detection_dataset-v1_train.jsonl.gz`: train split, 5065 samples
- `ingredient_detection_dataset-v1_test.jsonl.gz`: test split, 556 samples
Each sample has the following fields (a short loading example follows the list):
- `text`: the original text obtained from the OCR result
- `marked_text`: the text with ingredient spans delimited by `<b>` and `</b>`
- `tokens`: tokens obtained with pre-tokenization
- `ner_tags`: tag ID associated with each token: 0 for `O`, 1 for `B-ING` and 2 for `I-ING` (BIO schema)
- `offsets`: a list containing character start and end offsets of ingredients spans
- `meta`: a dict containing additional meta-data about the sample:
- `barcode`: the product barcode of the image that was used
- `image_id`: unique digit identifier of the image for the product
- `url`: image URL from which the text was extracted
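For illustration, a short sketch of reading the train split with the standard library (assuming `offsets` comes as a list of `(start, end)` pairs, as suggested above):

```
import gzip
import json

ID2LABEL = {0: "O", 1: "B-ING", 2: "I-ING"}

with gzip.open(
    "ingredient_detection_dataset-v1_train.jsonl.gz", "rt", encoding="utf-8"
) as f:
    sample = json.loads(next(f))  # inspect the first sample only

# Pair each token with its BIO tag.
tags = [ID2LABEL[t] for t in sample["ner_tags"]]
print(list(zip(sample["tokens"], tags))[:10])

# Print each annotated ingredient span using its character offsets.
for start, end in sample["offsets"]:
    print(sample["text"][start:end])
```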
|
openfoodfacts/ingredient-detection
|
[
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:en",
"language:fr",
"language:de",
"language:it",
"language:nl",
"language:ru",
"language:he",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-08-16T09:05:56+00:00
|
{"language": ["en", "fr", "de", "it", "nl", "ru", "he"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "pretty_name": "Ingredient List Detection"}
|
2023-08-16T09:08:17+00:00
|
[] |
[
"en",
"fr",
"de",
"it",
"nl",
"ru",
"he"
] |
TAGS
#task_categories-token-classification #size_categories-1K<n<10K #language-English #language-French #language-German #language-Italian #language-Dutch #language-Russian #language-Hebrew #license-cc-by-sa-4.0 #region-us
|
This dataset is used to train a multilingual ingredient list detection model. The goal is to automate the extraction of ingredient lists from food packaging images. See this issue for a broader context about ingredient list extraction.
## Dataset generation
Raw unannotated texts are OCR results obtained with Google Cloud Vision. It only contains images marked as ingredient images on Open Food Facts.
The dataset was generated using ChatGPT-3.5: we asked ChatGPT to extract ingredients using the following prompt:
Prompt:
System prompt:
A first cleaning step was performed automatically: we removed responses with:
- invalid JSON
- JSON with missing fields
- JSON where the detected ingredient list is not a substring of the original text
A first NER model was trained on this dataset. The model's prediction errors on this dataset were inspected, which allowed us to spot the different kinds of annotation errors made by ChatGPT. Then, using a semi-automatic approach, we manually corrected samples that were likely to contain the errors spotted during the inspection phase. For example, we noticed that the prefix "Ingredients:" was sometimes included in the ingredient text span. We looked for every sample where "Ingredients" (and its translations in other languages) was part of the ingredient text, and corrected these samples manually. This approach allowed us to focus on problematic samples, instead of having to check the full train set.
These detection rules were mostly implemented using regex. The cleaning script with all rules can be found here.
Once the detected errors were fixed using this approach, a new dataset alpha version was released, and we trained the model on this new dataset.
The dataset was split into train (90%) and test (10%) sets. Train and test splits were kept consistent at each alpha release. Only the test dataset was fully reviewed and corrected manually.
We tokenized the text using huggingface pre-tokenizer with the '[WhitespaceSplit(), Punctuation()]' sequence. The dataset generation script can be found here.
This dataset is exactly the same as 'ingredient-detection-alpha-v6' used during model trainings.
## Annotation guidelines
Annotation guidelines were updated continuously during dataset refinement and model training, but here are the final guidelines:
1. ingredient lists in all languages must be annotated.
2. ingredient lists should start with the first ingredient, without an 'ingredient' prefix ("Ingredients:", "Zutaten", "Ingrédients: ") or a 'language' prefix ("EN:", "FR - ", ...)
3. ingredient lists containing a single ingredient without any 'ingredient' or 'language' prefix should not be annotated. Otherwise, it's very difficult to know whether the mention is the ingredient list or just a random mention of an ingredient on the packaging.
4. We take a very restrictive approach to where the ingredient list ends: we don't include any extra information (allergen, origin, trace, or organic mentions) at the end of the ingredient list. The only exception is when this information is in brackets after the ingredient. This rule is in place to make it easier for the detector to know what is an ingredient list and what is not. Additional information can be added afterwards as a post-processing step.
## Dataset schema
The dataset is made of 2 JSONL files:
- 'ingredient_detection_dataset-v1_train.URL': train split, 5065 samples
- 'ingredient_detection_dataset-v1_test.URL': test split, 556 samples
Each sample has the following fields:
- 'text': the original text obtained from the OCR result
- 'marked_text': the text with ingredient spans delimited by '<b>' and '</b>'
- 'tokens': tokens obtained with pre-tokenization
- 'ner_tags': tag ID associated with each token: 0 for 'O', 1 for 'B-ING' and 2 for 'I-ING' (BIO schema)
- 'offsets': a list containing character start and end offsets of ingredients spans
- 'meta': a dict containing additional meta-data about the sample:
- 'barcode': the product barcode of the image that was used
- 'image_id': unique digit identifier of the image for the product
- 'url': image URL from which the text was extracted
|
[
"## Dataset generation\n\nRaw unannotated texts are OCR results obtained with Google Cloud Vision. It only contains images marked as ingredient image on Open Food Facts.\nThe dataset was generated using ChatGPT-3.5: we asked ChatGPT to extract ingredient using the following prompt:\n\nPrompt:\n\n\nSystem prompt:\n\n\nA first cleaning step was performed automatically, we removed responses with:\n- invalid JSON\n- JSON with missing fields\n- JSON where the detected ingredient list is not a substring of the original text\n\nA first NER model was trained on this dataset. The model prediction errors on this dataset were inspected, which allowed us to spot the different kind of annotation errors made by ChatGPT. Then, using a semi-automatic approach, we manually corrected samples that were likely to have the error spotted during the inspection phase. For example, we noticed that the prefix \"Ingredients:\" was sometimes included in the ingredient text span. We looked for every sample where \"Ingredients\" (and translations in other languages) was part of the ingredient text, and corrected these samples manually. This approach allowed us to focus on problematic samples, instead of having to check the full train set.\n\nThese detection rules were mostly implemented using regex. The cleaning script with all rules can be found here. \n\nOnce the detected errors were fixed using this approach, a new dataset alpha version was released, and we trained the model on this new dataset.\nDataset was split between train (90%) and test (10%) sets. Train and test splits were kept consistent at each alpha release. Only the test dataset was fully reviewed and corrected manually.\n\nWe tokenized the text using huggingface pre-tokenizer with the '[WhitespaceSplit(), Punctuation()]' sequence. The dataset generation script can be found here.\n\nThis dataset is exactly the same as 'ingredient-detection-alpha-v6' used during model trainings.",
"## Annotation guidelines\n\nAnnotations guidelines were updated continuously during dataset refinement and model trainings, but here are the final guidelines:\n\n1. ingredient lists in all languages must be annotated.\n2. ingredients list should start with the first ingredient, without 'ingredient' prefix (\"Ingredients:\", \"Zutaten\", \"Ingrédients: \") or 'language' prefix (\"EN:\", \"FR - \",...)\n3. ingredient list containing single ingredients without any 'ingredient' or 'language' prefix should not be annotated. Otherwise, it's very difficult to know whether the mention is the ingredient list or just a random mention of an ingredient on the packaging.\n4. We have a very restrictive approach on where the ingredient list ends: we don't include any extra information (allergen, origin, trace, organic mentions) at the end of the ingredient list. The only exception is when this information is in bracket after the ingredient. This rule is in place to make it easier for the detector to know what is an ingredient list and what is not. Additional information can be added afterward as a post-processing step.",
"## Dataset schema\n\nThe dataset is made of 2 JSONL files:\n\n- 'ingredient_detection_dataset-v1_train.URL': train split, 5065 samples\n- 'ingredient_detection_dataset-v1_test.URL': test split, 556 samples\n\nEach sample has the following fields:\n\n- 'text': the original text obtained from OCR result\n- 'marked_text': the text with ingredient spans delimited by '<b>' and '</b>'\n- 'tokens': tokens obtained with pre-tokenization\n- 'ner_tags': tag ID associated with each token: 0 for 'O', 1 for 'B-ING' and 2 for 'I-ING' (BIO schema)\n- 'offsets': a list containing character start and end offsets of ingredients spans\n- 'meta': a dict containing additional meta-data about the sample:\n - 'barcode': the product barcode of the image that was used\n - 'image_id': unique digit identifier of the image for the product\n - 'url': image URL from which the text was extracted"
] |
[
"TAGS\n#task_categories-token-classification #size_categories-1K<n<10K #language-English #language-French #language-German #language-Italian #language-Dutch #language-Russian #language-Hebrew #license-cc-by-sa-4.0 #region-us \n",
"## Dataset generation\n\nRaw unannotated texts are OCR results obtained with Google Cloud Vision. It only contains images marked as ingredient image on Open Food Facts.\nThe dataset was generated using ChatGPT-3.5: we asked ChatGPT to extract ingredient using the following prompt:\n\nPrompt:\n\n\nSystem prompt:\n\n\nA first cleaning step was performed automatically, we removed responses with:\n- invalid JSON\n- JSON with missing fields\n- JSON where the detected ingredient list is not a substring of the original text\n\nA first NER model was trained on this dataset. The model prediction errors on this dataset were inspected, which allowed us to spot the different kind of annotation errors made by ChatGPT. Then, using a semi-automatic approach, we manually corrected samples that were likely to have the error spotted during the inspection phase. For example, we noticed that the prefix \"Ingredients:\" was sometimes included in the ingredient text span. We looked for every sample where \"Ingredients\" (and translations in other languages) was part of the ingredient text, and corrected these samples manually. This approach allowed us to focus on problematic samples, instead of having to check the full train set.\n\nThese detection rules were mostly implemented using regex. The cleaning script with all rules can be found here. \n\nOnce the detected errors were fixed using this approach, a new dataset alpha version was released, and we trained the model on this new dataset.\nDataset was split between train (90%) and test (10%) sets. Train and test splits were kept consistent at each alpha release. Only the test dataset was fully reviewed and corrected manually.\n\nWe tokenized the text using huggingface pre-tokenizer with the '[WhitespaceSplit(), Punctuation()]' sequence. The dataset generation script can be found here.\n\nThis dataset is exactly the same as 'ingredient-detection-alpha-v6' used during model trainings.",
"## Annotation guidelines\n\nAnnotations guidelines were updated continuously during dataset refinement and model trainings, but here are the final guidelines:\n\n1. ingredient lists in all languages must be annotated.\n2. ingredients list should start with the first ingredient, without 'ingredient' prefix (\"Ingredients:\", \"Zutaten\", \"Ingrédients: \") or 'language' prefix (\"EN:\", \"FR - \",...)\n3. ingredient list containing single ingredients without any 'ingredient' or 'language' prefix should not be annotated. Otherwise, it's very difficult to know whether the mention is the ingredient list or just a random mention of an ingredient on the packaging.\n4. We have a very restrictive approach on where the ingredient list ends: we don't include any extra information (allergen, origin, trace, organic mentions) at the end of the ingredient list. The only exception is when this information is in bracket after the ingredient. This rule is in place to make it easier for the detector to know what is an ingredient list and what is not. Additional information can be added afterward as a post-processing step.",
"## Dataset schema\n\nThe dataset is made of 2 JSONL files:\n\n- 'ingredient_detection_dataset-v1_train.URL': train split, 5065 samples\n- 'ingredient_detection_dataset-v1_test.URL': test split, 556 samples\n\nEach sample has the following fields:\n\n- 'text': the original text obtained from OCR result\n- 'marked_text': the text with ingredient spans delimited by '<b>' and '</b>'\n- 'tokens': tokens obtained with pre-tokenization\n- 'ner_tags': tag ID associated with each token: 0 for 'O', 1 for 'B-ING' and 2 for 'I-ING' (BIO schema)\n- 'offsets': a list containing character start and end offsets of ingredients spans\n- 'meta': a dict containing additional meta-data about the sample:\n - 'barcode': the product barcode of the image that was used\n - 'image_id': unique digit identifier of the image for the product\n - 'url': image URL from which the text was extracted"
] |
[
76,
446,
250,
265
] |
[
"passage: TAGS\n#task_categories-token-classification #size_categories-1K<n<10K #language-English #language-French #language-German #language-Italian #language-Dutch #language-Russian #language-Hebrew #license-cc-by-sa-4.0 #region-us \n",
"passage: ## Dataset generation\n\nRaw unannotated texts are OCR results obtained with Google Cloud Vision. It only contains images marked as ingredient image on Open Food Facts.\nThe dataset was generated using ChatGPT-3.5: we asked ChatGPT to extract ingredient using the following prompt:\n\nPrompt:\n\n\nSystem prompt:\n\n\nA first cleaning step was performed automatically, we removed responses with:\n- invalid JSON\n- JSON with missing fields\n- JSON where the detected ingredient list is not a substring of the original text\n\nA first NER model was trained on this dataset. The model prediction errors on this dataset were inspected, which allowed us to spot the different kind of annotation errors made by ChatGPT. Then, using a semi-automatic approach, we manually corrected samples that were likely to have the error spotted during the inspection phase. For example, we noticed that the prefix \"Ingredients:\" was sometimes included in the ingredient text span. We looked for every sample where \"Ingredients\" (and translations in other languages) was part of the ingredient text, and corrected these samples manually. This approach allowed us to focus on problematic samples, instead of having to check the full train set.\n\nThese detection rules were mostly implemented using regex. The cleaning script with all rules can be found here. \n\nOnce the detected errors were fixed using this approach, a new dataset alpha version was released, and we trained the model on this new dataset.\nDataset was split between train (90%) and test (10%) sets. Train and test splits were kept consistent at each alpha release. Only the test dataset was fully reviewed and corrected manually.\n\nWe tokenized the text using huggingface pre-tokenizer with the '[WhitespaceSplit(), Punctuation()]' sequence. The dataset generation script can be found here.\n\nThis dataset is exactly the same as 'ingredient-detection-alpha-v6' used during model trainings.## Annotation guidelines\n\nAnnotations guidelines were updated continuously during dataset refinement and model trainings, but here are the final guidelines:\n\n1. ingredient lists in all languages must be annotated.\n2. ingredients list should start with the first ingredient, without 'ingredient' prefix (\"Ingredients:\", \"Zutaten\", \"Ingrédients: \") or 'language' prefix (\"EN:\", \"FR - \",...)\n3. ingredient list containing single ingredients without any 'ingredient' or 'language' prefix should not be annotated. Otherwise, it's very difficult to know whether the mention is the ingredient list or just a random mention of an ingredient on the packaging.\n4. We have a very restrictive approach on where the ingredient list ends: we don't include any extra information (allergen, origin, trace, organic mentions) at the end of the ingredient list. The only exception is when this information is in bracket after the ingredient. This rule is in place to make it easier for the detector to know what is an ingredient list and what is not. Additional information can be added afterward as a post-processing step."
] |
3d098c2d64c7250e63d2adabe081e669d3b354d1
|
# Dataset Card for "imagenetsubset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ioxil/imagenetsubset
|
[
"region:us"
] |
2023-08-16T09:05:57+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6003374.0, "num_examples": 150}], "download_size": 6003372, "dataset_size": 6003374.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T09:13:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "imagenetsubset"
More Information needed
|
[
"# Dataset Card for \"imagenetsubset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"imagenetsubset\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"imagenetsubset\"\n\nMore Information needed"
] |
23601b4755a4812836de2483a4862fc543c013ce
|
An MPT-compatible version of [wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
|
Heitechsoft/Wizard-Vicuna-MPT
|
[
"size_categories:100K<n<1M",
"license:apache-2.0",
"chat",
"conversational",
"conversation",
"region:us"
] |
2023-08-16T09:09:01+00:00
|
{"license": "apache-2.0", "size_categories": ["100K<n<1M"], "pretty_name": "Wizard Vicuna MPT", "tags": ["chat", "conversational", "conversation"]}
|
2023-08-16T09:14:10+00:00
|
[] |
[] |
TAGS
#size_categories-100K<n<1M #license-apache-2.0 #chat #conversational #conversation #region-us
|
An MPT-compatible version of wizard_vicuna_70k_unfiltered
|
[] |
[
"TAGS\n#size_categories-100K<n<1M #license-apache-2.0 #chat #conversational #conversation #region-us \n"
] |
[
36
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #license-apache-2.0 #chat #conversational #conversation #region-us \n"
] |
b5099e039295eb29134df08ddab231831ba04d19
|
# Dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/wiz_konosuba
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T09:15:38+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:46+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of wiz (Kono Subarashii Sekai ni Shukufuku wo!), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
5ca01c5c44cab462aeec2f5a2fd4cab73ea13352
|
# Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ndr01/my-image-captioning-dataset
|
[
"region:us"
] |
2023-08-16T09:21:04+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "Unnamed: 0", "dtype": "int64"}, {"name": "additional_feature", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 593273391.0, "num_examples": 179}], "download_size": 588198819, "dataset_size": 593273391.0}}
|
2023-08-16T09:24:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my-image-captioning-dataset"
More Information needed
|
[
"# Dataset Card for \"my-image-captioning-dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my-image-captioning-dataset\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my-image-captioning-dataset\"\n\nMore Information needed"
] |
4ba450e2c3e6c2b44c4eea7d22065cc5edd27336
|
# Dataset of kirigaya_suguha (Sword Art Online)
This is the dataset of kirigaya_suguha (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/kirigaya_suguha_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T09:24:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of kirigaya_suguha (Sword Art Online)
This is the dataset of kirigaya_suguha (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of kirigaya_suguha (Sword Art Online)\n\nThis is the dataset of kirigaya_suguha (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of kirigaya_suguha (Sword Art Online)\n\nThis is the dataset of kirigaya_suguha (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
87
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of kirigaya_suguha (Sword Art Online)\n\nThis is the dataset of kirigaya_suguha (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
a38440a77103b96766870a3bab69dac4cd3b36d7
|
<head><link rel="stylesheet" href="https://huggingface.co/front/build/kube-91c9610/style.css"></head>
<div class="container mt-4"><div class="prose"><p>Edit this <code>README.md</code> markdown file to author your organization card 🔥</p>
</div></div>
|
e-mohammadii/jhkgl
|
[
"region:us"
] |
2023-08-16T09:26:03+00:00
|
{}
|
2023-08-20T13:10:40+00:00
|
[] |
[] |
TAGS
#region-us
|
<head><link rel="stylesheet" href="URL
<div class="container mt-4"><div class="prose"><p>Edit this <code>URL</code> markdown file to author your organization card </p>
</div></div>
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
4a221ff65c8748c971187fc4e4ffbf68cdd11067
|
# Dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/eris_konosuba
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T09:37:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:50+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!)\n\nThis is the dataset of eris (Kono Subarashii Sekai ni Shukufuku wo!), containing 52 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
e99a09c331c480ade2e26d72a18c47616c1ed055
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Action"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DynamicSuperb/IntentClassification_FluentSpeechCommands-Action
|
[
"region:us"
] |
2023-08-16T09:46:12+00:00
|
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "speakerId", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 743300704.0, "num_examples": 10000}], "download_size": 636643694, "dataset_size": 743300704.0}}
|
2023-08-16T09:48:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Action"
More Information needed
|
[
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Action\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Action\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Action\"\n\nMore Information needed"
] |
1ca56522f65d193be4c1a0b0f537b61255000f01
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Object"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DynamicSuperb/IntentClassification_FluentSpeechCommands-Object
|
[
"region:us"
] |
2023-08-16T09:48:47+00:00
|
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "speakerId", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 740602751.0, "num_examples": 10000}], "download_size": 643682916, "dataset_size": 740602751.0}}
|
2023-08-16T09:51:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Object"
More Information needed
|
[
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Object\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Object\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Object\"\n\nMore Information needed"
] |
2cea4fb57d7c22b0b8d6bb345c4b5f6564d04904
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Location"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DynamicSuperb/IntentClassification_FluentSpeechCommands-Location
|
[
"region:us"
] |
2023-08-16T09:51:30+00:00
|
{"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "speakerId", "dtype": "string"}, {"name": "transcription", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "label", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 752958575.0, "num_examples": 10000}], "download_size": 639176861, "dataset_size": 752958575.0}}
|
2023-08-16T09:53:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Intent_Classification_FluentSpeechCommands_Location"
More Information needed
|
[
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Location\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Location\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Intent_Classification_FluentSpeechCommands_Location\"\n\nMore Information needed"
] |
595c2e263f5f87b2f2b474bdd29c94529c563256
|
# Dataset Card for "Lyric400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jamesgetsit/Lyric400
|
[
"region:us"
] |
2023-08-16T09:58:32+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1611536, "num_examples": 393}], "download_size": 653671, "dataset_size": 1611536}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T09:58:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Lyric400"
More Information needed
|
[
"# Dataset Card for \"Lyric400\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Lyric400\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Lyric400\"\n\nMore Information needed"
] |
454b8345d8b85a2ec30e21dda4c668dfe0e90d02
|
# Dataset Card for "datacomp_small_filtered_gcp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nielsr/datacomp_small_filtered_gcp
|
[
"region:us"
] |
2023-08-16T10:11:58+00:00
|
{"dataset_info": {"features": [{"name": "uid", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "clip_b32_similarity_score", "dtype": "float32"}, {"name": "clip_l14_similarity_score", "dtype": "float32"}, {"name": "face_bboxes", "sequence": {"sequence": "float64"}}, {"name": "sha256", "dtype": "string"}, {"name": "detected_language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1211544425.5798087, "num_examples": 3774475}], "download_size": 979844306, "dataset_size": 1211544425.5798087}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T11:10:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "datacomp_small_filtered_gcp"
More Information needed
|
[
"# Dataset Card for \"datacomp_small_filtered_gcp\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"datacomp_small_filtered_gcp\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"datacomp_small_filtered_gcp\"\n\nMore Information needed"
] |
12fc7cccf0dd24b628462b49ec7bfe98509e958b
|
# The South African Gov-ZA multilingual corpus
## About Dataset
The dataset contains cabinet statements from the South African government, maintained by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/). Data was scraped from the government's website:
https://www.gov.za/cabinet-statements
The datasets contain government cabinet statements in 11 languages, namely:
| Language | Code | Language | Code |
| ---------- | ---- | --------- | ----- |
| Afrikaans | (af) | Setswana | (tn) |
| English | (en) | Sepedi | (nso) |
| Sesotho | (st) | Siswati | (ss) |
| isiNdebele | (nr) | Tshivenda | (ve) |
| isiXhosa | (xh) | Xitsonga | (ts) |
| isiZulu | (zu) | | |
**Note:** The codes are assigned from the GCIS website; all codes except Sepedi (nso) follow the ISO 639-1 language code format, whereas Sepedi follows the ISO 639-2 language code format.
The dataset is in JSON format as follows:
```
[
{
"title": "Title in English",
"date": "DD MMM YYYY",
"datetime": "YYYY-MM-DD", #sometimes a timestamp
"url": "URL to original text",
"en": {
"text": "Cabinet",
"title": "Title in translated language",
"url": "URL to translated text"
},
"af" : {},
. . .
},
{},
. . .
]
```
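A minimal sketch of how the JSON could be consumed to build, for example, English-isiZulu aligned titles (the file name is hypothetical; field names follow the schema above):

```
import json

with open("cabinet_statements.json", encoding="utf-8") as f:  # hypothetical file name
    statements = json.load(f)

# Keep only statements that carry both an English and an isiZulu version.
pairs = [
    (s["en"]["title"], s["zu"]["title"])
    for s in statements
    if s.get("en") and s.get("zu")
]
print(len(pairs), "aligned English-isiZulu title pairs")
```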
## Disclaimer
This dataset contains machine-readable data extracted from online cabinet statements from the South African government, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Matimba Shingange
- Richard Lastrucci
- Isheanesu Joseph Dzingirai
- Jenalea Rajab
## Citation
**Paper**
[Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/pdf/2303.03750)
> @inproceedings{lastrucci-etal-2023-preparing,
>     title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
>     author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
>     booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
>     month = may,
>     year = "2023",
>     address = "Dubrovnik, Croatia",
>     publisher = "Association for Computational Linguistics",
>     url = "https://aclanthology.org/2023.rail-1.3",
>     pages = "18--25"
> }
**Dataset**
Vukosi Marivate, Matimba Shingange, Richard Lastrucci, Isheanesu Joseph Dzingirai, Jenalea Rajab. **The South African Gov-ZA multilingual corpus**, 2022
> @dataset{marivate_vukosi_2023_7635168,
>     author = {Marivate, Vukosi and
>               Shingange, Matimba and
>               Lastrucci, Richard and
>               Dzingirai, Isheanesu and
>               Rajab, Jenalea},
>     title = {The South African Gov-ZA multilingual corpus},
>     month = feb,
>     year = 2023,
>     publisher = {Zenodo},
>     version = {1.0},
>     doi = {10.5281/zenodo.7635168},
>     url = {https://doi.org/10.5281/zenodo.7635168}
> }
## Licences
* License for Data - [CC 4.0 BY](LICENSE_data.md)
* Licence for Code - [MIT License](LICENSE)
|
dsfsi/gov-za-monolingual
|
[
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:sot",
"language:nso",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:mit",
"multilingual",
"arxiv:2303.03750",
"region:us"
] |
2023-08-16T10:26:25+00:00
|
{"language": ["eng", "afr", "nbl", "xho", "zul", "sot", "nso", "tsn", "ssw", "ven", "tso"], "license": "mit", "task_categories": ["translation"], "pretty_name": "The Gov South African Multilingual Corpus", "tags": ["multilingual"], "arxiv": 2303.0375}
|
2023-08-16T16:57:47+00:00
|
[
"2303.03750"
] |
[
"eng",
"afr",
"nbl",
"xho",
"zul",
"sot",
"nso",
"tsn",
"ssw",
"ven",
"tso"
] |
TAGS
#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Southern Sotho #language-Pedi #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-mit #multilingual #arxiv-2303.03750 #region-us
|
The South African Gov-ZA multilingual corpus
============================================
About Dataset
-------------
The dataset contains cabinet statements from the South African government, maintained by the Government Communication and Information System (GCIS). Data was scraped from the government's website:
URL
The datasets contain government cabinet statements in 11 languages, namely:
The dataset is in JSON format as follows:
Disclaimer
----------
This dataset contains machine-readable data extracted from online cabinet statements from the South African government, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
Authors
-------
* Vukosi Marivate - @vukosi
* Matimba Shingange
* Richard Lastrucci
* Isheanesu Joseph Dzingirai
* Jenalea Rajab
Paper
Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora
>
> @inproceedings{lastrucci-etal-2023-preparing,
> title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
> author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
> booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
> month = may,
> year = "2023",
> address = "Dubrovnik, Croatia",
> publisher = "Association for Computational Linguistics",
> url = "URL
> pages = "18--25"
> }
>
>
>
Dataset
Vukosi Marivate, Matimba Shingange, Richard Lastrucci, Isheanesu Joseph Dzingirai, Jenalea Rajab. The South African Gov-ZA multilingual corpus, 2022
>
> @dataset{marivate\_vukosi\_2023\_7635168,
> author = {Marivate, Vukosi and
> Shingange, Matimba and
> Lastrucci, Richard and
> Dzingirai, Isheanesu and
> Rajab, Jenalea},
> title = {The South African Gov-ZA multilingual corpus},
> month = feb,
> year = 2023,
> publisher = {Zenodo},
> version = {1.0},
> doi = {10.5281/zenodo.7635168},
> url = {URL
> }
>
>
>
Licences
--------
* License for Data - CC 4.0 BY
* Licence for Code - MIT License
|
[] |
[
"TAGS\n#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Southern Sotho #language-Pedi #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-mit #multilingual #arxiv-2303.03750 #region-us \n"
] |
[
93
] |
[
"passage: TAGS\n#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Southern Sotho #language-Pedi #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-mit #multilingual #arxiv-2303.03750 #region-us \n"
] |
2284fd6c910d3e7feeb4ceed0d81b49a474f39aa
|
# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/rem_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T10:34:19+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:52+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
97
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of rem (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
b689fe84fdc2ceb01793dfeb9835327825e7a987
|
# The Vuk'uzenzele South African Multilingual Corpus
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
## About Dataset
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFs were obtained from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (nso) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu | (zul) | Xitsonga | (tso) |
| Sesotho | (sot) | | |
**Note:** The languages use the ISO 639-2 language codes.
The data is split by language in JSONL format and each row is of the form:
```
{
"title": "Title for article",
"author": "Author Name or Vukuzenzele",
"text": "Article text",
"edition": "Linked Magazine edition",
"language_code": "ISO 639-2 language code"
}
```
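A brief sketch of reading one language split from its JSONL file (the file name is hypothetical; fields follow the row schema above):

```
import json

articles = []
with open("vukuzenzele_zul.jsonl", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        row = json.loads(line)
        # Each row carries its own ISO 639-2 language code.
        assert row["language_code"] == "zul"
        articles.append(row)

print(articles[0]["title"], "-", articles[0]["edition"])
```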
## Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Andani Madodonga
- Daniel Njini
- Richard Lastrucci
- Isheanesu Dzingirai
- Jenalea Rajab
## Citation
**Paper**
[Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/pdf/2303.03750)
> @inproceedings{lastrucci-etal-2023-preparing,
>     title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
>     author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
>     booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
>     month = may,
>     year = "2023",
>     address = "Dubrovnik, Croatia",
>     publisher = "Association for Computational Linguistics",
>     url = "https://aclanthology.org/2023.rail-1.3",
>     pages = "18--25"
> }
**Dataset**
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
> @dataset{marivate_vukosi_2023_7598540,
>     author = {Marivate, Vukosi and
>               Njini, Daniel and
>               Madodonga, Andani and
>               Lastrucci, Richard and
>               Dzingirai, Isheanesu and
>               Rajab, Jenalea},
>     title = {The Vuk'uzenzele South African Multilingual Corpus},
>     month = feb,
>     year = 2023,
>     publisher = {Zenodo},
>     doi = {10.5281/zenodo.7598539},
>     url = {https://doi.org/10.5281/zenodo.7598539}
> }
## Licences
* License for Data - [CC 4.0 BY](LICENSE.data.md)
* Licence for Code - [MIT License](LICENSE.md)
|
dsfsi/vukuzenzele-monolingual
|
[
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:nso",
"language:sep",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:cc-by-4.0",
"multilingual",
"government",
"arxiv:2303.03750",
"region:us"
] |
2023-08-16T10:42:05+00:00
|
{"language": ["eng", "afr", "nbl", "xho", "zul", "nso", "sep", "tsn", "ssw", "ven", "tso"], "license": "cc-by-4.0", "task_categories": ["translation"], "pretty_name": "The Vuk'uzenzele South African Multilingual Corpus", "tags": ["multilingual", "government"], "arxiv": 2303.0375, "dataset_info": [{"config_name": "afr", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 462140, "num_examples": 130}, {"name": "test", "num_bytes": 117811, "num_examples": 28}, {"name": "eval", "num_bytes": 109553, "num_examples": 29}], "download_size": 431879, "dataset_size": 689504}, {"config_name": "eng", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 369888, "num_examples": 120}, {"name": "test", "num_bytes": 89637, "num_examples": 26}, {"name": "eval", "num_bytes": 77360, "num_examples": 26}], "download_size": 338733, "dataset_size": 536885}, {"config_name": "nbl", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 535653, "num_examples": 132}, {"name": "test", "num_bytes": 112521, "num_examples": 28}, {"name": "eval", "num_bytes": 125205, "num_examples": 29}], "download_size": 494289, "dataset_size": 773379}, {"config_name": "nso", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 538443, "num_examples": 128}, {"name": "test", "num_bytes": 129131, "num_examples": 27}, {"name": "eval", "num_bytes": 114196, "num_examples": 28}], "download_size": 452010, "dataset_size": 781770}, {"config_name": "sot", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 532606, "num_examples": 131}, {"name": "test", "num_bytes": 113414, "num_examples": 28}, {"name": "eval", "num_bytes": 118072, "num_examples": 29}], "download_size": 453603, "dataset_size": 764092}, {"config_name": "ssw", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 526390, "num_examples": 130}, {"name": "test", "num_bytes": 116446, "num_examples": 28}, {"name": "eval", "num_bytes": 121511, "num_examples": 29}], "download_size": 477822, "dataset_size": 764347}, {"config_name": "tsn", "features": [{"name": "title", "dtype": "string"}, {"name": "author", 
"dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 622646, "num_examples": 128}, {"name": "test", "num_bytes": 121183, "num_examples": 27}, {"name": "eval", "num_bytes": 127609, "num_examples": 28}], "download_size": 496882, "dataset_size": 871438}, {"config_name": "tso", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 546021, "num_examples": 128}, {"name": "test", "num_bytes": 120869, "num_examples": 28}, {"name": "eval", "num_bytes": 98419, "num_examples": 28}], "download_size": 446456, "dataset_size": 765309}, {"config_name": "ven", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 587325, "num_examples": 128}, {"name": "test", "num_bytes": 127171, "num_examples": 28}, {"name": "eval", "num_bytes": 109780, "num_examples": 28}], "download_size": 461952, "dataset_size": 824276}, {"config_name": "xho", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 518328, "num_examples": 130}, {"name": "test", "num_bytes": 120927, "num_examples": 28}, {"name": "eval", "num_bytes": 113282, "num_examples": 28}], "download_size": 478513, "dataset_size": 752537}, {"config_name": "zul", "features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "language_code", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 520964, "num_examples": 129}, {"name": "test", "num_bytes": 107058, "num_examples": 28}, {"name": "eval", "num_bytes": 107359, "num_examples": 28}], "download_size": 459835, "dataset_size": 735381}], "configs": [{"config_name": "afr", "data_files": [{"split": "train", "path": "afr/train-*"}, {"split": "test", "path": "afr/test-*"}, {"split": "eval", "path": "afr/eval-*"}]}, {"config_name": "eng", "data_files": [{"split": "train", "path": "eng/train-*"}, {"split": "test", "path": "eng/test-*"}, {"split": "eval", "path": "eng/eval-*"}]}, {"config_name": "nbl", "data_files": [{"split": "train", "path": "nbl/train-*"}, {"split": "test", "path": "nbl/test-*"}, {"split": "eval", "path": "nbl/eval-*"}]}, {"config_name": "nso", "data_files": [{"split": "train", "path": "nso/train-*"}, {"split": "test", "path": "nso/test-*"}, {"split": "eval", "path": "nso/eval-*"}]}, {"config_name": "sot", "data_files": [{"split": "train", "path": "sot/train-*"}, {"split": "test", "path": "sot/test-*"}, {"split": "eval", "path": "sot/eval-*"}]}, {"config_name": "ssw", "data_files": [{"split": "train", "path": "ssw/train-*"}, {"split": "test", "path": "ssw/test-*"}, {"split": "eval", "path": "ssw/eval-*"}]}, 
{"config_name": "tsn", "data_files": [{"split": "train", "path": "tsn/train-*"}, {"split": "test", "path": "tsn/test-*"}, {"split": "eval", "path": "tsn/eval-*"}]}, {"config_name": "tso", "data_files": [{"split": "train", "path": "tso/train-*"}, {"split": "test", "path": "tso/test-*"}, {"split": "eval", "path": "tso/eval-*"}]}, {"config_name": "ven", "data_files": [{"split": "train", "path": "ven/train-*"}, {"split": "test", "path": "ven/test-*"}, {"split": "eval", "path": "ven/eval-*"}]}, {"config_name": "xho", "data_files": [{"split": "train", "path": "xho/train-*"}, {"split": "test", "path": "xho/test-*"}, {"split": "eval", "path": "xho/eval-*"}]}, {"config_name": "zul", "data_files": [{"split": "train", "path": "zul/train-*"}, {"split": "test", "path": "zul/test-*"}, {"split": "eval", "path": "zul/eval-*"}]}]}
|
2023-12-06T10:12:42+00:00
|
[
"2303.03750"
] |
[
"eng",
"afr",
"nbl",
"xho",
"zul",
"nso",
"sep",
"tsn",
"ssw",
"ven",
"tso"
] |
TAGS
#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Pedi #language-Sìcìté Sénoufo #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-cc-by-4.0 #multilingual #government #arxiv-2303.03750 #region-us
|
The Vuk'uzenzele South African Multilingual Corpus
==================================================
Give Feedback: DSFSI Resource Feedback Form
About Dataset
-------------
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the Government Communication and Information System (GCIS).
The original raw PDFs were obtained from the Vuk'uzenzele website.
The datasets contain government magazine editions in 11 languages, namely: Afrikaans (afr), English (eng), isiNdebele (nbl), Sepedi (nso), Sesotho (sot), siSwati (ssw), Setswana (tsn), Xitsonga (tso), Tshivenda (ven), isiXhosa (xho) and isiZulu (zul).
The data is split by language in JSONL format and each row is of the form:
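The row layout is not reproduced above; judging from the features declared for this repository (`title`, `author`, `text`, `edition`, `language_code`), a JSONL line plausibly looks like the following (all values are illustrative, not taken from the corpus):

```json
{"title": "How to register a small business", "author": "Staff reporter", "text": "Full article body in the edition's language ...", "edition": "2022-03", "language_code": "eng"}
```

The stored features also include an `__index_level_0__` integer column, apparently an artifact of the export rather than part of the row format.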
Disclaimer
----------
This dataset contains machine-readable data extracted from PDF documents, from URL provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
Authors
-------
* Vukosi Marivate - @vukosi
* Andani Madodonga
* Daniel Njini
* Richard Lastrucci
* Isheanesu Dzingirai
* Jenalea Rajab
Paper
Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora
>
> @inproceedings{lastrucci-etal-2023-preparing,
> title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
> author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
> booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
> month = may,
> year = "2023",
> address = "Dubrovnik, Croatia",
> publisher = "Association for Computational Linguistics",
> url = "URL
> pages = "18--25"
> }
>
>
>
Dataset
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. The Vuk'uzenzele South African Multilingual Corpus, 2023
>
> @dataset{marivate\_vukosi\_2023\_7598540,
> author = {Marivate, Vukosi and
> Njini, Daniel and
> Madodonga, Andani and
> Lastrucci, Richard and
> Dzingirai, Isheanesu and
> Rajab, Jenalea},
> title = {The Vuk'uzenzele South African Multilingual Corpus},
> month = feb,
> year = 2023,
> publisher = {Zenodo},
> doi = {10.5281/zenodo.7598539},
> url = {URL
> }
>
>
>
Licences
--------
* License for Data - CC 4.0 BY
* Licence for Code - MIT License
|
[] |
[
"TAGS\n#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Pedi #language-Sìcìté Sénoufo #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-cc-by-4.0 #multilingual #government #arxiv-2303.03750 #region-us \n"
] |
[
104
] |
[
"passage: TAGS\n#task_categories-translation #language-English #language-Afrikaans #language-South Ndebele #language-Xhosa #language-Zulu #language-Pedi #language-Sìcìté Sénoufo #language-Tswana #language-Swati #language-Venda #language-Tsonga #license-cc-by-4.0 #multilingual #government #arxiv-2303.03750 #region-us \n"
] |
a6e44c12fd50c1840e2b05918bafb634a651f830
|
# Dataset of shinozaki_rika (Sword Art Online)
This is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/shinozaki_rika_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T10:46:09+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:54+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of shinozaki_rika (Sword Art Online)
This is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of shinozaki_rika (Sword Art Online)\n\nThis is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of shinozaki_rika (Sword Art Online)\n\nThis is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
85
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of shinozaki_rika (Sword Art Online)\n\nThis is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
b166836a1307ca086ea235d3f819895f18f6a802
|
# Dataset Card for "lima-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/lima-m
|
[
"region:us"
] |
2023-08-16T10:46:55+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2909080, "num_examples": 1030}], "download_size": 1697654, "dataset_size": 2909080}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-14T04:25:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lima-m"
More Information needed
|
[
"# Dataset Card for \"lima-m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lima-m\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lima-m\"\n\nMore Information needed"
] |
7115c901a25d759840a06bc217d097c8629571dd
|
# Dataset Card for "orca_minis-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/orca_minis-m
|
[
"region:us"
] |
2023-08-16T10:52:59+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 170844694, "num_examples": 104179}], "download_size": 79081160, "dataset_size": 170844694}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T10:53:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "orca_minis-m"
More Information needed
|
[
"# Dataset Card for \"orca_minis-m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"orca_minis-m\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"orca_minis-m\"\n\nMore Information needed"
] |
930ff5734fca446b5c6b3c5f127e04a4870c721d
|
# Dataset Card for "airo-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/airo-m
|
[
"region:us"
] |
2023-08-16T11:04:49+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58192355, "num_examples": 34204}], "download_size": 30109400, "dataset_size": 58192355}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-02T08:55:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "airo-m"
More Information needed
|
[
"# Dataset Card for \"airo-m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"airo-m\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"airo-m\"\n\nMore Information needed"
] |
a92412daf925dbd61eec946e6600ac5e47da61a1
|
# Dataset Card for "banel_jakir_arnob_training_dataset_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fia24/banel_jakir_arnob_training_dataset_90
|
[
"region:us"
] |
2023-08-16T11:12:28+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translation", "struct": [{"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1772766, "num_examples": 31757}, {"name": "test", "num_bytes": 197897, "num_examples": 3529}], "download_size": 1046720, "dataset_size": 1970663}}
|
2023-08-16T11:12:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "banel_jakir_arnob_training_dataset_90"
More Information needed
|
[
"# Dataset Card for \"banel_jakir_arnob_training_dataset_90\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"banel_jakir_arnob_training_dataset_90\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"banel_jakir_arnob_training_dataset_90\"\n\nMore Information needed"
] |
4e2d1dd3ebccd4e2906894fd50f4de6bd91dd35a
|
# Dataset Card for "oasst1-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/oasst1-m
|
[
"region:us"
] |
2023-08-16T11:13:21+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9384110, "num_examples": 8274}], "download_size": 5119052, "dataset_size": 9384110}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-14T03:17:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oasst1-m"
More Information needed
|
[
"# Dataset Card for \"oasst1-m\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oasst1-m\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oasst1-m\"\n\nMore Information needed"
] |
216a4094275e700664c010acf11f8c70e4f0b459
|
# Dataset of yui (Sword Art Online)
This is the dataset of yui (Sword Art Online), containing 106 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/yui_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T11:20:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of yui (Sword Art Online)
This is the dataset of yui (Sword Art Online), containing 106 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of yui (Sword Art Online)\n\nThis is the dataset of yui (Sword Art Online), containing 106 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of yui (Sword Art Online)\n\nThis is the dataset of yui (Sword Art Online), containing 106 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
79
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of yui (Sword Art Online)\n\nThis is the dataset of yui (Sword Art Online), containing 106 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
4118c4eeccf667c619058e6cc8c0b05b58bde6c7
|
# Dataset Card for "mix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/mix
|
[
"region:us"
] |
2023-08-16T11:21:20+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12278702, "num_examples": 9304}], "download_size": 6793704, "dataset_size": 12278702}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-15T13:12:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mix"
More Information needed
|
[
"# Dataset Card for \"mix\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mix\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mix\"\n\nMore Information needed"
] |
0232dbe75ae5cc04fe74bd558e52810e0d00e094
|
# Dataset of ronye_arabel (Sword Art Online)
This is the dataset of ronye_arabel (Sword Art Online), containing 38 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/ronye_arabel_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T11:34:13+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:09:58+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of ronye_arabel (Sword Art Online)
This is the dataset of ronye_arabel (Sword Art Online), containing 38 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of ronye_arabel (Sword Art Online)\n\nThis is the dataset of ronye_arabel (Sword Art Online), containing 38 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of ronye_arabel (Sword Art Online)\n\nThis is the dataset of ronye_arabel (Sword Art Online), containing 38 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
85
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of ronye_arabel (Sword Art Online)\n\nThis is the dataset of ronye_arabel (Sword Art Online), containing 38 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
2f7e3e312b35774187e0cbccc31e46a7eaad3e98
|
# Dataset Card for ARTigo: Social Image Tagging
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.artigo.org
- **Repository:** https://github.com/arthist-lmu/artigo
- **Data:** https://doi.org/10.5281/zenodo.8202331
### Dataset Summary
ARTigo (https://www.artigo.org/) is a Citizen Science project that has been jointly developed at the Institute for Art History and the Institute for Informatics at Ludwig Maximilian University of Munich since 2010. It enables participants to engage in the tagging of artworks, thus fostering knowledge accumulation and democratizing access to a traditionally elitist field. ARTigo is built as an interactive web application that offers Games With a Purpose: in them, players are presented with an image – and then challenged to communicate with one another using visual or textual annotations, *tags*, within a given time. Through this playful approach, the project aims to inspire greater appreciation for art and draw new audiences to museums and archives. It streamlines the discoverability of art-historical images, while promoting inclusivity, effective communication, and collaborative research practices. The project’s data are freely available to the wider research community for novel scientific investigations.
### Supported Tasks and Leaderboards
- `object-detection`: This dataset can be used to train models for object detection tasks on art-historical images.
- `image-classification`: This dataset can also be used for image classification tasks by using only the tags and not the associated region information.
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
```python
{
'id': 32254,
'hash_id': 'e34fa90bf4c73d20ac19b14fa615206e',
'titles': {
'id': [10893],
'name': ['Entwurf für ein zwölfteiliges Kartenspiel']
},
'creators': {
'id': [2391],
'name': ['Félix Vallotton']
},
'location': 'Lausanne',
'institution': 'Galerie du Chêne',
'source': {
'id': 2,
'name': 'Artemis',
'url': 'http://artemis.uni-muenchen.de/'
},
'path': 'https://api.artigo.org/media/e3/4f/e34fa90bf4c73d20ac19b14fa615206e.jpg',
'tags': {
'id': [6, 10, 13, ..., 206331],
'name': ['blau', 'feder', 'flügel', ..., 'herzober'],
'language': ['de', 'de', 'de', ..., 'de'],
'count': [16, 6, 6, ..., 1],
'regions': [None, None, None, ..., None]
},
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=381x600 at 0x7FEF3A415820>
}
```
### Data Fields
This dataset contains ten fields:
- `id`: a unique identifier for the image;
- `hash_id`: a unique identifier for the image based on its content (e.g., image hash);
- `titles`: a list of titles associated with the image, with each title having the following key-value pairs:
- `id`: a unique identifier for the title;
- `name`: the name of the title;
- `creators`: a list of creators associated with the image, with each creator having the following key-value pairs:
- `id`: a unique identifier for the creator;
- `name`: the name of the creator;
- `location`: the location associated with the image;
- `institution`: the institution that holds the image;
- `source`: information about the source of the image, with the following key-value pairs:
- `id`: a unique identifier for the source;
- `name`: the name of the source;
- `url`: the URL of the source;
- `path`: the path to the image file;
- `tags`: a list of tags associated with the image, with each tag having the following key-value pairs:
- `id`: a unique identifier for the tag;
- `name`: the name of the tag;
- `language`: the language of the tag (if available);
- `count`: the number of times the tag has been applied to the image;
- `regions`: the regions of the image to which the tag can be applied (if available);
- `image`: the image.
### Data Splits
This dataset doesn't provide any predefined train, validation or test splits.
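Since no predefined splits ship with the repository, a minimal sketch for deriving them with the standard `datasets` API (assuming the Hub id `biglam/artigo`; this is not an official recipe from the maintainers) could be:

```python
from datasets import load_dataset

# Load the single published split (note: the full download is several GB).
ds = load_dataset("biglam/artigo", split="train")

# Hold out 10% of the images for evaluation; fixed seed for reproducibility.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```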
## Additional Information
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@dataset{bry_et_al_artigo,
author = {Bry, François and
Kohle, Hubertus and
Krefeld, Thomas and
Riepl, Christian and
Schneider, Stefanie and
Schön, Gerhard and
Schulz, Klaus},
title = {{ARTigo}: Social Image Tagging (Aggregated Data)},
publisher = {Zenodo},
doi = {10.5281/zenodo.8202331},
url = {https://doi.org/10.5281/zenodo.8202331}}
```
|
biglam/artigo
|
[
"task_categories:object-detection",
"task_categories:image-classification",
"annotations_creators:crowd-generated",
"language:de",
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"lam",
"region:us"
] |
2023-08-16T11:39:37+00:00
|
{"annotations_creators": ["crowd-generated"], "language": ["de", "en", "fr"], "license": "cc-by-sa-4.0", "task_categories": ["object-detection", "image-classification"], "pretty_name": "ARTigo: Social Image Tagging", "tags": ["lam"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "hash_id", "dtype": "string"}, {"name": "titles", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}]}, {"name": "creators", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}]}, {"name": "location", "dtype": "string"}, {"name": "institution", "dtype": "string"}, {"name": "source", "struct": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "path", "dtype": "string"}, {"name": "tags", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "count", "dtype": "int64"}, {"name": "regions", "sequence": [{"name": "x", "dtype": "float64"}, {"name": "y", "dtype": "float64"}, {"name": "width", "dtype": "float64"}, {"name": "height", "dtype": "float64"}]}]}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 269611482, "num_examples": 60633}], "download_size": 5625395643, "dataset_size": 269611482}}
|
2023-08-23T10:35:47+00:00
|
[] |
[
"de",
"en",
"fr"
] |
TAGS
#task_categories-object-detection #task_categories-image-classification #annotations_creators-crowd-generated #language-German #language-English #language-French #license-cc-by-sa-4.0 #lam #region-us
|
# Dataset Card for ARTigo: Social Image Tagging
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Data: URL
### Dataset Summary
ARTigo (URL) is a Citizen Science project that has been jointly developed at the Institute for Art History and the Institute for Informatics at Ludwig Maximilian University of Munich since 2010. It enables participants to engage in the tagging of artworks, thus fostering knowledge accumulation and democratizing access to a traditionally elitist field. ARTigo is built as an interactive web application that offers Games With a Purpose: in them, players are presented with an image – and then challenged to communicate with one another using visual or textual annotations, *tags*, within a given time. Through this playful approach, the project aims to inspire greater appreciation for art and draw new audiences to museums and archives. It streamlines the discoverability of art-historical images, while promoting inclusivity, effective communication, and collaborative research practices. The project’s data are freely available to the wider research community for novel scientific investigations.
### Supported Tasks and Leaderboards
- 'object-detection': This dataset can be used to train models for object detection tasks on art-historical images.
- 'image-classification': This dataset can also be used for image classification tasks by using only the tags and not the associated region information.
## Dataset Structure
This dataset has a single configuration.
### Data Instances
An example instance from this dataset:
### Data Fields
This dataset contains ten fields:
- 'id': a unique identifier for the image;
- 'hash_id': a unique identifier for the image based on its content (e.g., image hash);
- 'titles': a list of titles associated with the image, with each title having the following key-value pairs:
- 'id': a unique identifier for the title;
- 'name': the name of the title;
- 'creators': a list of creators associated with the image, with each creator having the following key-value pairs:
- 'id': a unique identifier for the creator;
- 'name': the name of the creator;
- 'location': the location associated with the image;
- 'institution': the institution that holds the image;
- 'source': information about the source of the image, with the following key-value pairs:
- 'id': a unique identifier for the source;
- 'name': the name of the source;
- 'url': the URL of the source;
- 'path': the path to the image file;
- 'tags': a list of tags associated with the image, with each tag having the following key-value pairs:
- 'id': a unique identifier for the tag;
- 'name': the name of the tag;
- 'language': the language of the tag (if available);
- 'count': the number of times the tag has been applied to the image;
- 'regions': the regions of the image to which the tag can be applied (if available);
- 'image': the image.
### Data Splits
This dataset doesn't provide any predefined train, validation or test splits.
## Additional Information
### Licensing Information
CC BY-SA 4.0
|
[
"# Dataset Card for ARTigo: Social Image Tagging",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Data: URL",
"### Dataset Summary\n\nARTigo (URL is a Citizen Science project that has been jointly developed at the Institute for Art History and the Institute for Informatics at Ludwig Maximilian University of Munich since 2010. It enables participants to engage in the tagging of artworks, thus fostering knowledge accumulation and democratizing access to a traditionally elitist field. ARTigo is built as an interactive web application that offers Games With a Purpose: in them, players are presented with an image – and then challenged to communicate with one another using visual or textual annotations, *tags*, within a given time. Through this playful approach, the project aims to inspire greater appreciation for art and draw new audiences to museums and archives. It streamlines the discoverability of art-historical images, while promoting inclusivity, effective communication, and collaborative research practices. The project’s data are freely available to the wider research community for novel scientific investigations.",
"### Supported Tasks and Leaderboards\n\n- 'object-detection': This dataset can be used to train models for object detection tasks on art-historical images.\n- 'image-classification': This dataset can also be used for image classification tasks by using only the tags and not the associated region information.",
"## Dataset Structure\n\nThis dataset has a single configuration.",
"### Data Instances\n\nAn example instance from this dataset:",
"### Data Fields\n\nThis dataset contains ten fields:\n\n- 'id': a unique identifier for the image;\n- 'hash_id': a unique identifier for the image based on its content (e.g., image hash);\n- 'titles': a list of titles associated with the image, with each title having the following key-value pairs:\n - 'id': a unique identifier for the title;\n - 'name': the name of the title;\n- 'creators': a list of creators associated with the image, with each creator having the following key-value pairs:\n - 'id': a unique identifier for the creator;\n - 'name': the name of the creator;\n- 'location': the location associated with the image;\n- 'institution': the institution that holds the image;\n- 'source': information about the source of the image, with the following key-value pairs:\n - 'id': a unique identifier for the source;\n - 'name': the name of the source;\n - 'url': the URL of the source;\n- 'path': the path to the image file;\n- 'tags': a list of tags associated with the image, with each tag having the following key-value pairs:\n - 'id': a unique identifier for the tag;\n - 'name': the name of the tag;\n - 'language': the language of the tag (if available);\n - 'count': the number of times the tag has been applied to the image;\n - 'regions': the regions of the image to which the tag can be applied (if available);\n- 'image': the image.",
"### Data Splits\n\nThis dataset doesn't provide any predefined train, validation or test splits.",
"## Additional Information",
"### Licensing Information\n\nCC BY-SA 4.0"
] |
[
"TAGS\n#task_categories-object-detection #task_categories-image-classification #annotations_creators-crowd-generated #language-German #language-English #language-French #license-cc-by-sa-4.0 #lam #region-us \n",
"# Dataset Card for ARTigo: Social Image Tagging",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Data: URL",
"### Dataset Summary\n\nARTigo (URL is a Citizen Science project that has been jointly developed at the Institute for Art History and the Institute for Informatics at Ludwig Maximilian University of Munich since 2010. It enables participants to engage in the tagging of artworks, thus fostering knowledge accumulation and democratizing access to a traditionally elitist field. ARTigo is built as an interactive web application that offers Games With a Purpose: in them, players are presented with an image – and then challenged to communicate with one another using visual or textual annotations, *tags*, within a given time. Through this playful approach, the project aims to inspire greater appreciation for art and draw new audiences to museums and archives. It streamlines the discoverability of art-historical images, while promoting inclusivity, effective communication, and collaborative research practices. The project’s data are freely available to the wider research community for novel scientific investigations.",
"### Supported Tasks and Leaderboards\n\n- 'object-detection': This dataset can be used to train models for object detection tasks on art-historical images.\n- 'image-classification': This dataset can also be used for image classification tasks by using only the tags and not the associated region information.",
"## Dataset Structure\n\nThis dataset has a single configuration.",
"### Data Instances\n\nAn example instance from this dataset:",
"### Data Fields\n\nThis dataset contains ten fields:\n\n- 'id': a unique identifier for the image;\n- 'hash_id': a unique identifier for the image based on its content (e.g., image hash);\n- 'titles': a list of titles associated with the image, with each title having the following key-value pairs:\n - 'id': a unique identifier for the title;\n - 'name': the name of the title;\n- 'creators': a list of creators associated with the image, with each creator having the following key-value pairs:\n - 'id': a unique identifier for the creator;\n - 'name': the name of the creator;\n- 'location': the location associated with the image;\n- 'institution': the institution that holds the image;\n- 'source': information about the source of the image, with the following key-value pairs:\n - 'id': a unique identifier for the source;\n - 'name': the name of the source;\n - 'url': the URL of the source;\n- 'path': the path to the image file;\n- 'tags': a list of tags associated with the image, with each tag having the following key-value pairs:\n - 'id': a unique identifier for the tag;\n - 'name': the name of the tag;\n - 'language': the language of the tag (if available);\n - 'count': the number of times the tag has been applied to the image;\n - 'regions': the regions of the image to which the tag can be applied (if available);\n- 'image': the image.",
"### Data Splits\n\nThis dataset doesn't provide any predefined train, validation or test splits.",
"## Additional Information",
"### Licensing Information\n\nCC BY-SA 4.0"
] |
[
69,
12,
61,
18,
213,
73,
14,
14,
377,
26,
5,
11
] |
[
"passage: TAGS\n#task_categories-object-detection #task_categories-image-classification #annotations_creators-crowd-generated #language-German #language-English #language-French #license-cc-by-sa-4.0 #lam #region-us \n# Dataset Card for ARTigo: Social Image Tagging## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Data: URL### Dataset Summary\n\nARTigo (URL is a Citizen Science project that has been jointly developed at the Institute for Art History and the Institute for Informatics at Ludwig Maximilian University of Munich since 2010. It enables participants to engage in the tagging of artworks, thus fostering knowledge accumulation and democratizing access to a traditionally elitist field. ARTigo is built as an interactive web application that offers Games With a Purpose: in them, players are presented with an image – and then challenged to communicate with one another using visual or textual annotations, *tags*, within a given time. Through this playful approach, the project aims to inspire greater appreciation for art and draw new audiences to museums and archives. It streamlines the discoverability of art-historical images, while promoting inclusivity, effective communication, and collaborative research practices. The project’s data are freely available to the wider research community for novel scientific investigations.### Supported Tasks and Leaderboards\n\n- 'object-detection': This dataset can be used to train models for object detection tasks on art-historical images.\n- 'image-classification': This dataset can also be used for image classification tasks by using only the tags and not the associated region information.## Dataset Structure\n\nThis dataset has a single configuration.### Data Instances\n\nAn example instance from this dataset:"
] |
d85872e9c9fb68354d03bedbcc8440975e4cb31c
|
# Dataset Card for "processed_bert_dataset-datalore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ian-m/processed_bert_dataset-datalore
|
[
"region:us"
] |
2023-08-16T11:50:11+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24902388000.0, "num_examples": 6917330}], "download_size": 6083242697, "dataset_size": 24902388000.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T12:55:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed_bert_dataset-datalore"
More Information needed
|
[
"# Dataset Card for \"processed_bert_dataset-datalore\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset-datalore\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_bert_dataset-datalore\"\n\nMore Information needed"
] |
49c4644d3f9542c726773b0fbdc5ef379da69766
|
# Dataset Card for "test-data-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
enelpe/test-data-3
|
[
"region:us"
] |
2023-08-16T11:50:15+00:00
|
{"dataset_info": {"features": [{"name": "Sentences", "sequence": "string"}, {"name": "Labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 32999, "num_examples": 103}], "download_size": 0, "dataset_size": 32999}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T11:50:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test-data-3"
More Information needed
|
[
"# Dataset Card for \"test-data-3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test-data-3\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test-data-3\"\n\nMore Information needed"
] |
9bce36750f6e5f6e075595ee658740b5fe9d9485
|
# Dataset Card for "Formulas"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crewdon/Formulas
|
[
"region:us"
] |
2023-08-16T11:51:08+00:00
|
{"dataset_info": {"config_name": "crewdon", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 398193050, "num_examples": 195968}], "download_size": 93433372, "dataset_size": 398193050}, "configs": [{"config_name": "crewdon", "data_files": [{"split": "train", "path": "crewdon/train-*"}]}]}
|
2023-08-16T11:51:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Formulas"
More Information needed
|
[
"# Dataset Card for \"Formulas\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Formulas\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Formulas\"\n\nMore Information needed"
] |
ee542b9c3d6d9b7058abb6f4b1335d1fb377c098
|
## Dataset Description
TODO
### Dataset Summary
TODO
## Dataset Creation
TODO
|
alkzar90/mini-croupier
|
[
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"size_categories:n<1K",
"source_datasets:original",
"license:apache-2.0",
"mgt",
"magic-card-game",
"creature-dataset",
"region:us"
] |
2023-08-16T11:52:29+00:00
|
{"annotations_creators": ["found"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Mini-Croupier: Magic the Gathering creatures mini-dataset", "tags": ["mgt", "magic-card-game", "creature-dataset"]}
|
2023-08-16T22:03:28+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #mgt #magic-card-game #creature-dataset #region-us
|
## Dataset Description
TODO
### Dataset Summary
TODO
## Dataset Creation
TODO
|
[
"## Dataset Description\n\nTODO",
"### Dataset Summary\n\nTODO",
"## Dataset Creatioon\n\nTODO"
] |
[
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #mgt #magic-card-game #creature-dataset #region-us \n",
"## Dataset Description\n\nTODO",
"### Dataset Summary\n\nTODO",
"## Dataset Creatioon\n\nTODO"
] |
[
84,
6,
8,
8
] |
[
"passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #size_categories-n<1K #source_datasets-original #license-apache-2.0 #mgt #magic-card-game #creature-dataset #region-us \n## Dataset Description\n\nTODO### Dataset Summary\n\nTODO## Dataset Creatioon\n\nTODO"
] |
d8b1e15a9775ecdaaa6efc622206c01e4044bbd1
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-0-50000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rounak28/bengaliAI-preprocessed-whisper-medium-0-50000
|
[
"region:us"
] |
2023-08-16T11:58:37+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 48065858980, "num_examples": 50000}], "download_size": 6861840289, "dataset_size": 48065858980}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T12:06:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bengaliAI-preprocessed-whisper-medium-0-50000"
More Information needed
|
[
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-50000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-50000\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bengaliAI-preprocessed-whisper-medium-0-50000\"\n\nMore Information needed"
] |
587463ba064b28a5106d68f806b5691d2b25c82e
|
# Dataset Card for HeQ_v1
## Dataset Description
- **Homepage:** [HeQ - Hebrew Question Answering Dataset](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset)
- **Repository:** [GitHub Repository](https://github.com/NNLP-IL/Hebrew-Question-Answering-Dataset)
- **Paper:** [HeQ: A Dataset for Hebrew Question Answering](https://u.cs.biu.ac.il/~yogo/heq.pdf)
- **Leaderboard:** N/A
### Dataset Summary
HeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.
### Supported Tasks and Leaderboards
- **Task:** Question Answering
### Languages
- Hebrew (he)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- **ID:** `string`
- **Title:** `string`
- **Source:** `string`
- **Context:** `string`
- **Question:** `string`
- **Answers:** `string`
- **Is_Impossible:** `bool`
- **WH_Question:** `string`
- **Question_Quality:** `string`
### Data Splits
- **Train:** 27,142 examples
- **Test:** 1,504 examples
- **Validation:** 1,501 examples
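
A minimal loading sketch, assuming the Hub id `pig4431/HeQ_v1` and the split and field names listed in this card:

```python
from datasets import load_dataset

# Expected to return a DatasetDict with train/test/validation splits.
heq = load_dataset("pig4431/HeQ_v1")

example = heq["train"][0]
# Field names follow the card: ID, Title, Source, Context, Question,
# Answers, Is_Impossible, WH_Question, Question_Quality.
print(example["Question"])
print(example["Answers"])
```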
## Dataset Creation
### Curation Rationale
The dataset was created to provide a resource for question answering research in Hebrew.
### Source Data
#### Initial Data Collection and Normalization
Paragraphs were sourced from Hebrew Wikipedia and Geektime.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
A team of crowdworkers formulated and answered reading comprehension questions.
#### Who are the annotators?
crowdsourced
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: cc-by-4.0
### Citation Information
[More Information Needed]
### Contributions
Contributions and additional information are welcome.
|
pig4431/HeQ_v1
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:he",
"license:cc-by-4.0",
"region:us"
] |
2023-08-16T11:59:03+00:00
|
{"language": ["he"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]}
|
2023-08-16T12:13:16+00:00
|
[] |
[
"he"
] |
TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Hebrew #license-cc-by-4.0 #region-us
|
# Dataset Card for HeQ_v1
## Dataset Description
- Homepage: HeQ - Hebrew Question Answering Dataset
- Repository: GitHub Repository
- Paper: HeQ: A Dataset for Hebrew Question Answering
- Leaderboard: N/A
### Dataset Summary
HeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.
### Supported Tasks and Leaderboards
- Task: Question Answering
### Languages
- Hebrew (he)
## Dataset Structure
### Data Instances
### Data Fields
- ID: 'string'
- Title: 'string'
- Source: 'string'
- Context: 'string'
- Question: 'string'
- Answers: 'string'
- Is_Impossible: 'bool'
- WH_Question: 'string'
- Question_Quality: 'string'
### Data Splits
- Train: 27,142 examples
- Test: 1,504 examples
- Validation: 1,501 examples
## Dataset Creation
### Curation Rationale
The dataset was created to provide a resource for question answering research in Hebrew.
### Source Data
#### Initial Data Collection and Normalization
Paragraphs were sourced from Hebrew Wikipedia and Geektime.
#### Who are the source language producers?
### Annotations
#### Annotation process
A team of crowdworkers formulated and answered reading comprehension questions.
#### Who are the annotators?
crowdsourced
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
License: cc-by-4.0
### Contributions
Contributions and additional information are welcome.
|
[
"# Dataset Card for HeQ_v1",
"## Dataset Description\n\n- Homepage: HeQ - Hebrew Question Answering Dataset\n- Repository: GitHub Repository\n- Paper: HeQ: A Dataset for Hebrew Question Answering\n- Leaderboard: N/A",
"### Dataset Summary\n\nHeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.",
"### Supported Tasks and Leaderboards\n\n- Task: Question Answering",
"### Languages\n\n- Hebrew (he)",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- ID: 'string'\n- Title: 'string'\n- Source: 'string'\n- Context: 'string'\n- Question: 'string'\n- Answers: 'string'\n- Is_Impossible: 'bool'\n- WH_Question: 'string'\n- Question_Quality: 'string'",
"### Data Splits\n\n- Train: 27,142 examples\n- Test: 1,504 examples\n- Validation: 1,501 examples",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a resource for question answering research in Hebrew.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nParagraphs were sourced from Hebrew Wikipedia and Geektime.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nA team of crowdworkers formulated and answered reading comprehension questions.",
"#### Who are the annotators?\n\ncrowdsourced",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nLicense: cc-by-4.0",
"### Contributions\n\nContributions and additional information are welcome."
] |
[
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hebrew #license-cc-by-4.0 #region-us \n",
"# Dataset Card for HeQ_v1",
"## Dataset Description\n\n- Homepage: HeQ - Hebrew Question Answering Dataset\n- Repository: GitHub Repository\n- Paper: HeQ: A Dataset for Hebrew Question Answering\n- Leaderboard: N/A",
"### Dataset Summary\n\nHeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.",
"### Supported Tasks and Leaderboards\n\n- Task: Question Answering",
"### Languages\n\n- Hebrew (he)",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- ID: 'string'\n- Title: 'string'\n- Source: 'string'\n- Context: 'string'\n- Question: 'string'\n- Answers: 'string'\n- Is_Impossible: 'bool'\n- WH_Question: 'string'\n- Question_Quality: 'string'",
"### Data Splits\n\n- Train: 27,142 examples\n- Test: 1,504 examples\n- Validation: 1,501 examples",
"## Dataset Creation",
"### Curation Rationale\n\nThe dataset was created to provide a resource for question answering research in Hebrew.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nParagraphs were sourced from Hebrew Wikipedia and Geektime.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nA team of crowdworkers formulated and answered reading comprehension questions.",
"#### Who are the annotators?\n\ncrowdsourced",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nLicense: cc-by-4.0",
"### Contributions\n\nContributions and additional information are welcome."
] |
[
44,
10,
50,
61,
17,
10,
6,
6,
72,
28,
5,
25,
4,
25,
10,
5,
21,
12,
8,
8,
7,
8,
7,
5,
6,
14,
14
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hebrew #license-cc-by-4.0 #region-us \n# Dataset Card for HeQ_v1## Dataset Description\n\n- Homepage: HeQ - Hebrew Question Answering Dataset\n- Repository: GitHub Repository\n- Paper: HeQ: A Dataset for Hebrew Question Answering\n- Leaderboard: N/A### Dataset Summary\n\nHeQ is a question answering dataset in Modern Hebrew, consisting of 30,147 questions. It follows the format and crowdsourcing methodology of SQuAD and ParaShoot, with paragraphs sourced from Hebrew Wikipedia and Geektime.### Supported Tasks and Leaderboards\n\n- Task: Question Answering### Languages\n\n- Hebrew (he)## Dataset Structure### Data Instances### Data Fields\n\n- ID: 'string'\n- Title: 'string'\n- Source: 'string'\n- Context: 'string'\n- Question: 'string'\n- Answers: 'string'\n- Is_Impossible: 'bool'\n- WH_Question: 'string'\n- Question_Quality: 'string'### Data Splits\n\n- Train: 27,142 examples\n- Test: 1,504 examples\n- Validation: 1,501 examples## Dataset Creation### Curation Rationale\n\nThe dataset was created to provide a resource for question answering research in Hebrew.### Source Data#### Initial Data Collection and Normalization\n\nParagraphs were sourced from Hebrew Wikipedia and Geektime.#### Who are the source language producers?### Annotations#### Annotation process\n\nA team of crowdworkers formulated and answered reading comprehension questions.#### Who are the annotators?\n\ncrowdsourced### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nLicense: cc-by-4.0### Contributions\n\nContributions and additional information are welcome."
] |
de502ddff09ca76b302f9939b399f63010851808
|
# Dataset of sachi (Sword Art Online)
This is the dataset of sachi (Sword Art Online), containing 65 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/sachi_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T11:59:04+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of sachi (Sword Art Online)
This is the dataset of sachi (Sword Art Online), containing 65 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of sachi (Sword Art Online)\n\nThis is the dataset of sachi (Sword Art Online), containing 65 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of sachi (Sword Art Online)\n\nThis is the dataset of sachi (Sword Art Online), containing 65 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
79
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of sachi (Sword Art Online)\n\nThis is the dataset of sachi (Sword Art Online), containing 65 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
2cb991aab876711c69ad9fece08cf7de986ebed4
|
# Dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/ram_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T12:00:05+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:02+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
97
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of ram (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
fedcada326b21ba5dfbabcf9f8d401f0e0fcb778
|
# Dataset Card for gt-doremiti-instructions
## Dataset Description
An instruction set for fine-tuning an LLM, following the recommendations of the Stanford-Alpaca project (https://github.com/tatsu-lab/stanford_alpaca).
These instructions are extracted from the FAQ created by the GT DOREMITI, available at this address (https://gt-atelier-donnees.miti.cnrs.fr/faq.html).
The data is made available under the terms of the Creative Commons Attribution 4.0 International License.
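As a rough illustration, Stanford-Alpaca-style instruction records are typically JSON objects with `instruction`, `input`, and `output` fields; whether this repository uses exactly those field names is an assumption here, not something the card confirms. A minimal loading sketch:

```python
from datasets import load_dataset

# Minimal sketch: load the instruction set from the Hugging Face Hub.
# The "train" split name and the Alpaca-style field names (instruction,
# input, output) are assumptions -- inspect the first record to confirm.
dataset = load_dataset("Gt-Doremiti/gt-doremiti-instructions")
print(dataset["train"][0])
```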
|
Gt-Doremiti/gt-doremiti-instructions
|
[
"task_categories:text-generation",
"language:fr",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] |
2023-08-16T12:09:26+00:00
|
{"language": ["fr"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "gt-doremiti-instructions", "tags": ["instruction-finetuning"]}
|
2023-08-16T12:26:07+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #language-French #license-cc-by-4.0 #instruction-finetuning #region-us
|
# Dataset Card for gt-doremiti-instructions
## Dataset Description
An instruction set for fine-tuning an LLM, following the recommendations of the Stanford-Alpaca project (URL
These instructions are extracted from the FAQ created by the GT DOREMITI, available at this address (URL
The data is made available under the terms of the Creative Commons Attribution 4.0 International License.
|
[
"# Dataset Card for gt-doremiti-instructions",
"## Dataset Description\n\nJeu d'instruction pour fine-tuner un LLM suivant les préconisations du projet Stanford-Alpaca (URL\n\nCes instructions sont extraites de la FAQ crée par le GT DOREMITI et disponible à cette adresse (URL\n\nLes données sont mise à disposition selon les termes de la Licence Creative Commons Attribution 4.0 International."
] |
[
"TAGS\n#task_categories-text-generation #language-French #license-cc-by-4.0 #instruction-finetuning #region-us \n",
"# Dataset Card for gt-doremiti-instructions",
"## Dataset Description\n\nJeu d'instruction pour fine-tuner un LLM suivant les préconisations du projet Stanford-Alpaca (URL\n\nCes instructions sont extraites de la FAQ crée par le GT DOREMITI et disponible à cette adresse (URL\n\nLes données sont mise à disposition selon les termes de la Licence Creative Commons Attribution 4.0 International."
] |
[
38,
14,
73
] |
[
"passage: TAGS\n#task_categories-text-generation #language-French #license-cc-by-4.0 #instruction-finetuning #region-us \n# Dataset Card for gt-doremiti-instructions## Dataset Description\n\nJeu d'instruction pour fine-tuner un LLM suivant les préconisations du projet Stanford-Alpaca (URL\n\nCes instructions sont extraites de la FAQ crée par le GT DOREMITI et disponible à cette adresse (URL\n\nLes données sont mise à disposition selon les termes de la Licence Creative Commons Attribution 4.0 International."
] |
50fccfe1a9a9ab9c022574f0347ad26a0a2a2246
|
# Dataset of sakuya (Sword Art Online)
This is the dataset of sakuya (Sword Art Online), containing 24 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/sakuya_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T12:10:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:04+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of sakuya (Sword Art Online)
This is the dataset of sakuya (Sword Art Online), containing 24 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of sakuya (Sword Art Online)\n\nThis is the dataset of sakuya (Sword Art Online), containing 24 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of sakuya (Sword Art Online)\n\nThis is the dataset of sakuya (Sword Art Online), containing 24 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
81
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of sakuya (Sword Art Online)\n\nThis is the dataset of sakuya (Sword Art Online), containing 24 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
39d3d5c3c8ca40b0d67938aee5b3ad9194601ede
|
# Dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/beatrice_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T12:42:13+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of beatrice (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
77fddd13793064bd4fd04a271226ea99232a16ec
|
# Dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 30 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/elsa_granhiert_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T12:51:37+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:08+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 30 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 30 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 30 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
107
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of elsa_granhiert (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 30 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
7683114660f47f02991fba27b2bd46de97a06e7f
|
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ian-m/processed_bert_dataset
|
[
"region:us"
] |
2023-08-16T12:53:56+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24902388000.0, "num_examples": 6917330}], "download_size": 6083242604, "dataset_size": 24902388000.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T13:21:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed_bert_dataset"
More Information needed
|
[
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
ec3de71ed4bd22c4d7b11a4fc3a6dc3dc2b5496f
|
# Dataset Card for "textbooks-filtering-600-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/textbooks-filtering-600-samples
|
[
"region:us"
] |
2023-08-16T12:54:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "llama_70b_sample_prompt0", "path": "data/llama_70b_sample_prompt0-*"}, {"split": "llama_70b_greedy", "path": "data/llama_70b_greedy-*"}, {"split": "llama_70b_greedy_discrete", "path": "data/llama_70b_greedy_discrete-*"}, {"split": "llama_70b_greedy_no_conf", "path": "data/llama_70b_greedy_no_conf-*"}, {"split": "llama_70b_greedy_no_conf_noprefix", "path": "data/llama_70b_greedy_no_conf_noprefix-*"}, {"split": "llama_70b_meta", "path": "data/llama_70b_meta-*"}, {"split": "llama_70b_nometa", "path": "data/llama_70b_nometa-*"}, {"split": "llama_70b_meta_v2", "path": "data/llama_70b_meta_v2-*"}, {"split": "chatgpt", "path": "data/chatgpt-*"}, {"split": "gpt4", "path": "data/gpt4-*"}]}], "dataset_info": {"features": [{"name": "completion", "dtype": "string"}, {"name": "eval_prompt_header", "dtype": "string"}, {"name": "generation_config", "struct": [{"name": "temperature", "dtype": "float64"}, {"name": "top_p", "dtype": "float64"}]}, {"name": "prompt", "dtype": "string"}, {"name": "review_model", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "llama_70b_sample_prompt0", "num_bytes": 2756529, "num_examples": 600}, {"name": "llama_70b_greedy", "num_bytes": 3139908, "num_examples": 600}, {"name": "llama_70b_greedy_discrete", "num_bytes": 3138291, "num_examples": 600}, {"name": "llama_70b_greedy_no_conf", "num_bytes": 3359124, "num_examples": 600}, {"name": "llama_70b_greedy_no_conf_noprefix", "num_bytes": 3461124, "num_examples": 600}, {"name": "llama_70b_meta", "num_bytes": 3085159, "num_examples": 600}, {"name": "llama_70b_nometa", "num_bytes": 3068954, "num_examples": 600}, {"name": "llama_70b_meta_v2", "num_bytes": 3327190, "num_examples": 600}, {"name": "chatgpt", "num_bytes": 2772298, "num_examples": 600}, {"name": "gpt4", "num_bytes": 2800099, "num_examples": 600}], "download_size": 1748097, "dataset_size": 30908676}}
|
2023-08-22T21:18:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "textbooks-filtering-600-samples"
More Information needed
|
[
"# Dataset Card for \"textbooks-filtering-600-samples\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"textbooks-filtering-600-samples\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"textbooks-filtering-600-samples\"\n\nMore Information needed"
] |
4af834f8b83bea887a0550d7652c5a8068568f02
|
# Dataset Card for "FormulasMax500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crewdon/FormulasMax500
|
[
"region:us"
] |
2023-08-16T13:01:33+00:00
|
{"dataset_info": {"config_name": "crewdon", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35055132, "num_examples": 154634}], "download_size": 6463417, "dataset_size": 35055132}, "configs": [{"config_name": "crewdon", "data_files": [{"split": "train", "path": "crewdon/train-*"}]}]}
|
2023-08-16T13:01:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "FormulasMax500"
More Information needed
|
[
"# Dataset Card for \"FormulasMax500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"FormulasMax500\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"FormulasMax500\"\n\nMore Information needed"
] |
1e27420a50416125236bd3bebd83b862050bf131
|
# Dataset Card for "monitorul_trial"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
coralexbadea/monitorul_trial
|
[
"region:us"
] |
2023-08-16T13:12:27+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5528534, "num_examples": 441}], "download_size": 2082949, "dataset_size": 5528534}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T13:12:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "monitorul_trial"
More Information needed
|
[
"# Dataset Card for \"monitorul_trial\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"monitorul_trial\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"monitorul_trial\"\n\nMore Information needed"
] |
c999fe4348222f46d539623cc43d7c1f1d9a278c
|
# Dataset Card for "job_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
CaoHaiNam/job_dataset
|
[
"region:us"
] |
2023-08-16T13:13:56+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 250564266, "num_examples": 388293}], "download_size": 101073533, "dataset_size": 250564266}}
|
2023-08-16T13:14:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "job_dataset"
More Information needed
|
[
"# Dataset Card for \"job_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"job_dataset\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"job_dataset\"\n\nMore Information needed"
] |
f89544164a29a9eec66490bf6d0cd571ceab4358
|
# Dataset Card for ACLUE
- **Homepage:** [https://github.com/isen-zhang/ACLUE](https://github.com/isen-zhang/ACLUE)
- **Repository:** [https://huggingface.co/datasets/tyouisen/aclue](https://huggingface.co/datasets/tyouisen/aclue)
- **Paper:** [https://arxiv.org/abs/2310.0955](https://arxiv.org/abs/2310.0955)
- **Leaderboard:** [https://github.com/isen-zhang/ACLUE](https://github.com/isen-zhang/ACLUE)
### 简介 (Introduction)
Ancient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。
The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.
### 数据 (Data)
该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。
The benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test datasets for each of the 15 tasks, with 5 questions in the development set and 100+ questions in the test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources and automatically generated questions from classical Chinese language corpora. The questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks.
### 数据实例( Data Instances)
数据集中的每个问题都是一个包含4个选项的多项选择题,其中只有一个选项是正确答案。以下是两个示例:
Each question in the dataset is a multiple-choice question with 4 choices, of which only one is correct. Here are two examples:
```
以下是关于{古诗词曲鉴赏}的单项选择题,请直接给出正确答案的选项。
题目:《木兰诗--北朝民歌》唧唧复唧唧,木兰当户织。不闻机杼声,唯闻女叹息。问女何所思,问女何所忆。女亦无所思,女亦无所忆。昨夜见军帖,可汗大点兵,军书十二卷,卷卷有爷名。阿爷无大儿,木兰无长兄,愿为市鞍马,从此替爷征。东市买骏马,西市买鞍鞯,南市买辔头,北市买长鞭。旦辞爷娘去,暮宿黄河边,不闻爷娘唤女声,但闻黄河流水鸣溅溅。旦辞黄河去,暮至黑山头,不闻爷娘唤女声,但闻燕山胡骑鸣啾啾。万里赴戎机,关山度若飞。朔气传金柝,寒光照铁衣。将军百战死,壮士十年归。归来见天子,天子坐明堂。策勋十二转,赏赐百千强。可汗问所欲,木兰不用尚书郎,愿驰千里足,送儿还故乡。爷娘闻女来,出郭相扶将;阿姊闻妹来,当户理红妆;小弟闻姊来,磨刀霍霍向猪羊。开我东阁门,坐我西阁床。脱我战时袍,著我旧时裳。当窗理云鬓,对镜帖花黄。出门看火伴,火伴皆惊忙:同行十二年,不知木兰是女郎。雄兔脚扑朔,雌兔眼迷离;双兔傍地走,安能辨我是雄雌?下列对这首诗的理解和分析,不正确的一项是 ()
A. 《木兰诗》是南北朝时期的一首长篇叙事民歌,风格刚健质朴。全诗以“木兰是女郎”来构思木兰的传奇故事,富有浪漫色彩。
B. “愿为市鞍马”的“市”是“市场”的意思,“万里赴戎机”的“戎机”是“战事”的意思。
C. 木兰“不用尚书郎”而愿“还故乡”固然有对家乡的眷恋,但也有自己女儿身秘密的因素。
D. “朔气传金柝,寒光照铁衣”运用对偶手法,描写了木兰在边塞艰苦的军旅生活。
答案是:B
```
```
题目:《虞美人》李煜。春花秋月何时了?往事知多少。小楼昨夜又东风,故国不堪回首月明中。雕栏玉砌应犹在,只是朱颜改。问君能有几多愁?恰似一江春水向东流。对《虞美人》的赏析,不恰当的一项是()
A. 词作从眼前景物入手,生发联想和想像,追怀昔日帝王生活,描摹了一幅幅鲜活的画面,隐晦地表达出叛逆之情,惹恼了宋太宗,铸成了词人悲惨结局。
B. 词作以实虚相间的手法来绘景、抒情、达意,忽而写眼前,忽而写想像。
C. 《虞美人》乃李煜绝笔词
D. 《虞美人》以其形式别致给人美感愉悦。
答案是:
```
以下列出了任务的类别、实例数量、问题平均长度以及任务的来源:
The category, number of instances, average length of the question, and origin of the tasks are provided below:
| Task | Total Q. | Avg. len |Task (zh) | Category | Origin |
|-------------------------------|------|------|-----------------------------------|----------|-----------|
| Named entity recognition | 500 | 138 | 古汉语命名体识别 | lexical | generated |
| Polysemy resolution | 500 | 116 | 古文单字多义 | lexical | generated |
| Homographic character resolution | 500 | 137 | 通假字 | lexical | generated |
| Sentence segmentation | 500 | 210 | 古文断句 | syntactic| generated |
| Couplet prediction | 500 | 62 | 对联预测 | semantic | generated |
| Poetry context prediction | 500 | 77 | 古诗词上下句预测 | semantic | generated |
| Poetry sentiment analysis | 500 | 60 | 诗词情感分类 | inference| generated |
| Poem quality estimation | 406 | 118 | 古诗词质量评估 | inference| generated |
| Ancient Chinese medical | 211 | 38 | 医古文 | knowledge| collected |
| Ancient Chinese literature | 160 | 44 | 古代文学知识 | knowledge| collected |
| Traditional Chinese culture | 136 | 59 | 国学常识 | knowledge| collected |
| Poetry appreciation | 103 | 258 | 古诗词曲鉴赏 | inference| collected |
| Basic ancient Chinese | 249 | 52 | 基础古汉语知识 | knowledge| collected |
| Reading comprehension | 101 | 982 | 古文阅读理解 | inference| collected |
| Ancient Chinese phonetics | 101 | 50 | 古音学 | knowledge| collected |
#### 加载数据 (Load data)
```python
task_list = ['polysemy_resolution',
'poetry_sentiment_analysis',
'named_entity_recognition',
'basic_ancient_chinese',
'poetry_context_prediction',
'sentence_segmentation',
'couplet_prediction',
'poetry_appreciate',
'ancient_chinese_culture',
'ancient_phonetics',
'homographic_character_resolution',
'ancient_literature',
'ancient_medical',
'poetry_quality_assessment',
'reading_comprehension']
from datasets import load_dataset
dataset = {k: load_dataset("tyouisen/aclue", k) for k in task_list}
# Print an example:
print(dataset['polysemy_resolution']['test'][0])
# Or download specific dataset:
dataset = load_dataset("tyouisen/aclue", "couplet_prediction", split="test") # or split = "dev"
```
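The sketch below shows how an evaluation prompt could be assembled from one of these multiple-choice rows. The field names `Question`, `A`, `B`, `C`, `D` are illustrative assumptions, not the confirmed schema:

```python
def build_prompt(example):
    # Hypothetical field names; print one loaded row first to confirm
    # the actual schema before relying on this.
    choices = "\n".join(f"{k}. {example[k]}" for k in ("A", "B", "C", "D"))
    return f"题目:{example['Question']}\n{choices}\n答案是:"

# Example usage (under the schema assumption above):
# print(build_prompt(dataset["couplet_prediction"]["test"][0]))
```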
### 引用 (Citation)
```
@inproceedings{zhang-li-2023-large,
title = "Can Large Langauge Model Comprehend {A}ncient {C}hinese? A Preliminary Test on {ACLUE}",
author = "Zhang, Yixuan and Li, Haonan",
booktitle = "Proceedings of the Ancient Language Processing Workshop",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.alp-1.9",
pages = "80--87"
}
```
### 许可证 (License)
ACLUE数据集采用:(The ACLUE dataset is licensed under a:)
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
tyouisen/aclue
|
[
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:zh",
"license:cc-by-nc-4.0",
"llm",
"Ancient Chinese",
"Evaluation",
"chinese",
"arxiv:2310.0955",
"region:us"
] |
2023-08-16T13:14:21+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["multiple-choice", "question-answering"], "pretty_name": "ACLUE", "tags": ["llm", "Ancient Chinese", "Evaluation", "chinese"]}
|
2024-01-29T12:16:33+00:00
|
[
"2310.0955"
] |
[
"zh"
] |
TAGS
#task_categories-multiple-choice #task_categories-question-answering #size_categories-1M<n<10M #language-Chinese #license-cc-by-nc-4.0 #llm #Ancient Chinese #Evaluation #chinese #arxiv-2310.0955 #region-us
|
Dataset Card for ACLUE
======================
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard: URL
### 简介 (Introduction)
Ancient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。
The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.
### 数据 (Data)
该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。
The benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test datasets for each of the 15 tasks, with 5 questions in the development set and 100+ questions in the test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources and automatically generated questions from classical Chinese language corpora. The questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks.
### 数据实例( Data Instances)
数据集中的每个问题都是一个包含4个选项的多项选择题,其中只有一个选项是正确答案。以下是两个示例:
Each question in the dataset is a multiple-choice question with 4 choices, of which only one is correct. Here are two examples:
以下列出了任务的类别、实例数量、问题平均长度以及任务的来源:
The category, number of instances, average length of the question, and origin of the tasks are provided below:
#### 加载数据 (Load data)
### 引用 (Citation)
### 许可证 (License)
ACLUE数据集采用:(The ACLUE dataset is licensed under a:)
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
|
[
"### 简介 (Introduction)\n\n\nAncient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。\n\n\nThe Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.",
"### 数据 (Data)\n\n\n该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。\n\n\nThe benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test dataset for each of 15 tasks, with 5 questions in development set and 100+ quesitons in test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatic generated questions from classical Chinese language corpora. The range of questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks.",
"### 数据实例( Data Instances)\n\n\n数据集中的每个问题都是一个包含4个选项的多项选择题,其中只有一个选项是正确答案。以下是两个示例:\n\n\nEach question in the dataset is a multiple-choice questions with 4 choices and only one choice as the correct answer. Here are two examples:\n\n\n以下列出了任务的类别、实例数量、问题平均长度以及任务的来源:\n\n\nThe category, number of instances, average length of the question, and origin of the tasks are provided below:",
"#### 加载数据 (Load data)",
"### 引用 (Citation)",
"### 许可证 (License)\n\n\nACLUE数据集采用:(The ACLUE dataset is licensed under a:)\n\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
[
"TAGS\n#task_categories-multiple-choice #task_categories-question-answering #size_categories-1M<n<10M #language-Chinese #license-cc-by-nc-4.0 #llm #Ancient Chinese #Evaluation #chinese #arxiv-2310.0955 #region-us \n",
"### 简介 (Introduction)\n\n\nAncient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。\n\n\nThe Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.",
"### 数据 (Data)\n\n\n该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。\n\n\nThe benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test dataset for each of 15 tasks, with 5 questions in development set and 100+ quesitons in test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatic generated questions from classical Chinese language corpora. The range of questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks.",
"### 数据实例( Data Instances)\n\n\n数据集中的每个问题都是一个包含4个选项的多项选择题,其中只有一个选项是正确答案。以下是两个示例:\n\n\nEach question in the dataset is a multiple-choice questions with 4 choices and only one choice as the correct answer. Here are two examples:\n\n\n以下列出了任务的类别、实例数量、问题平均长度以及任务的来源:\n\n\nThe category, number of instances, average length of the question, and origin of the tasks are provided below:",
"#### 加载数据 (Load data)",
"### 引用 (Citation)",
"### 许可证 (License)\n\n\nACLUE数据集采用:(The ACLUE dataset is licensed under a:)\n\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
[
81,
96,
317,
120,
11,
8,
43
] |
[
"passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #size_categories-1M<n<10M #language-Chinese #license-cc-by-nc-4.0 #llm #Ancient Chinese #Evaluation #chinese #arxiv-2310.0955 #region-us \n### 简介 (Introduction)\n\n\nAncient Chinese Language Understanding Evaluation (ACLUE) 是一个面向古代汉语的评估基准,旨在帮助评估大型语言模型在古代汉语上的表现。\n\n\nThe Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models (LLMs) on understanding ancient Chinese.### 数据 (Data)\n\n\n该基准测试包含15个任务,涵盖了各个领域,包括词汇、句法、语义、推理和知识。我们为这15个任务提供了开发集和测试集数据,开发集中有5个问题,而测试集中则有100多个问题。我们鼓励研究人员使用ACLUE来测试和提升其模型在古代汉语语言理解方面的能力。ACLUE的任务取自人工挑选的公开资源和自动生成的古代汉语语料库。这些问题涵盖了从夏朝(公元前2070年)到明朝(公元1368年)的广泛时间范围。ACLUE对所有任务都采用了多项选择题的形式。\n\n\nThe benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. We provide development and test dataset for each of 15 tasks, with 5 questions in development set and 100+ quesitons in test set. We encourage researchers to use ACLUE to test and enhance their models' abilities in ancient Chinese language understanding. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatic generated questions from classical Chinese language corpora. The range of questions span from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE employs a multiple-choice question format for all tasks."
] |
971912a24c728ca806accc492279590008d2108e
|
# Dataset Card for "datacomp_small_llamav2_classified_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nielsr/datacomp_small_llamav2_classified_v4
|
[
"region:us"
] |
2023-08-16T13:29:55+00:00
|
{"dataset_info": {"features": [{"name": "uid", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "original_width", "dtype": "int64"}, {"name": "original_height", "dtype": "int64"}, {"name": "clip_b32_similarity_score", "dtype": "float32"}, {"name": "clip_l14_similarity_score", "dtype": "float32"}, {"name": "face_bboxes", "sequence": {"sequence": "float64"}}, {"name": "sha256", "dtype": "string"}, {"name": "detected_language", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16900904, "num_examples": 50000}], "download_size": 12980611, "dataset_size": 16900904}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T13:29:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "datacomp_small_llamav2_classified_v4"
More Information needed
|
[
"# Dataset Card for \"datacomp_small_llamav2_classified_v4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"datacomp_small_llamav2_classified_v4\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"datacomp_small_llamav2_classified_v4\"\n\nMore Information needed"
] |
068322981c9f848667ad332ef3b293e885064734
|
# Dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 86 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/pandora_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T13:30:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:10+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 86 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 86 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 86 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of pandora (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 86 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
0ab1018b79efaa54ba8512fc3d5c942f31810ce4
|
# Dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 60 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/crusch_karsten_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T13:56:09+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:12+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 60 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 60 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 60 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
105
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of crusch_karsten (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 60 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
e0cbd64174affcd34dde8c736f39caef9df02f66
|
# Dataset of serena/セレナ (Pokémon)
This is the dataset of serena/セレナ (Pokémon), containing 500 images and their tags.
The core tags of this character are `long_hair, blue_eyes, hat, blonde_hair, breasts, sunglasses, eyelashes, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 605.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 363.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1181 | 744.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 542.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1181 | 1021.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/serena_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
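For the IMG+TXT packages, each image is assumed to ship with a same-named `.txt` file holding its tags; that pairing convention is inferred from the package descriptions above rather than documented. A minimal sketch for the 800px package:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download and extract the 800px IMG+TXT package (sketch; the pairing of
# images with same-named .txt tag files is an assumption, see above).
zip_file = hf_hub_download(
    repo_id='CyberHarem/serena_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',
)
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# Print the tags stored next to each image.
for name in os.listdir(dataset_dir):
    stem, ext = os.path.splitext(name)
    if ext.lower() in {'.png', '.jpg', '.jpeg', '.webp'}:
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, 'r', encoding='utf-8') as f:
                print(name, '->', f.read().strip())
```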
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 33 |  |  |  |  |  | 1girl, eyewear_on_headwear, pleated_skirt, red_skirt, sleeveless_shirt, solo, bracelet, black_thighhighs, pink_bag, pink_headwear, collared_shirt, looking_at_viewer, white-framed_eyewear, black_shirt, grey_eyes, high-waist_skirt, handbag, open_mouth, :d, blush, red_headwear, shoes |
| 1 | 7 |  |  |  |  |  | 1girl, collared_shirt, eyewear_on_headwear, pleated_skirt, red_skirt, sleeveless_shirt, white-framed_eyewear, high-waist_skirt, looking_at_viewer, pink_headwear, red_headwear, solo, black_shirt, black_thighhighs, parted_lips, floating_hair, sitting, white_background, zettai_ryouiki |
| 2 | 8 |  |  |  |  |  | 1girl, black_thighhighs, solo, eyewear_on_head, pleated_skirt, sleeveless, smile, bracelet, zettai_ryouiki |
| 3 | 10 |  |  |  |  |  | 1girl, blush, day, outdoors, solo, black_thighhighs, looking_at_viewer, open_mouth, sky, tree, cloud, no_panties, pleated_skirt, red_skirt, sleeveless_shirt, pink_headwear, :d, black_shirt, tongue, anus, bow, bush, flower, from_behind, looking_back, pussy_juice, bare_shoulders, grass, shiny, sweat, uncensored |
| 4 | 34 |  |  |  |  |  | 1girl, nipples, blush, open_mouth, navel, 1boy, hetero, pussy, penis, sex, vaginal, mosaic_censoring, spread_legs, day, collarbone, light_brown_hair, outdoors, shiny_skin, tongue, completely_nude, solo_focus, grass, looking_at_viewer, shiny_hair, smile, cum, sweat, tree |
| 5 | 5 |  |  |  |  |  | 1girl, cloud, looking_at_viewer, navel, outdoors, solo, blush, day, medium_breasts, ocean, water, wet, beach, blue_sky, closed_mouth, nipples, shiny, bangs, cleavage, collarbone, completely_nude, front-tie_top, pink_bikini, pussy, rock, side-tie_bikini_bottom, smile, standing, wading |
| 6 | 5 |  |  |  |  |  | 1girl, heart, looking_at_viewer, anus, blush, female_pubic_hair, solo, uncensored, ass, choker, grin, nude, on_back, presenting, spread_legs, artist_name, black_thighhighs, clitoris, closed_mouth, simple_background, spread_pussy, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | eyewear_on_headwear | pleated_skirt | red_skirt | sleeveless_shirt | solo | bracelet | black_thighhighs | pink_bag | pink_headwear | collared_shirt | looking_at_viewer | white-framed_eyewear | black_shirt | grey_eyes | high-waist_skirt | handbag | open_mouth | :d | blush | red_headwear | shoes | parted_lips | floating_hair | sitting | white_background | zettai_ryouiki | eyewear_on_head | sleeveless | smile | day | outdoors | sky | tree | cloud | no_panties | tongue | anus | bow | bush | flower | from_behind | looking_back | pussy_juice | bare_shoulders | grass | shiny | sweat | uncensored | nipples | navel | 1boy | hetero | pussy | penis | sex | vaginal | mosaic_censoring | spread_legs | collarbone | light_brown_hair | shiny_skin | completely_nude | solo_focus | shiny_hair | cum | medium_breasts | ocean | water | wet | beach | blue_sky | closed_mouth | bangs | cleavage | front-tie_top | pink_bikini | rock | side-tie_bikini_bottom | standing | wading | heart | female_pubic_hair | ass | choker | grin | nude | on_back | presenting | artist_name | clitoris | simple_background | spread_pussy |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:----------------|:------------|:-------------------|:-------|:-----------|:-------------------|:-----------|:----------------|:-----------------|:--------------------|:-----------------------|:--------------|:------------|:-------------------|:----------|:-------------|:-----|:--------|:---------------|:--------|:--------------|:----------------|:----------|:-------------------|:-----------------|:------------------|:-------------|:--------|:------|:-----------|:------|:-------|:--------|:-------------|:---------|:-------|:------|:-------|:---------|:--------------|:---------------|:--------------|:-----------------|:--------|:--------|:--------|:-------------|:----------|:--------|:-------|:---------|:--------|:--------|:------|:----------|:-------------------|:--------------|:-------------|:-------------------|:-------------|:------------------|:-------------|:-------------|:------|:-----------------|:--------|:--------|:------|:--------|:-----------|:---------------|:--------|:-----------|:----------------|:--------------|:-------|:-------------------------|:-----------|:---------|:--------|:--------------------|:------|:---------|:-------|:-------|:----------|:-------------|:--------------|:-----------|:--------------------|:---------------|
| 0 | 33 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | | X | | X | X | X | X | X | | X | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | X | X | X | X | | X | | X | | X | | X | | | | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 34 |  |  |  |  |  | X | | | | | | | | | | | X | | | | | | X | | X | | | | | | | | | | X | X | X | | X | | | X | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | X | | | | | | X | | | | | | | | X | | | | | | | | | | X | X | X | | | X | | | | | | | | | | | | X | | | X | X | | | X | | | | | | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | X | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/serena_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T14:00:25+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:18:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of serena/セレナ (Pokémon)
===============================
This is the dataset of serena/セレナ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'long\_hair, blue\_eyes, hat, blonde\_hair, breasts, sunglasses, eyelashes, brown\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
569498cab1f92ce58f97571f62ea97630ebc44f8
|
## Delhi Pollution Dataset
Duration: Nov 2020 - Jan 2021
Mean PM2.5: 212.67
### Dataset Contents:
deviceId, dateTime, lat, long, pressure, temperature, humidity, pm1_0, pm2_5, pm10
### Dataset variants
**1.Raw(PM)** : Contains spatio-temporal raw PM data
**2.Raw(PM+Met)** : Contains spatio-temporal raw PM and meteorological data
**3.Clean(PM+Met)** : Contains spatio-temporal cleaned PM and meteorological data (with wrong lat-long samples removed)
**4.Grid(PM+Met)** : Averages the cleaned dataset over spatio-temporal grids of 1km x 1km x 1hr
Website: https://www.cse.iitd.ac.in/pollutiondata
This dataset is collected by IIT-Delhi. For any queries, kindly contact: [email protected]
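A minimal loading sketch, assuming the repository's declared split configuration (train.csv / test.csv) resolves on the Hub as expected:

```python
from datasets import load_dataset

# Minimal sketch: the repo config maps train.csv / test.csv to splits.
ds = load_dataset("sachin-iitd/DelhiPollDataset")
print(ds)              # expected: "train" and "test" splits
print(ds["train"][0])  # one record with the columns listed above
```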
|
sachin-iitd/DelhiPollDataset
|
[
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"pm2.5",
"pollution",
"meteorological",
"region:us"
] |
2023-08-16T14:06:19+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.csv"}, {"split": "test", "path": "test.csv"}]}], "tags": ["pm2.5", "pollution", "meteorological"]}
|
2023-08-17T08:35:46+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #pm2.5 #pollution #meteorological #region-us
|
## Delhi Pollution Dataset
Duration: Nov 2020 - Jan 2021
Mean PM2.5: 212.67
### Dataset Contents:
deviceId, dateTime, lat, long, pressure, temperature, humidity, pm1_0, pm2_5, pm10
### Dataset variants
1.Raw(PM) : Contains spatio-temporal raw PM data
2.Raw(PM+Met) : Contains spatio-temporal raw PM and meteorological data
3.Clean(PM+Met) : Contains spatio-temporal cleaned PM and meteorological data (with wrong lat-long samples removed)
4.Grid(PM+Met) : Averages the cleaned dataset over spatio-temporal grids of 1km x 1km x 1hr
Website: URL
This dataset is collected by IIT-Delhi. For any queries, kindly contact: [email protected]
|
[
"## Delhi Pollution Dataset\n\nDuration: Nov 2020 - Jan 2021\n\nMean PM2.5: 212.67",
"### Dataset Contents:\ndeviceId, dateTime, lat, long, pressure, temperature, humidity, pm1_0, pm2_5, pm10",
"### Dataset variants\n\n1.Raw(PM) : Contains spatio-temporal raw PM data\n\n2.Raw(PM+Met) : Contains spatio-temporal raw PM and meteorological data\n\n3.Clean(PM+Met) : Contains spatio-temporal cleaned PM and meteorological data (with wrong lat-long samples removed)\n\n4.Grid(PM+Met) : Average the cleaned dataset over spatio-temporal grids of 1km x 1km x 1hr\n\nWebsite: URL\n\nThis dataset is collected by IIT-Delhi. For any queries, kindly contact: [email protected]_"
] |
[
"TAGS\n#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #pm2.5 #pollution #meteorological #region-us \n",
"## Delhi Pollution Dataset\n\nDuration: Nov 2020 - Jan 2021\n\nMean PM2.5: 212.67",
"### Dataset Contents:\ndeviceId, dateTime, lat, long, pressure, temperature, humidity, pm1_0, pm2_5, pm10",
"### Dataset variants\n\n1.Raw(PM) : Contains spatio-temporal raw PM data\n\n2.Raw(PM+Met) : Contains spatio-temporal raw PM and meteorological data\n\n3.Clean(PM+Met) : Contains spatio-temporal cleaned PM and meteorological data (with wrong lat-long samples removed)\n\n4.Grid(PM+Met) : Average the cleaned dataset over spatio-temporal grids of 1km x 1km x 1hr\n\nWebsite: URL\n\nThis dataset is collected by IIT-Delhi. For any queries, kindly contact: [email protected]_"
] |
[
42,
22,
37,
143
] |
[
"passage: TAGS\n#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #pm2.5 #pollution #meteorological #region-us \n## Delhi Pollution Dataset\n\nDuration: Nov 2020 - Jan 2021\n\nMean PM2.5: 212.67### Dataset Contents:\ndeviceId, dateTime, lat, long, pressure, temperature, humidity, pm1_0, pm2_5, pm10### Dataset variants\n\n1.Raw(PM) : Contains spatio-temporal raw PM data\n\n2.Raw(PM+Met) : Contains spatio-temporal raw PM and meteorological data\n\n3.Clean(PM+Met) : Contains spatio-temporal cleaned PM and meteorological data (with wrong lat-long samples removed)\n\n4.Grid(PM+Met) : Average the cleaned dataset over spatio-temporal grids of 1km x 1km x 1hr\n\nWebsite: URL\n\nThis dataset is collected by IIT-Delhi. For any queries, kindly contact: [email protected]_"
] |
1ad67b9e603ee7024793ff329dac77ae3f7274ce
|
# Dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/anastasia_hoshin_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T14:16:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
107
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of anastasia_hoshin (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
f263739deb1b9f199afb8eed724de497a03dd093
|
Ricardo Flores
|
RicardoFlores/mini-croupier
|
[
"license:apache-2.0",
"region:us"
] |
2023-08-16T14:36:05+00:00
|
{"license": "apache-2.0"}
|
2023-08-16T14:39:17+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Ricardo Flores
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
a99146b9c93a4544cc6fc7e3d17958d416af2e6d
|
# Dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 92 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/frederica_baumann_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T14:42:19+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 92 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 92 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 92 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
107
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of frederica_baumann (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 92 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
ce638beb17ee2030e4bf00106faaf4357ffb8b0b
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
sr1sht1/starcoderdataset
|
[
"region:us"
] |
2023-08-16T14:44:09+00:00
|
{}
|
2023-08-16T14:48:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a497a8c7e0493bd168c128ac1082299d1a1549d7
|
# Fruits30 Dataset
## Description:
The Fruits30 dataset is a collection of images featuring 30 different types of fruits. Each image has been preprocessed and standardized to a size of 224x224 pixels, ensuring uniformity in the dataset.
## Dataset Composition:
- **Number of Classes:** 30
- **Image Resolution:** 224x224 pixels
- **Total Images:** 826
## Classes:
0 : acerolas
1 : apples
2 : apricots
3 : avocados
4 : bananas
5 : blackberries
6 : blueberries
7 : cantaloupes
8 : cherries
9 : coconuts
10 : figs
11 : grapefruits
12 : grapes
13 : guava
14 : kiwifruit
15 : lemons
16 : limes
17 : mangos
18 : olives
19 : oranges
20 : passionfruit
21 : peaches
22 : pears
23 : pineapples
24 : plums
25 : pomegranates
26 : raspberries
27 : strawberries
28 : tomatoes
29 : watermelons
## Preprocessing:
Images have undergone preprocessing to maintain consistency and facilitate model training. Preprocessing steps may include resizing, normalization, and other enhancements.
## Intended Use:
The Fruits30 dataset is suitable for tasks such as image classification, object recognition, and machine learning model training within the domain of fruit identification.
## Sources:
Crowdsourced.
## Note:
Ensure proper attribution and compliance with the dataset's licensing terms when using it for research or development purposes.
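A minimal loading sketch for this repository; the split name and the `image`/`label` column names are assumptions based on the usual image-classification layout.
```python
from datasets import load_dataset

# repo id taken from this card; split and column names are assumed
ds = load_dataset("VinayHajare/Fruits-30", split="train")

example = ds[0]
print(example["label"])       # class index, e.g. 1 for apples
print(example["image"].size)  # expected (224, 224) per the card
```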
|
VinayHajare/Fruits-30
|
[
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"multiclass-image-classification",
"vision",
"region:us"
] |
2023-08-16T14:54:47+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["image-classification"], "tags": ["multiclass-image-classification", "vision"]}
|
2023-11-11T05:00:28+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-classification #size_categories-n<1K #language-English #license-apache-2.0 #multiclass-image-classification #vision #region-us
|
# Fruits30 Dataset
## Description:
The Fruits30 dataset is a collection of images featuring 30 different types of fruits. Each image has been preprocessed and standardized to a size of 224x224 pixels, ensuring uniformity in the dataset.
## Dataset Composition:
- Number of Classes: 30
- Image Resolution: 224x224 pixels
- Total Images: 826
## Classes:
0 : acerolas
1 : apples
2 : apricots
3 : avocados
4 : bananas
5 : blackberries
6 : blueberries
7 : cantaloupes
8 : cherries
9 : coconuts
10 : figs
11 : grapefruits
12 : grapes
13 : guava
14 : kiwifruit
15 : lemons
16 : limes
17 : mangos
18 : olives
19 : oranges
20 : passionfruit
21 : peaches
22 : pears
23 : pineapples
24 : plums
25 : pomegranates
26 : raspberries
27 : strawberries
28 : tomatoes
29 : watermelons
## Preprocessing:
Images have undergone preprocessing to maintain consistency and facilitate model training. Preprocessing steps may include resizing, normalization, and other enhancements.
## Intended Use:
The Fruits30 dataset is suitable for tasks such as image classification, object recognition, and machine learning model training within the domain of fruit identification.
## Sources:
Crowdsourced.
## Note:
Ensure proper attribution and compliance with the dataset's licensing terms when using it for research or development purposes.
|
[
"# Fruits30 Dataset",
"## Description:\nThe Fruits30 dataset is a collection of images featuring 30 different types of fruits. Each image has been preprocessed and standardized to a size of 224x224 pixels, ensuring uniformity in the dataset.",
"## Dataset Composition:\n- Number of Classes: 30\n- Image Resolution: 224x224 pixels\n- Total Images: 826",
"## Classes:\n0 : acerolas \n1 : apples \n2 : apricots \n3 : avocados \n4 : bananas \n5 : blackberries \n6 : blueberries \n7 : cantaloupes \n8 : cherries \n9 : coconuts \n10 : figs \n11 : grapefruits \n12 : grapes \n13 : guava \n14 : kiwifruit \n15 : lemons \n16 : limes \n17 : mangos \n18 : olives \n19 : oranges \n20 : passionfruit \n21 : peaches \n22 : pears \n23 : pineapples \n24 : plums \n25 : pomegranates \n26 : raspberries \n27 : strawberries \n28 : tomatoes \n29 : watermelons",
"## Preprocessing:\nImages have undergone preprocessing to maintain consistency and facilitate model training. Preprocessing steps may include resizing, normalization, and other enhancements.",
"## Intended Use:\nThe Fruits30 dataset is suitable for tasks such as image classification, object recognition, and machine learning model training within the domain of fruit identification.",
"## Sources:\nCroudsource.",
"## Note:\nEnsure proper attribution and compliance with the dataset's licensing terms when using it for research or development purposes."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-n<1K #language-English #license-apache-2.0 #multiclass-image-classification #vision #region-us \n",
"# Fruits30 Dataset",
"## Description:\nThe Fruits30 dataset is a collection of images featuring 30 different types of fruits. Each image has been preprocessed and standardized to a size of 224x224 pixels, ensuring uniformity in the dataset.",
"## Dataset Composition:\n- Number of Classes: 30\n- Image Resolution: 224x224 pixels\n- Total Images: 826",
"## Classes:\n0 : acerolas \n1 : apples \n2 : apricots \n3 : avocados \n4 : bananas \n5 : blackberries \n6 : blueberries \n7 : cantaloupes \n8 : cherries \n9 : coconuts \n10 : figs \n11 : grapefruits \n12 : grapes \n13 : guava \n14 : kiwifruit \n15 : lemons \n16 : limes \n17 : mangos \n18 : olives \n19 : oranges \n20 : passionfruit \n21 : peaches \n22 : pears \n23 : pineapples \n24 : plums \n25 : pomegranates \n26 : raspberries \n27 : strawberries \n28 : tomatoes \n29 : watermelons",
"## Preprocessing:\nImages have undergone preprocessing to maintain consistency and facilitate model training. Preprocessing steps may include resizing, normalization, and other enhancements.",
"## Intended Use:\nThe Fruits30 dataset is suitable for tasks such as image classification, object recognition, and machine learning model training within the domain of fruit identification.",
"## Sources:\nCroudsource.",
"## Note:\nEnsure proper attribution and compliance with the dataset's licensing terms when using it for research or development purposes."
] |
[
49,
6,
52,
30,
142,
40,
39,
8,
30
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-n<1K #language-English #license-apache-2.0 #multiclass-image-classification #vision #region-us \n# Fruits30 Dataset## Description:\nThe Fruits30 dataset is a collection of images featuring 30 different types of fruits. Each image has been preprocessed and standardized to a size of 224x224 pixels, ensuring uniformity in the dataset.## Dataset Composition:\n- Number of Classes: 30\n- Image Resolution: 224x224 pixels\n- Total Images: 826## Classes:\n0 : acerolas \n1 : apples \n2 : apricots \n3 : avocados \n4 : bananas \n5 : blackberries \n6 : blueberries \n7 : cantaloupes \n8 : cherries \n9 : coconuts \n10 : figs \n11 : grapefruits \n12 : grapes \n13 : guava \n14 : kiwifruit \n15 : lemons \n16 : limes \n17 : mangos \n18 : olives \n19 : oranges \n20 : passionfruit \n21 : peaches \n22 : pears \n23 : pineapples \n24 : plums \n25 : pomegranates \n26 : raspberries \n27 : strawberries \n28 : tomatoes \n29 : watermelons## Preprocessing:\nImages have undergone preprocessing to maintain consistency and facilitate model training. Preprocessing steps may include resizing, normalization, and other enhancements.## Intended Use:\nThe Fruits30 dataset is suitable for tasks such as image classification, object recognition, and machine learning model training within the domain of fruit identification.## Sources:\nCroudsource.## Note:\nEnsure proper attribution and compliance with the dataset's licensing terms when using it for research or development purposes."
] |
bb5b475dd5587be3efcd9fa9196ab694dd16f92b
|
# Dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/minerva_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T14:57:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:20+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of minerva (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 56 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
479ab041ae881aa1fb08b0ea85c265a62f1f0e75
|
# Dataset Card for "rmit_intro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
haxuanson-rmit/rmit_intro
|
[
"region:us"
] |
2023-08-16T15:03:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7255, "num_examples": 10}], "download_size": 7898, "dataset_size": 7255}}
|
2023-08-16T15:03:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rmit_intro"
More Information needed
|
[
"# Dataset Card for \"rmit_intro\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rmit_intro\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rmit_intro\"\n\nMore Information needed"
] |
49c3228581d5662c9010c5b495c8de37abf62670
|
# Dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 43 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/carmilla_rezero
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T15:06:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu)
This is the dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 43 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 43 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 43 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
99
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu)\n\nThis is the dataset of carmilla (Re:Zero Kara Hajimeru Isekai Seikatsu), containing 43 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
90a0aaf3f044b2cc10eaab1393afa2c2c057040c
|
# Dataset Card for "toxicContenData-3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Back-up/toxicContenData-3k
|
[
"region:us"
] |
2023-08-16T15:14:09+00:00
|
{"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "update", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7395775, "num_examples": 24009}], "download_size": 4018765, "dataset_size": 7395775}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T02:48:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "toxicContenData-3k"
More Information needed
|
[
"# Dataset Card for \"toxicContenData-3k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"toxicContenData-3k\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"toxicContenData-3k\"\n\nMore Information needed"
] |
5eea807cb22971cf095f0b260b8d43d0fc27e2df
|
# Dataset of kamitsure/カミツレ (Pokémon)
This is the dataset of kamitsure/カミツレ (Pokémon), containing 500 images and their tags.
The core tags of this character are `headphones, blue_eyes, breasts, blonde_hair, short_hair, bangs, blunt_bangs, black_hair, long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 449.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kamitsure_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 298.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kamitsure_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1027 | 553.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kamitsure_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 413.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kamitsure_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1027 | 721.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kamitsure_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kamitsure_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, sidelocks, yellow_jacket, blush, short_hair_with_long_locks, hand_up, solo, cleavage, closed_mouth, smile, collarbone, sitting, sleeveless, watermark |
| 1 | 5 |  |  |  |  |  | 1girl, fur_coat, smile, solo, looking_at_viewer, large_breasts, cleavage |
| 2 | 7 |  |  |  |  |  | 1girl, solo, midriff, fur_coat, navel, smile, nail_polish, very_long_hair |
| 3 | 15 |  |  |  |  |  | 1girl, solo, holding_poke_ball, poke_ball_(basic), fur_coat, nail_polish, midriff, looking_at_viewer, shorts, smile |
| 4 | 21 |  |  |  |  |  | 1girl, navel, solo, holding_poke_ball, poke_ball_(basic), bare_shoulders, choker, black_pantyhose, cleavage, high_heels |
| 5 | 11 |  |  |  |  |  | 1girl, solo, bare_shoulders, choker, smile, black_pantyhose, open_mouth, sitting |
| 6 | 21 |  |  |  |  |  | 1girl, bare_arms, black_choker, yellow_dress, black_pantyhose, short_dress, bare_shoulders, collarbone, looking_at_viewer, solo, sleeveless_dress, yellow_skirt, closed_mouth, black_headwear, cable |
| 7 | 9 |  |  |  |  |  | 1girl, blush, large_breasts, nipples, solo, choker, huge_breasts |
| 8 | 16 |  |  |  |  |  | 1girl, hetero, nipples, penis, 1boy, blush, solo_focus, vaginal, censored, cum_in_pussy, spread_legs, large_breasts, pantyhose, sex_from_behind, sweat, medium_breasts, navel, pubic_hair, straddling, torn_clothes |
| 9 | 8 |  |  |  |  |  | nipples, 1boy, 1girl, hetero, navel, penis, pussy, sex, vaginal, blush, looking_at_viewer, open_mouth, spread_legs, completely_nude, mosaic_censoring, arms_up, sweat, armpits, collarbone, on_back, pov |
| 10 | 5 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, official_alternate_costume, black_shorts, blush, earmuffs, eyelashes, red_scarf, solo, hand_up, nail_polish, open_coat, :d, black_footwear, black_nails, boots, closed_mouth, holding, open_mouth, simple_background, sitting, twintails, white_background, white_coat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | sidelocks | yellow_jacket | blush | short_hair_with_long_locks | hand_up | solo | cleavage | closed_mouth | smile | collarbone | sitting | sleeveless | watermark | fur_coat | large_breasts | midriff | navel | nail_polish | very_long_hair | holding_poke_ball | poke_ball_(basic) | shorts | bare_shoulders | choker | black_pantyhose | high_heels | open_mouth | bare_arms | black_choker | yellow_dress | short_dress | sleeveless_dress | yellow_skirt | black_headwear | cable | nipples | huge_breasts | hetero | penis | 1boy | solo_focus | vaginal | censored | cum_in_pussy | spread_legs | pantyhose | sex_from_behind | sweat | medium_breasts | pubic_hair | straddling | torn_clothes | pussy | sex | completely_nude | mosaic_censoring | arms_up | armpits | on_back | pov | long_sleeves | official_alternate_costume | black_shorts | earmuffs | eyelashes | red_scarf | open_coat | :d | black_footwear | black_nails | boots | holding | simple_background | twintails | white_background | white_coat |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------------|:------------|:----------------|:--------|:-----------------------------|:----------|:-------|:-----------|:---------------|:--------|:-------------|:----------|:-------------|:------------|:-----------|:----------------|:----------|:--------|:--------------|:-----------------|:--------------------|:--------------------|:---------|:-----------------|:---------|:------------------|:-------------|:-------------|:------------|:---------------|:---------------|:--------------|:-------------------|:---------------|:-----------------|:--------|:----------|:---------------|:---------|:--------|:-------|:-------------|:----------|:-----------|:---------------|:--------------|:------------|:------------------|:--------|:-----------------|:-------------|:-------------|:---------------|:--------|:------|:------------------|:-------------------|:----------|:----------|:----------|:------|:---------------|:-----------------------------|:---------------|:-----------|:------------|:------------|:------------|:-----|:-----------------|:--------------|:--------|:----------|:--------------------|:------------|:-------------------|:-------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | | | | | X | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | | | | | X | | | X | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | X | | | | | | X | | | X | | | | | X | | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 21 |  |  |  |  |  | X | | | | | | | X | X | | | | | | | | | | X | | | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | | | | | | X | | | X | | X | | | | | | | | | | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 21 |  |  |  |  |  | X | X | | | | | | X | | X | | X | | | | | | | | | | | | | X | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | | X | | | X | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 16 |  |  |  |  |  | X | | | | X | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | X | | | X | | | | | | | X | | | | | | | X | | | | | | | | | | X | | | | | | | | | X | | X | X | X | | X | | | X | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | | | X | | X | X | | X | | | X | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kamitsure_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T15:16:00+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:34:11+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kamitsure/カミツレ (Pokémon)
===================================
This is the dataset of kamitsure/カミツレ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'headphones, blue\_eyes, breasts, blonde\_hair, short\_hair, bangs, blunt\_bangs, black\_hair, long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6509d1fa27eb271824cfbcc77e6ba734da828252
|
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tosh97/test
|
[
"region:us"
] |
2023-08-16T15:18:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "basketball", "1": "football"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68454263.0, "num_examples": 141}], "download_size": 68400421, "dataset_size": 68454263.0}}
|
2023-08-16T15:18:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test"
More Information needed
|
[
"# Dataset Card for \"test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test\"\n\nMore Information needed"
] |
58cde6b01c6c894f1bd6dfa1404feb77c5d577d7
|
# Dataset Card for "alpagasus_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
arbml/alpagasus_cleaned
|
[
"region:us"
] |
2023-08-16T15:19:03+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3916935, "num_examples": 9229}], "download_size": 2486390, "dataset_size": 3916935}}
|
2023-08-17T07:05:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "alpagasus_cleaned"
More Information needed
|
[
"# Dataset Card for \"alpagasus_cleaned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"alpagasus_cleaned\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"alpagasus_cleaned\"\n\nMore Information needed"
] |
50135b5f906aef99c4130b4e2567626595b4a415
|
# Dataset Card for "68759f6d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/68759f6d
|
[
"region:us"
] |
2023-08-16T15:29:20+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 176, "num_examples": 10}], "download_size": 1331, "dataset_size": 176}}
|
2023-08-16T15:29:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "68759f6d"
More Information needed
|
[
"# Dataset Card for \"68759f6d\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"68759f6d\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"68759f6d\"\n\nMore Information needed"
] |
9d18fca3fa459c844e160cb4c76bd8df3e97d0ee
|
# Dataset Card for "perigon-150k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
judy93536/perigon-150k
|
[
"region:us"
] |
2023-08-16T15:30:06+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 146640869.6, "num_examples": 120000}, {"name": "test", "num_bytes": 36660217.4, "num_examples": 30000}], "download_size": 92971443, "dataset_size": 183301087.0}}
|
2023-08-16T15:32:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "perigon-150k"
More Information needed
|
[
"# Dataset Card for \"perigon-150k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"perigon-150k\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"perigon-150k\"\n\nMore Information needed"
] |
cf75c5708298637f3d63ae70a07941f365214136
|
## Dataset Description
Simple Kazakh Question Answering Dataset (sKQuAD)
## Dataset Authors
This dataset was created by Aliya Nugumanova, Kamila Rakhymbek, Adai Shomanov, Mereke Kydyrali, Sultaniyar Quandyq, Aldiyar Saken, Nurasyl Zhomartkan, Almasbek Maulit, Aigerim Mansurova, Saule Belginova, Kurmash Apayev and Madina Mansurova
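A minimal loading sketch; the split name is an assumption, and the schema is printed rather than assumed, since this card does not document the field names.
```python
from datasets import load_dataset

ds = load_dataset("Kyrmasch/sKQuAD", split="train")  # split name assumed

# inspect the schema before relying on any field names
print(ds.column_names)
print(ds[0])
```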
|
Kyrmasch/sKQuAD
|
[
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:kk",
"art",
"biology",
"geo",
"hsitory",
"math",
"IT",
"social",
"materials",
"region:us"
] |
2023-08-16T15:42:38+00:00
|
{"language": ["kk"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "tags": ["art", "biology", "geo", "hsitory", "math", "IT", "social", "materials"]}
|
2023-11-14T12:20:53+00:00
|
[] |
[
"kk"
] |
TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Kazakh #art #biology #geo #hsitory #math #IT #social #materials #region-us
|
## Dataset Description
Simple Kazakh Question Answering Dataset (sKQuAD)
## Dataset Authors
This dataset was created by Aliya Nugumanova, Kamila Rakhymbek, Adai Shomanov, Mereke Kydyrali, Sultaniyar Quandyq, Aldiyar Saken, Nurasyl Zhomartkan, Almasbek Maulit, Aigerim Mansurova, Saule Belginova, Kurmash Apayev and Madina Mansurova
|
[
"## Model Description\n\nSimple Kazakh Question Answering Dataset (sKQuAD)",
"## Model Authors\n\nThis dataset created by Aliya Nugumanova, Kamila Rakhymbek, Adai Shomanov, Mereke Kydyrali, Sultaniyar Quandyq, Aldiyar Saken, Nurasyl Zhomartkan, Almasbek Maulit, Aigerim Mansurova, Saule Belginova, Kurmash Apayev and Madina Mansurova"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Kazakh #art #biology #geo #hsitory #math #IT #social #materials #region-us \n",
"## Model Description\n\nSimple Kazakh Question Answering Dataset (sKQuAD)",
"## Model Authors\n\nThis dataset created by Aliya Nugumanova, Kamila Rakhymbek, Adai Shomanov, Mereke Kydyrali, Sultaniyar Quandyq, Aldiyar Saken, Nurasyl Zhomartkan, Almasbek Maulit, Aigerim Mansurova, Saule Belginova, Kurmash Apayev and Madina Mansurova"
] |
[
57,
17,
81
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Kazakh #art #biology #geo #hsitory #math #IT #social #materials #region-us \n## Model Description\n\nSimple Kazakh Question Answering Dataset (sKQuAD)## Model Authors\n\nThis dataset created by Aliya Nugumanova, Kamila Rakhymbek, Adai Shomanov, Mereke Kydyrali, Sultaniyar Quandyq, Aldiyar Saken, Nurasyl Zhomartkan, Almasbek Maulit, Aigerim Mansurova, Saule Belginova, Kurmash Apayev and Madina Mansurova"
] |
19235bbd83e890e0023ab3e184c6fc82f96c23d9
|
# AutoTrain Dataset for project: en-hu
## Dataset Description
This dataset has been automatically processed by AutoTrain for project en-hu.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "cruiser",
"target": "teesaw"
},
{
"source": "don't move",
"target": "hagwa doopee"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 320 |
| valid | 81 |
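A minimal sketch of iterating over the source/target pairs shown above; the internal file layout of the AutoTrain repo is an assumption, so only the generic `load_dataset` entry point is used.
```python
from datasets import load_dataset

# repo id taken from this card; split name matches the table above
ds = load_dataset("bleedchocolate/autotrain-data-en-hu", split="train")

for row in ds.select(range(2)):
    print(row["source"], "->", row["target"])
```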
|
bleedchocolate/autotrain-data-en-hu
|
[
"task_categories:translation",
"region:us"
] |
2023-08-16T15:51:27+00:00
|
{"task_categories": ["translation"]}
|
2023-08-16T15:51:57+00:00
|
[] |
[] |
TAGS
#task_categories-translation #region-us
|
AutoTrain Dataset for project: en-hu
====================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project en-hu.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
|
[
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
"TAGS\n#task_categories-translation #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
15,
27,
17,
23,
27
] |
[
"passage: TAGS\n#task_categories-translation #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
32bbc5e2ac4b707c5c9052e8a2139f708fb6d5a9
|
# Dataset Card for "toy_dataset_correcting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
rookshanks/toy_dataset_correcting
|
[
"region:us"
] |
2023-08-16T16:03:43+00:00
|
{"dataset_info": {"features": [{"name": "question", "sequence": "int64"}, {"name": "answer_prefix", "sequence": "int64"}, {"name": "answer_continuation", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 225760, "num_examples": 1000}, {"name": "validation", "num_bytes": 220720, "num_examples": 1000}, {"name": "test", "num_bytes": 226048, "num_examples": 1000}], "download_size": 42823, "dataset_size": 672528}}
|
2023-08-16T16:16:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "toy_dataset_correcting"
More Information needed
|
[
"# Dataset Card for \"toy_dataset_correcting\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"toy_dataset_correcting\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"toy_dataset_correcting\"\n\nMore Information needed"
] |
c5d25517d565f808df2acea10be7da2127264dd7
|
# Dataset of Anya Forger
This is the dataset of Anya Forger, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 425 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 425 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 425 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 425 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
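A minimal sketch for fetching one of the packages in the table above programmatically; the repo id comes from this card, the rest is standard `huggingface_hub` usage.
```python
from huggingface_hub import hf_hub_download

# download the raw package listed in the table above
zip_path = hf_hub_download(
    repo_id='CyberHarem/anya_forger_spyxfamily',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
print(zip_path)
```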
|
CyberHarem/anya_forger_spyxfamily
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T16:08:25+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:26+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Anya Forger
======================
This is the dataset of Anya Forger, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
6530713aadc9bedfb0eb52dbe36ae7ed8104fb00
|
# Dataset of Sylvia Sherwood
This is the dataset of Sylvia Sherwood, containing 67 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 67 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 146 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 67 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 67 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 67 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 67 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 67 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 146 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 146 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 146 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/sylvia_sherwood_spyxfamily
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T16:16:05+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:28+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Sylvia Sherwood
==========================
This is the dataset of Sylvia Sherwood, containing 67 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
ef98fb63d37cec540c52b0b66806b5900e7accb2
|
# Dataset Card for "valid-split-bengaliAI-whisper-medium"
valid data of bengaliAI (which are marked as valid in train.csv)
part - train -> the data which are already in common-voice (so whisper is already trained on that data)
part - valid -> the data not available in common-voice
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
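A minimal inspection sketch; the feature names follow this card's schema, while the expected Whisper log-mel shape (80 mel bins x 3000 frames) is an assumption.
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("Rounak28/valid-split-bengaliAI-whisper-medium", split="valid")

ex = ds[0]
print(ex["sentence"])                        # transcript
print(np.array(ex["input_features"]).shape)  # expected (80, 3000) for Whisper
print(len(ex["labels"]))                     # tokenized transcript length
```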
|
Rounak28/valid-split-bengaliAI-whisper-medium
|
[
"region:us"
] |
2023-08-16T16:18:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 27082860963, "num_examples": 28166}, {"name": "valid", "num_bytes": 1366910844, "num_examples": 1422}], "download_size": 5423155032, "dataset_size": 28449771807}}
|
2023-08-17T00:38:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "valid-split-bengaliAI-whisper-medium"
valid data of bengaliAI (which are marked as valid in URL)
part - train -> the data that is already in common-voice (so whisper is already trained on that data)
part - valid -> the data not available in common-voice
More Information needed
|
[
"# Dataset Card for \"valid-split-bengaliAI-whisper-medium\"\n\nvalid datas of bengaliAI ( which are marked as valid in URL )\n\npart - train -> the datas which are already in commmon-voice ( so whisper is already trained on that data )\n\npart - valid -> the datas not available in common-voice\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"valid-split-bengaliAI-whisper-medium\"\n\nvalid datas of bengaliAI ( which are marked as valid in URL )\n\npart - train -> the datas which are already in commmon-voice ( so whisper is already trained on that data )\n\npart - valid -> the datas not available in common-voice\n\n\nMore Information needed"
] |
[
6,
86
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"valid-split-bengaliAI-whisper-medium\"\n\nvalid datas of bengaliAI ( which are marked as valid in URL )\n\npart - train -> the datas which are already in commmon-voice ( so whisper is already trained on that data )\n\npart - valid -> the datas not available in common-voice\n\n\nMore Information needed"
] |
d1c18406ef17dc73635ab09f5c992f88b24abd7a
|
# Dataset of fuuro/フウロ (Pokémon)
This is the dataset of fuuro/フウロ (Pokémon), containing 500 images and their tags.
The core tags of this character are `red_hair, hair_ornament, breasts, blue_eyes, sidelocks, large_breasts, bangs, long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 556.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuuro_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 324.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuuro_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1248 | 700.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuuro_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 495.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuuro_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1248 | 972.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fuuro_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/fuuro_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, gloves, midriff, navel, blue_footwear, boots, crop_top, smile, solo, open_mouth, blue_shorts, short_shorts, simple_background, white_background |
| 1 | 15 |  |  |  |  |  | 1girl, :d, blue_gloves, blue_jacket, blue_shorts, cropped_jacket, looking_at_viewer, midriff, navel, one_side_up, open_mouth, short_hair_with_long_locks, upper_teeth_only, crop_top, short_shorts, solo, turtleneck, tongue, blush, eyelashes, hair_between_eyes, sky, boots, thigh_pouch, arm_up, cloud |
| 2 | 5 |  |  |  |  |  | 1girl, blush, smile, solo, gloves, open_mouth, looking_at_viewer |
| 3 | 5 |  |  |  |  |  | 1girl, navel, nipples, nude, one_side_up, collarbone, eyelashes, open_mouth, pussy, shiny_skin, short_hair_with_long_locks, solo, tongue, :d, blush, looking_at_viewer, mosaic_censoring, hand_up, hetero, upper_teeth_only |
| 4 | 24 |  |  |  |  |  | 1boy, 1girl, hetero, blush, solo_focus, nipples, penis, open_mouth, nude, sex, smile, navel, pussy, vaginal, looking_at_viewer, sweat, cum, mosaic_censoring, pov, spread_legs |
| 5 | 5 |  |  |  |  |  | 1girl, christmas, gloves, looking_at_viewer, official_alternate_costume, open_mouth, red_dress, santa_costume, smile, red_footwear, blush, long_sleeves, one_eye_closed, pokemon_(creature), short_hair_with_long_locks, solo, ;d, black_belt, feathers, gift_bag, holding, sack, santa_boots, strap_between_breasts, white_background |
| 6 | 5 |  |  |  |  |  | fake_animal_ears, playboy_bunny, rabbit_ears, 1girl, bowtie, detached_collar, wrist_cuffs, cleavage, open_mouth, pantyhose, smile, white_background, 2girls, alternate_costume, blue_leotard, dark_skin, rabbit_tail, salute, short_hair_with_long_locks, simple_background, solo_focus, strapless_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | gloves | midriff | navel | blue_footwear | boots | crop_top | smile | solo | open_mouth | blue_shorts | short_shorts | simple_background | white_background | :d | blue_gloves | blue_jacket | cropped_jacket | looking_at_viewer | one_side_up | short_hair_with_long_locks | upper_teeth_only | turtleneck | tongue | blush | eyelashes | hair_between_eyes | sky | thigh_pouch | arm_up | cloud | nipples | nude | collarbone | pussy | shiny_skin | mosaic_censoring | hand_up | hetero | 1boy | solo_focus | penis | sex | vaginal | sweat | cum | pov | spread_legs | christmas | official_alternate_costume | red_dress | santa_costume | red_footwear | long_sleeves | one_eye_closed | pokemon_(creature) | ;d | black_belt | feathers | gift_bag | holding | sack | santa_boots | strap_between_breasts | fake_animal_ears | playboy_bunny | rabbit_ears | bowtie | detached_collar | wrist_cuffs | cleavage | pantyhose | 2girls | alternate_costume | blue_leotard | dark_skin | rabbit_tail | salute | strapless_leotard |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:----------|:--------|:----------------|:--------|:-----------|:--------|:-------|:-------------|:--------------|:---------------|:--------------------|:-------------------|:-----|:--------------|:--------------|:-----------------|:--------------------|:--------------|:-----------------------------|:-------------------|:-------------|:---------|:--------|:------------|:--------------------|:------|:--------------|:---------|:--------|:----------|:-------|:-------------|:--------|:-------------|:-------------------|:----------|:---------|:-------|:-------------|:--------|:------|:----------|:--------|:------|:------|:--------------|:------------|:-----------------------------|:------------|:----------------|:---------------|:---------------|:-----------------|:---------------------|:-----|:-------------|:-----------|:-----------|:----------|:-------|:--------------|:------------------------|:-------------------|:----------------|:--------------|:---------|:------------------|:--------------|:-----------|:------------|:---------|:--------------------|:---------------|:------------|:--------------|:---------|:--------------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | | X | X | | X | X | | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | | | X | X | X | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | X | | | | | X | X | | | | | X | | | | X | X | X | X | | X | X | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 24 |  |  |  |  |  | X | | | X | | | | X | | X | | | | | | | | | X | | | | | | X | | | | | | | X | X | | X | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | | | | | X | X | X | | | | X | | | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | | | X | | X | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/fuuro_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T16:27:12+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:39:19+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of fuuro/フウロ (Pokémon)
==============================
This is the dataset of fuuro/フウロ (Pokémon), containing 500 images and their tags.
The core tags of this character are 'red\_hair, hair\_ornament, breasts, blue\_eyes, sidelocks, large\_breasts, bangs, long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d7cb18c4e2b5a586f9f15c3be352165775c1a3f3
|
**Brief idea about dataset**:
<br>
This dataset is designed for Text Classification, specifically Multi-Class Classification, in order to train a model (Supervised Learning) for Sentiment Analysis.
<br>
It also supports retraining the model on user feedback about wrongly predicted sentiments; the **Other Features** below are used to manage that feedback loop.
**Main Features**
| text | labels |
|----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
| This feature variable contains all sorts of texts: sentences, tweets, etc. | This target variable contains 3 numeric sentiment values: 0, 1 and 2, where 0 means Negative, 1 means Neutral and 2 means Positive. |
**Other Features**
| preds | feedback | retrain_labels | retrained_preds |
|----------------------------------------------------------|--------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| In this variable all predictions are stored. | In this variable the user enters either yes or no to indicate whether the prediction is right or wrong. | In this variable the user enters the correct label as feedback in order to retrain the model. | In this variable all predictions made after the feedback loop are stored. |
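A minimal sketch of how the feedback columns could drive retraining (the split name and the yes/no handling below are illustrative assumptions, not part of the released schema):

```python
from datasets import load_dataset

LABEL_NAMES = {0: "Negative", 1: "Neutral", 2: "Positive"}

# main features: `text` and `labels`; the split name "train" is an assumption
ds = load_dataset("prasadsawant7/sentiment_analysis_preprocessed_dataset", split="train")
print(ds[0]["text"], LABEL_NAMES[ds[0]["labels"]])

def collect_retrain_rows(rows):
    """Keep rows whose prediction was flagged wrong ("no") and relabeled by the user."""
    return [
        {"text": r["text"], "labels": r["retrain_labels"]}
        for r in rows
        if r.get("feedback") == "no" and r.get("retrain_labels") is not None
    ]
```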
|
prasadsawant7/sentiment_analysis_preprocessed_dataset
|
[
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"sentiment-analysis",
"text-classification",
"multiclass-classification",
"region:us"
] |
2023-08-16T16:52:39+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Sentiment Analysis Preprocessed Dataset including training and testing split", "tags": ["sentiment-analysis", "text-classification", "multiclass-classification"]}
|
2023-08-16T18:01:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #sentiment-analysis #text-classification #multiclass-classification #region-us
|
Brief idea about dataset:
This dataset is designed for Text Classification, specifically Multi-Class Classification, in order to train a model (Supervised Learning) for Sentiment Analysis.
It also supports retraining the model on user feedback about wrongly predicted sentiments; the Other Features below are used to manage that feedback loop.
Main Features
Other Features
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #sentiment-analysis #text-classification #multiclass-classification #region-us \n"
] |
[
55
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #sentiment-analysis #text-classification #multiclass-classification #region-us \n"
] |
96e30aa0a05322d13ad8fde446dc3075d94a069b
|
This dataset originates from ehartford/wizard_vicuna_70k_unfiltered, with additional conversations removed for uncensored alignment.
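A minimal loading sketch (the schema is not documented in this card, so the snippet only inspects it; the split name "train" is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("digitalpipelines/wizard_vicuna_70k_uncensored", split="train")
print(ds.column_names)  # inspect the undocumented schema
print(ds[0])            # look at one conversation record
```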
|
digitalpipelines/wizard_vicuna_70k_uncensored
|
[
"region:us"
] |
2023-08-16T16:55:21+00:00
|
{}
|
2023-08-16T17:06:37+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset originates from ehartford/wizard_vicuna_70k_unfiltered, with additional conversations removed for uncensored alignment.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
5345e2fbcda85cef739836225e1bab2c8e33f4f8
|
# Dataset Card for "text2SPARQL-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
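A minimal loading sketch (feature names and split names come from this card's metadata block):

```python
from datasets import load_dataset

ds = load_dataset("Corentin-tin/text2SPARQL-dataset")
example = ds["train"][0]
print(example["model"])
for message in example["messages"]:  # chat-style list of {role, content} dicts
    print(message["role"], "::", message["content"][:80])
```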
|
Corentin-tin/text2SPARQL-dataset
|
[
"region:us"
] |
2023-08-16T16:57:40+00:00
|
{"dataset_info": {"features": [{"name": "model", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5365145.0, "num_examples": 446}, {"name": "test", "num_bytes": 450557.04615384614, "num_examples": 32}, {"name": "validation", "num_bytes": 464636.95384615386, "num_examples": 33}], "download_size": 280965, "dataset_size": 6280339.0}}
|
2023-08-16T18:27:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "text2SPARQL-dataset"
More Information needed
|
[
"# Dataset Card for \"text2SPARQL-dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"text2SPARQL-dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"text2SPARQL-dataset\"\n\nMore Information needed"
] |
0a2bc212033fe0a58da69f2f8ccbed514c46de3d
|
# Dataset of yuuki_asuna (Sword Art Online)
This is the dataset of yuuki_asuna (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/yuuki_asuna_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T17:03:59+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of yuuki_asuna (Sword Art Online)
This is the dataset of yuuki_asuna (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of yuuki_asuna (Sword Art Online)\n\nThis is the dataset of yuuki_asuna (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of yuuki_asuna (Sword Art Online)\n\nThis is the dataset of yuuki_asuna (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
87
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of yuuki_asuna (Sword Art Online)\n\nThis is the dataset of yuuki_asuna (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
6d8cd94d16e1c258e9ed78f29575bfd45e465427
|
# Dataset of bel/ベル (Pokémon)
This is the dataset of bel/ベル (Pokémon), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hat, short_hair, green_eyes, breasts, green_headwear, glasses`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 390.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bel_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 260.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bel_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 998 | 498.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bel_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 354.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bel_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 998 | 650.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bel_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/bel_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1boy, 1girl, hetero, blush, nipples, penis, sex, vaginal, open_mouth, cum_in_pussy, medium_breasts, solo_focus, completely_nude, large_breasts, outdoors, uncensored, grass, one_eye_closed, overflow, testicles |
| 1 | 10 |  |  |  |  |  | 1boy, 1girl, hetero, large_breasts, nipples, penis, solo_focus, blush, censored, open_mouth, cum, paizuri |
| 2 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, breast_grab, large_breasts, open_mouth, solo_focus, 2girls, grabbing_from_behind, groping, nipples, red-framed_eyewear, simple_background, smile, white_background |
| 3 | 5 |  |  |  |  |  | 1girl, censored, cum_in_pussy, nipples, solo, spread_legs, after_sex, blush, orange_pantyhose, tears, cumdrip, large_breasts, medium_breasts, torn_pantyhose, clothes_lift |
| 4 | 6 |  |  |  |  |  | 1girl, hat_bow, red-framed_eyewear, smile, solo, beret, looking_at_viewer, orange_jacket, semi-rimless_eyewear, closed_mouth, white_bow, adjusting_eyewear, blush, collarbone, simple_background, upper_body |
| 5 | 16 |  |  |  |  |  | 1girl, open_mouth, short_sleeves, :d, simple_background, solo, beret, orange_vest, tongue, upper_teeth_only, white_background, white_dress, collarbone, looking_at_viewer, bag, blush, eyelashes, hand_up, holding, pantyhose, poke_ball_(basic) |
| 6 | 22 |  |  |  |  |  | 1girl, open_mouth, hat_bow, :d, orange_jacket, tongue, upper_teeth_only, white_bow, open_jacket, semi-rimless_eyewear, looking_at_viewer, solo, long_sleeves, white_shirt, closed_eyes, collarbone, pants, beret, blush |
| 7 | 8 |  |  |  |  |  | 1girl, solo, blush, open_mouth, smile, simple_background, beret, closed_eyes, handbag, wristband, looking_at_viewer |
| 8 | 5 |  |  |  |  |  | 1girl, open_mouth, solo, ass, blush_stickers, looking_at_viewer, looking_back, :d, bag, from_behind |
| 9 | 5 |  |  |  |  |  | 1girl, blush, cleavage, solo, navel, bikini, bra, large_breasts, panties, sitting, :d, flower, frills, medium_breasts, open_mouth, striped, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | 1girl | hetero | blush | nipples | penis | sex | vaginal | open_mouth | cum_in_pussy | medium_breasts | solo_focus | completely_nude | large_breasts | outdoors | uncensored | grass | one_eye_closed | overflow | testicles | censored | cum | paizuri | breast_grab | 2girls | grabbing_from_behind | groping | red-framed_eyewear | simple_background | smile | white_background | solo | spread_legs | after_sex | orange_pantyhose | tears | cumdrip | torn_pantyhose | clothes_lift | hat_bow | beret | looking_at_viewer | orange_jacket | semi-rimless_eyewear | closed_mouth | white_bow | adjusting_eyewear | collarbone | upper_body | short_sleeves | :d | orange_vest | tongue | upper_teeth_only | white_dress | bag | eyelashes | hand_up | holding | pantyhose | poke_ball_(basic) | open_jacket | long_sleeves | white_shirt | closed_eyes | pants | handbag | wristband | ass | blush_stickers | looking_back | from_behind | cleavage | navel | bikini | bra | panties | sitting | flower | frills | striped | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:---------|:--------|:----------|:--------|:------|:----------|:-------------|:---------------|:-----------------|:-------------|:------------------|:----------------|:-----------|:-------------|:--------|:-----------------|:-----------|:------------|:-----------|:------|:----------|:--------------|:---------|:-----------------------|:----------|:---------------------|:--------------------|:--------|:-------------------|:-------|:--------------|:------------|:-------------------|:--------|:----------|:-----------------|:---------------|:----------|:--------|:--------------------|:----------------|:-----------------------|:---------------|:------------|:--------------------|:-------------|:-------------|:----------------|:-----|:--------------|:---------|:-------------------|:--------------|:------|:------------|:----------|:----------|:------------|:--------------------|:--------------|:---------------|:--------------|:--------------|:--------|:----------|:------------|:------|:-----------------|:---------------|:--------------|:-----------|:--------|:---------|:------|:----------|:----------|:---------|:---------|:----------|:-------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | | | X | | | X | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | | | | X | | | X | | X | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | | X | | X | X | | | | | X | X | | | X | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 16 |  |  |  |  |  | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | X | | X | X | | | | | | | | | X | X | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 6 | 22 |  |  |  |  |  | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | | X | | X | | | X | | X | X | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | X | | | | | X | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | | X | | X | | | | | X | | X | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/bel_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T17:12:22+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:38:11+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of bel/ベル (Pokémon)
===========================
This is the dataset of bel/ベル (Pokémon), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, hat, short\_hair, green\_eyes, breasts, green\_headwear, glasses', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
e5741f2a4145b54b2e4dbb2bb8653c0ab0b938ea
|
# DragonFire0159x/steamreviews
Yet another dataset with Steam Reviews
Available in Russian (Mostly) and English
|
DragonFire0159x/steamreviews
|
[
"task_categories:text-generation",
"language:ru",
"language:en",
"license:mit",
"region:us"
] |
2023-08-16T17:18:47+00:00
|
{"language": ["ru", "en"], "license": "mit", "task_categories": ["text-generation"]}
|
2024-02-11T08:08:03+00:00
|
[] |
[
"ru",
"en"
] |
TAGS
#task_categories-text-generation #language-Russian #language-English #license-mit #region-us
|
# DragonFire0159x/steamreviews
Yet another dataset with Steam Reviews
Available in Russian (Mostly) and English
|
[
"# DragonFire0159x/steamreviews\nYet another dataset with Steam Reviews\n\nAvailable in Russian (Mostly) and English"
] |
[
"TAGS\n#task_categories-text-generation #language-Russian #language-English #license-mit #region-us \n",
"# DragonFire0159x/steamreviews\nYet another dataset with Steam Reviews\n\nAvailable in Russian (Mostly) and English"
] |
[
31,
30
] |
[
"passage: TAGS\n#task_categories-text-generation #language-Russian #language-English #license-mit #region-us \n# DragonFire0159x/steamreviews\nYet another dataset with Steam Reviews\n\nAvailable in Russian (Mostly) and English"
] |
866b63894c67fb2d61bcf1089d9ae959245cbd8b
|
This dataset includes a collection of image samples of Jacaranda, Palm and others.
They were clipped from Eagle Aerial images of Orange County, California.
These samples have been used for training a deep learning model to classify Jacaranda, and can also be used to train a model for Palm.
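A minimal training-data sketch (it assumes the clips are organized into one folder per class, e.g. `jacaranda/`, `palm/`, `other/`; this layout is not guaranteed by the card):

```python
from datasets import load_dataset

# imagefolder infers class labels from the directory names
ds = load_dataset("imagefolder", data_dir="tree_clips", split="train")
print(ds.features["label"].names)  # e.g. ['jacaranda', 'other', 'palm']
print(ds[0]["image"].size)         # PIL image size of the first clip
```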
|
lily-hust/Tree-clips-RS-imagery
|
[
"license:mit",
"region:us"
] |
2023-08-16T17:29:56+00:00
|
{"license": "mit"}
|
2023-08-16T17:35:27+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
This dataset includes a collection of image samples of Jacaranda, Palm and others.
They were clipped from Eagle Aerial images of Orange County, California.
These samples have been used for training a deep learning model to classify Jacaranda, and can also be used to train a model for Palm.
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
8a6dbc84c9e801c9c4988e42943bda29b94d12c7
|
# Dataset Card for "invoices_instruct_vf_weird"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
GalaktischeGurke/invoices_instruct_vf_weird
|
[
"region:us"
] |
2023-08-16T17:40:00+00:00
|
{"dataset_info": {"features": [{"name": "ground_truth", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2720389, "num_examples": 501}], "download_size": 1109588, "dataset_size": 2720389}}
|
2023-08-16T17:42:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "invoices_instruct_vf_weird"
More Information needed
|
[
"# Dataset Card for \"invoices_instruct_vf_weird\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"invoices_instruct_vf_weird\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"invoices_instruct_vf_weird\"\n\nMore Information needed"
] |
a48e9b2a272237e9922e2cccda315db16731b482
|
# Dataset of konno_yuuki (Sword Art Online)
This is the dataset of konno_yuuki (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/konno_yuuki_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T17:42:50+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of konno_yuuki (Sword Art Online)
This is the dataset of konno_yuuki (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of konno_yuuki (Sword Art Online)\n\nThis is the dataset of konno_yuuki (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of konno_yuuki (Sword Art Online)\n\nThis is the dataset of konno_yuuki (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
85
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of konno_yuuki (Sword Art Online)\n\nThis is the dataset of konno_yuuki (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
583d72b039ab7e254e76e9e82f2919f2e4645ccf
|
# Dataset of shikimi/シキミ (Pokémon)
This is the dataset of shikimi/シキミ (Pokémon), containing 389 images and their tags.
The core tags of this character are `purple_hair, glasses, short_hair, breasts, purple_eyes, bob_cut, bangs, blunt_bangs, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 389 | 316.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikimi_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 389 | 211.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikimi_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 796 | 379.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikimi_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 389 | 292.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikimi_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 796 | 496.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikimi_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shikimi_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, purple_skirt, solo, cleavage, black_pantyhose, book, elbow_gloves, pen, smile, medium_breasts, holding, looking_at_viewer, shoes |
| 1 | 9 |  |  |  |  |  | 1girl, elbow_gloves, looking_at_viewer, purple_dress, purple_skirt, cleavage, holding_pen, black_pantyhose, holding_book, black_gloves, large_bow, pokemon_(creature), :o, open_mouth, smile |
| 2 | 18 |  |  |  |  |  | 1girl, hetero, nipples, sex, 1boy, pantyhose, blush, open_mouth, torn_clothes, penis, sweat, vaginal, heart, breasts_out, cum_in_pussy, pokemon_(creature), pokephilia, elbow_gloves, nude, purple_skirt, solo_focus, tongue, uncensored |
| 3 | 24 |  |  |  |  |  | 1girl, 1boy, hetero, penis, solo_focus, blush, gloves, nipples, paizuri, facial, cum_on_breasts, mosaic_censoring, sweat |
| 4 | 28 |  |  |  |  |  | 1girl, nude, solo, nipples, navel, blush, looking_at_viewer, pussy, smile, collarbone, medium_breasts, censored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | purple_skirt | solo | cleavage | black_pantyhose | book | elbow_gloves | pen | smile | medium_breasts | holding | looking_at_viewer | shoes | purple_dress | holding_pen | holding_book | black_gloves | large_bow | pokemon_(creature) | :o | open_mouth | hetero | nipples | sex | 1boy | pantyhose | blush | torn_clothes | penis | sweat | vaginal | heart | breasts_out | cum_in_pussy | pokephilia | nude | solo_focus | tongue | uncensored | gloves | paizuri | facial | cum_on_breasts | mosaic_censoring | navel | pussy | collarbone | censored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:-----------|:------------------|:-------|:---------------|:------|:--------|:-----------------|:----------|:--------------------|:--------|:---------------|:--------------|:---------------|:---------------|:------------|:---------------------|:-----|:-------------|:---------|:----------|:------|:-------|:------------|:--------|:---------------|:--------|:--------|:----------|:--------|:--------------|:---------------|:-------------|:-------|:-------------|:---------|:-------------|:---------|:----------|:---------|:-----------------|:-------------------|:--------|:--------|:-------------|:-----------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | | X | X | | X | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 18 |  |  |  |  |  | X | X | | | | | X | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | | X | X | | X | | X | | X | X | | | | | | | X | | | X | X | X | X | X | | | | |
| 4 | 28 |  |  |  |  |  | X | | X | | | | | | X | X | | X | | | | | | | | | | | X | | | | X | | | | | | | | | X | | | | | | | | | X | X | X | X |
|
CyberHarem/shikimi_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T17:57:22+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T13:17:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shikimi/シキミ (Pokémon)
================================
This is the dataset of shikimi/シキミ (Pokémon), containing 389 images and their tags.
The core tags of this character are 'purple\_hair, glasses, short\_hair, breasts, purple\_eyes, bob\_cut, bangs, blunt\_bangs, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d16d43b0493decc3a88076ee4a1f013651bd26ce
|
# Dataset Card for "ecg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jxie/ecg
|
[
"region:us"
] |
2023-08-16T18:21:04+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "sequence": {"sequence": "float64"}}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 34294848, "num_examples": 1904}, {"name": "val", "num_bytes": 196691040, "num_examples": 10920}, {"name": "train", "num_bytes": 786638076, "num_examples": 43673}], "download_size": 137072440, "dataset_size": 1017623964}}
|
2023-08-16T18:21:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ecg"
More Information needed
|
[
"# Dataset Card for \"ecg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ecg\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ecg\"\n\nMore Information needed"
] |
cb052374dc0519a2b11f5ff22b4d3a500734df46
|
# Dataset Card for "emg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jxie/emg
|
[
"region:us"
] |
2023-08-16T18:21:27+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "sequence": {"sequence": "float64"}}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "val", "num_bytes": 738492, "num_examples": 41}, {"name": "train", "num_bytes": 2197464, "num_examples": 122}, {"name": "test", "num_bytes": 738492, "num_examples": 41}], "download_size": 472145, "dataset_size": 3674448}}
|
2023-08-16T18:21:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "emg"
More Information needed
|
[
"# Dataset Card for \"emg\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"emg\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"emg\"\n\nMore Information needed"
] |
a5f2b6fb04caa67bc9e60e84dd434b67b4bd9cdf
|
All models are generated with Harvest mode, 500 steps
|
Karlend/RVC_Models
|
[
"region:us"
] |
2023-08-16T18:30:10+00:00
|
{}
|
2023-11-19T16:00:00+00:00
|
[] |
[] |
TAGS
#region-us
|
All models are generated with Harvest mode, 500 steps
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
599ad74672e3b38d497f7a2f5b83135a637b254b
|
## Dataset information
This is a 19-minute dataset of [Javier Milei](https://en.wikipedia.org/wiki/Javier_Milei)'s voice. The voice recordings
were extracted from [this interview](https://youtu.be/5Z8JRRIhRAo) conducted by [Neura Media](https://www.neura.media/).
|
raycast6000/javiermilei
|
[
"language:es",
"license:openrail",
"region:us"
] |
2023-08-16T18:34:36+00:00
|
{"language": ["es"], "license": "openrail"}
|
2023-08-16T19:45:22+00:00
|
[] |
[
"es"
] |
TAGS
#language-Spanish #license-openrail #region-us
|
## Dataset information
This is a 19-minute dataset of Javier Milei's voice. The voice recordings
were extracted from this interview conducted by Neura Media.
|
[
"## Dataset information\nThis ia a 19 minute dataset of Javier Milei's voice. The voice recordings\nwere extracted from this interview conducted by Neura Media."
] |
[
"TAGS\n#language-Spanish #license-openrail #region-us \n",
"## Dataset information\nThis ia a 19 minute dataset of Javier Milei's voice. The voice recordings\nwere extracted from this interview conducted by Neura Media."
] |
[
17,
36
] |
[
"passage: TAGS\n#language-Spanish #license-openrail #region-us \n## Dataset information\nThis ia a 19 minute dataset of Javier Milei's voice. The voice recordings\nwere extracted from this interview conducted by Neura Media."
] |
324872bd93822ecae1903f79cefa4dbc4b3edcaa
|
# Dataset of ayano_keiko (Sword Art Online)
This is the dataset of ayano_keiko (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
|
CyberHarem/ayano_keiko_swordartonline
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T18:39:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:10:40+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
# Dataset of ayano_keiko (Sword Art Online)
This is the dataset of ayano_keiko (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[
"# Dataset of ayano_keiko (Sword Art Online)\n\nThis is the dataset of ayano_keiko (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"# Dataset of ayano_keiko (Sword Art Online)\n\nThis is the dataset of ayano_keiko (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
[
44,
85
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n# Dataset of ayano_keiko (Sword Art Online)\n\nThis is the dataset of ayano_keiko (Sword Art Online), containing 200 images and their tags.\n\nImages are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization)."
] |
02b2abd7c27459929ef61a7701bc1b850a3f7f45
|
# Dataset Card for "Emotion_Recognition_4_llama2_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RikoteMaster/Emotion_Recognition_4_llama2_v3
|
[
"region:us"
] |
2023-08-16T19:00:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Text_processed", "dtype": "string"}, {"name": "Emotion", "dtype": "string"}, {"name": "Augmented", "dtype": "bool"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28873301, "num_examples": 61463}], "download_size": 9012554, "dataset_size": 28873301}}
|
2023-08-16T19:00:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Emotion_Recognition_4_llama2_v3"
More Information needed
|
[
"# Dataset Card for \"Emotion_Recognition_4_llama2_v3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Emotion_Recognition_4_llama2_v3\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Emotion_Recognition_4_llama2_v3\"\n\nMore Information needed"
] |
ef292da30eee14e6e68e05ebae08a09914335913
|
# Dataset Card for "Biorxiv_abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/Biorxiv_abstracts
|
[
"region:us"
] |
2023-08-16T19:19:04+00:00
|
{"dataset_info": {"features": [{"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19033309, "num_examples": 11803}], "download_size": 10617303, "dataset_size": 19033309}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-16T19:19:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Biorxiv_abstracts"
More Information needed
|
[
"# Dataset Card for \"Biorxiv_abstracts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Biorxiv_abstracts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Biorxiv_abstracts\"\n\nMore Information needed"
] |
69f3cc473002ccd1c1865f8578c6aeacc7a277b9
|
# Dataset of higana (Pokémon)
This is the dataset of higana (Pokémon), containing 223 images and their tags.
The core tags of this character are `black_hair, breasts, short_hair, red_eyes, dark_skin, large_breasts, short_ponytail, dark-skinned_female`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 223 | 198.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/higana_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 223 | 121.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/higana_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 540 | 254.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/higana_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 223 | 181.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/higana_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 540 | 337.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/higana_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/higana_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined here (a small filtering sketch follows the tables below).
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 25 |  |  |  |  |  | 1girl, solo, nipples, blush, nude, smile, looking_at_viewer, navel, pussy |
| 1 | 21 |  |  |  |  |  | 1boy, 1girl, hetero, sex, solo_focus, vaginal, blush, sweat, nude, nipples, penis, girl_on_top, open_mouth, bar_censor, pussy, cowgirl_position |
| 2 | 5 |  |  |  |  |  | 1boy, 1girl, barefoot, blush, feet, hetero, penis, solo_focus, toes, mosaic_censoring, navel, smile, two-footed_footjob, nipples, sweat, bikini, cleavage, ejaculation, naked_cape, naked_cloak, nude, open_mouth, pov |
| 3 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, short_shorts, smile, blush, grey_thighhighs, solo, bangs, cloak, over-kneehighs, bare_shoulders, black_shirt, cleavage, grey_shorts, open_mouth, pokemon_(creature), simple_background, sleeveless_shirt, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | nipples | blush | nude | smile | looking_at_viewer | navel | pussy | 1boy | hetero | sex | solo_focus | vaginal | sweat | penis | girl_on_top | open_mouth | bar_censor | cowgirl_position | barefoot | feet | toes | mosaic_censoring | two-footed_footjob | bikini | cleavage | ejaculation | naked_cape | naked_cloak | pov | short_shorts | grey_thighhighs | bangs | cloak | over-kneehighs | bare_shoulders | black_shirt | grey_shorts | pokemon_(creature) | simple_background | sleeveless_shirt | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:--------|:-------|:--------|:--------------------|:--------|:--------|:-------|:---------|:------|:-------------|:----------|:--------|:--------|:--------------|:-------------|:-------------|:-------------------|:-----------|:-------|:-------|:-------------------|:---------------------|:---------|:-----------|:--------------|:-------------|:--------------|:------|:---------------|:------------------|:--------|:--------|:-----------------|:-----------------|:--------------|:--------------|:---------------------|:--------------------|:-------------------|:-------------------|
| 0 | 25 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 21 |  |  |  |  |  | X | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | X | X | X | | X | | X | X | | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | X | | X | X | | | | | | | | | | | X | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/higana_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T19:20:25+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T14:48:21+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of higana (Pokémon)
===========================
This is the dataset of higana (Pokémon), containing 223 images and their tags.
The core tags of this character are 'black\_hair, breasts, short\_hair, red\_eyes, dark\_skin, large\_breasts, short\_ponytail, dark-skinned\_female', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some recurring outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
796b707ce9ad522c44bb3f0be1cc112455e1a513
|
# Dataset Card for resume dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
job, resume point
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Isabella Shapland, using OpenAI
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
isashap/testresumejobs
|
[
"region:us"
] |
2023-08-16T19:23:05+00:00
|
{}
|
2023-08-16T19:24:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for resume dataset
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
en
## Dataset Structure
### Data Instances
### Data Fields
job, resume point
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Isabella Shapland, using OpenAI
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for resume dataset",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nen",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\njob, resume point",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nIsabella Shapland using open ai",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for resume dataset",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nen",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\njob, resume point",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\nIsabella Shapland using open ai",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
5,
6,
6,
9,
5,
5,
7,
4,
10,
18,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for resume dataset## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages\n\nen## Dataset Structure### Data Instances### Data Fields\n\njob, resume point### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\nIsabella Shapland using open ai### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4f9a37deadbb41b4ac1d804399d9df6d449cd25f
|
# Dataset Card for CC OpenBooks
## Dataset Description
CC OpenBooks is a curated collection of high-quality non-fiction books. All texts are from CC-BY-4.0 sources, with no license ambiguity.
The documents are normalized to markdown, and care is taken to ensure most formatting (e.g. inline LaTeX) remains intact. Files are manually inspected and cleaned of defects wherever possible.
### Source Data
The following [OpenStax](https://github.com/openstax) collections were used in creating this dataset:
- Introduction to Anthropology
- College Success Concise
- College Success
- Preparing for College Success
- Microbiology
- Chemistry 2e
- Chemistry: Atoms First 2e
- Física universitaria volumen 1
- Física universitaria volumen 2
- Física universitaria volumen 3
- Introduction to Business
- Astronomy 2e
- Principles of Marketing
- Psychologia
- Contemporary Mathematics
- Statistics
- World History Volume 1, to 1500
- World History Volume 2, from 1400
- Physics
- Introduction to Political Science
- Introducción a la estadística empresarial
- Introducción a la estadística
- Entrepreneurship
- Fizyka dla szkół wyższych. Tom 1
- Fizyka dla szkół wyższych. Tom 2
- Fizyka dla szkół wyższych. Tom 3
- Writing Guide with Handbook
- Biology 2e
- Biology for AP® Courses
- Concepts of Biology
- Introduction to Sociology 3e
- Life, Liberty, and the Pursuit of Happiness
- Precálculo 2ed
- Psychology 2e
- Playground
- University Physics Volume 1
- University Physics Volume 2
- University Physics Volume 3
- Principles of Finance
- U.S. History
- American Government 3e
- Anatomy and Physiology 2e
- Química 2ed
- Química: Comenzando con los átomos 2ed
- Elementary Algebra 2e
- Intermediate Algebra 2e
- Prealgebra 2e
- Business Ethics
- Organizational Behavior
- Principles of Management
- Introduction to Intellectual Property
- Principles of Economics 3e
- Principles of Macroeconomics 3e
- Principles of Macroeconomics for AP® Courses 2e
- Algebra and Trigonometry 2e
- College Algebra 2e
- College Algebra with Corequisite Support 2e
- Precalculus 2e
- Introduction to Philosophy
- College Physics 2e
- College Physics for AP® Courses 2e
- Mikroekonomia – Podstawy
Books from other sources:
- [Byte of Python](https://github.com/swaroopch/byte-of-python)
- [Non-Programmer's Tutorial for Python 3](https://en.wikibooks.org/wiki/Non-Programmer%27s_Tutorial_for_Python_3)
- [Python Programming](https://en.wikibooks.org/wiki/Python_Programming)
- [Algorithms](https://en.wikibooks.org/wiki/Algorithms)
- [Communication Theory](https://en.wikibooks.org/wiki/Communication_Theory)
- [C Programming](https://en.wikibooks.org/wiki/C_Programming)
- [C Sharp Programming](https://en.wikibooks.org/wiki/C_Sharp_Programming)
- [Formal Logic](https://en.wikibooks.org/wiki/Formal_Logic)
- [Haskell](https://en.wikibooks.org/wiki/Haskell)
- [How To Assemble A Desktop PC](https://en.wikibooks.org/wiki/How_To_Assemble_A_Desktop_PC)
- [LaTeX](https://en.wikibooks.org/wiki/LaTeX)
- [OpenSSH](https://en.wikibooks.org/wiki/OpenSSH)
- [Write Yourself a Scheme in 48 Hours](https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours)
- [X86 Disassembly](https://en.wikibooks.org/wiki/X86_Disassembly)
- [XML - Managing Data Exchange](https://en.wikibooks.org/wiki/XML_-_Managing_Data_Exchange)
- [Bourne Shell Scripting](https://en.wikibooks.org/wiki/Bourne_Shell_Scripting)
- [F Sharp Programming](https://en.wikibooks.org/wiki/F_Sharp_Programming)
- [Tcl Programming](https://en.wikibooks.org/wiki/Tcl_Programming)
- [Java Programming](https://en.wikibooks.org/wiki/Java_Programming)
- [MATLAB Programming](https://en.wikibooks.org/wiki/MATLAB_Programming)
- [MySQL](https://en.wikibooks.org/wiki/MySQL)
- [Foundations of Computer_Science](https://en.wikibooks.org/wiki/Foundations_of_Computer_Science)
- [Introduction to Numerical Methods](https://en.wikibooks.org/wiki/Introduction_to_Numerical_Methods)
- [Think Python](https://en.wikibooks.org/wiki/Think_Python)
- [Engineering Acoustics](https://en.wikibooks.org/wiki/Engineering_Acoustics)
- [Control Systems](https://en.wikibooks.org/wiki/Control_Systems)
- [Sensory Systems](https://en.wikibooks.org/wiki/Sensory_Systems)
- [Transportation Economics](https://en.wikibooks.org/wiki/Transportation_Economics)
- [Circuit Theory](https://en.wikibooks.org/wiki/Circuit_Theory)
- [Communication Systems](https://en.wikibooks.org/wiki/Communication_Systems)
- [Spanish](https://en.wikibooks.org/wiki/Spanish/Contents)
- [Latin](https://en.wikibooks.org/wiki/Latin)
- [English in Use](https://en.wikibooks.org/wiki/English_in_Use)
- [French](https://en.wikibooks.org/wiki/French)
- [German](https://en.wikibooks.org/wiki/German)
- [High School Mathematics Extensions](https://en.wikibooks.org/wiki/High_School_Mathematics_Extensions)
- [Linear Algebra](https://en.wikibooks.org/wiki/Linear_Algebra)
- [Timeless Theorems of Mathematics](https://en.wikibooks.org/wiki/Timeless_Theorems_of_Mathematics)
- [A Brief Introduction to Engineering Computation with MATLAB](https://collection.bccampus.ca/textbooks/a-brief-introduction-to-engineering-computation-with-matlab/)
- [Aerodynamics and Aircraft Performance, 3rd edition](https://vtechworks.lib.vt.edu/handle/10919/96525)
- [Acoustics](https://en.wikibooks.org/wiki/Acoustics)
- [Ada_Programming](https://en.wikibooks.org/wiki/Ada_Programming)
- [Algorithms](https://en.wikibooks.org/wiki/Algorithms)
- [Anatomy_and_Physiology_of_Animals](https://en.wikibooks.org/wiki/Anatomy_and_Physiology_of_Animals)
- [Applications_of_ICT_in_Libraries](https://en.wikibooks.org/wiki/Applications_of_ICT_in_Libraries)
- [Arimaa](https://en.wikibooks.org/wiki/Arimaa)
- [A-level_Computing/AQA](https://en.wikibooks.org/wiki/A-level_Computing/AQA)
- [Basic_Physics_of_Nuclear_Medicine](https://en.wikibooks.org/wiki/Basic_Physics_of_Nuclear_Medicine)
- [Blended_Learning_in_K-12](https://en.wikibooks.org/wiki/Blended_Learning_in_K-12)
- [Blender_3D:_Noob_to_Pro](https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro)
- [C_Programming](https://en.wikibooks.org/wiki/C_Programming)
- [Chess](https://en.wikibooks.org/wiki/Chess)
- [Coaching_Youth_Middle_Distance_Runners](https://en.wikibooks.org/wiki/Coaching_Youth_Middle_Distance_Runners)
- [Cognitive_Psychology_and_Cognitive_Neuroscience](https://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cognitive_Neuroscience)
- [Consciousness_Studies](https://en.wikibooks.org/wiki/Consciousness_Studies)
- [Elements_of_Political_Communication](https://en.wikibooks.org/wiki/Elements_of_Political_Communication)
- [Engineering_Acoustics](https://en.wikibooks.org/wiki/Engineering_Acoustics)
- [European_History](https://en.wikibooks.org/wiki/European_History)
- [First_Aid](https://en.wikibooks.org/wiki/First_Aid)
- [Formal_Logic](https://en.wikibooks.org/wiki/Formal_Logic)
- [Fundamentals_of_Transportation](https://en.wikibooks.org/wiki/Fundamentals_of_Transportation)
- [Guitar](https://en.wikibooks.org/wiki/Guitar)
- [High_School_Mathematics_Extensions](https://en.wikibooks.org/wiki/High_School_Mathematics_Extensions)
- [Historical_Geology](https://en.wikibooks.org/wiki/Historical_Geology)
- [How_To_Assemble_A_Desktop_PC](https://en.wikibooks.org/wiki/How_To_Assemble_A_Desktop_PC)
- [Human_Physiology](https://en.wikibooks.org/wiki/Human_Physiology)
- [Introduction_to_Paleoanthropology](https://en.wikibooks.org/wiki/Introduction_to_Paleoanthropology)
- [Introduction_to_Sociology](https://en.wikibooks.org/wiki/Introduction_to_Sociology)
- [Knowing_Knoppix](https://en.wikibooks.org/wiki/Knowing_Knoppix)
- [Learning_Theories](https://en.wikibooks.org/wiki/Learning_Theories)
- [Linear_Algebra](https://en.wikibooks.org/wiki/Linear_Algebra)
- [Lucid_Dreaming](https://en.wikibooks.org/wiki/Lucid_Dreaming)
- [Managing_Groups_and_Teams](https://en.wikibooks.org/wiki/Managing_Groups_and_Teams)
- [Miskito](https://en.wikibooks.org/wiki/Miskito)
- [Muggles'_Guide_to_Harry_Potter](https://en.wikibooks.org/wiki/Muggles%27_Guide_to_Harry_Potter)
- [New_Zealand_History](https://en.wikibooks.org/wiki/New_Zealand_History)
- [Physics_Study_Guide](https://en.wikibooks.org/wiki/Physics_Study_Guide)
- [Proteomics](https://en.wikibooks.org/wiki/Proteomics)
- [Radiation_Oncology](https://en.wikibooks.org/wiki/Radiation_Oncology)
- [Social_and_Cultural_Foundations_of_American_Education](https://en.wikibooks.org/wiki/Social_and_Cultural_Foundations_of_American_Education)
- [Special_Relativity](https://en.wikibooks.org/wiki/Special_Relativity)
- [Speech-Language_Pathology/Stuttering](https://en.wikibooks.org/wiki/Speech-Language_Pathology/Stuttering)
- [This_Quantum_World](https://en.wikibooks.org/wiki/This_Quantum_World)
- [UK_Constitution_and_Government](https://en.wikibooks.org/wiki/UK_Constitution_and_Government)
- [UNDP-APDIP_Books](https://en.wikibooks.org/wiki/UNDP-APDIP_Books)
- [Using_Wikibooks](https://en.wikibooks.org/wiki/Using_Wikibooks)
- [Wikijunior:Solar_System](https://en.wikibooks.org/wiki/Wikijunior:Solar_System)
- [XForms](https://en.wikibooks.org/wiki/XForms)
- [Zine_Making](https://en.wikibooks.org/wiki/Zine_Making)
- [Basic_Computing_Using_Windows](https://en.wikibooks.org/wiki/Basic_Computing_Using_Windows)
- [Cognitive_Psychology_and_Cognitive_Neuroscience](https://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cognitive_Neuroscience)
- [Movie_Making_Manual](https://en.wikibooks.org/wiki/Movie_Making_Manual)
- [Organic_Chemistry](https://en.wikibooks.org/wiki/Organic_Chemistry)
- [European_History](https://en.wikibooks.org/wiki/European_History)
- [Cookbook](https://en.wikibooks.org/wiki/Cookbook)
- [Chess](https://en.wikibooks.org/wiki/Chess)
- [Japanese](https://en.wikibooks.org/wiki/Japanese)
- [Consciousness_Studies](https://en.wikibooks.org/wiki/Consciousness_Studies)
- [Chinese_(Mandarin)](https://en.wikibooks.org/wiki/Chinese_%28Mandarin%29)
- [Wikijunior:Solar_System](https://en.wikibooks.org/wiki/Wikijunior:Solar_System)
- [Blender_3D:_Noob_to_Pro](https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro)
- [FHSST_Physics](https://en.wikibooks.org/wiki/FHSST_Physics)
- [How_To_Assemble_A_Desktop_PC](https://en.wikibooks.org/wiki/How_To_Assemble_A_Desktop_PC)
- [History_of_the_United_States](https://en.wikibooks.org/wiki/History_of_the_United_States)
- [High_School_Mathematics_Extensions](https://en.wikibooks.org/wiki/High_School_Mathematics_Extensions)
- [Lucid_Dreaming](https://en.wikibooks.org/wiki/Lucid_Dreaming)
- [Nanotechnology](https://en.wikibooks.org/wiki/Nanotechnology)
- [Introduction to Online Convex Optimization](https://arxiv.org/abs/1909.05207)
- [Structure and Interpretation of Computer Programs](https://github.com/sarabander/sicp-pdf)
- [Convex Optimization: Algorithms and Complexity](https://arxiv.org/abs/1405.4980)
- [Trustworthy Machine Learning](https://arxiv.org/abs/2310.08215)
#### Initial Data Collection and Normalization
Wherever possible, the books are converted to markdown. This formatting is kept intact with downstream tasks in mind (e.g. conversational QA).
The source of the text is prepended to each document to add context; this may also improve the source-attribution and guidance capabilities of models.
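As a concrete illustration of that prepending step, a minimal sketch might look like the snippet below. The exact header format used in the released files is not specified here, so the `Source:` line, the file path, and the title string are assumptions.

```python
# a minimal sketch of prepending the source to a markdown document
from pathlib import Path

def prepend_source(md_path: Path, source_name: str) -> str:
    """Return the document text with its source prepended for context."""
    body = md_path.read_text(encoding="utf-8")
    return f"Source: {source_name}\n\n{body}"

# hypothetical path and title, for illustration only
text = prepend_source(Path("books/astronomy_2e.md"), "OpenStax: Astronomy 2e")
```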
### Licensing Information
All books in this collection were previously released with an unambiguous cc-by-4.0 license by the original authors.
|
Daniel-P-Gonzalez/CCOpenBooks
|
[
"task_categories:text-generation",
"language:en",
"language:es",
"language:pl",
"license:cc-by-4.0",
"arxiv:1909.05207",
"arxiv:1405.4980",
"arxiv:2310.08215",
"region:us"
] |
2023-08-16T19:45:51+00:00
|
{"language": ["en", "es", "pl"], "license": "cc-by-4.0", "task_categories": ["text-generation"], "pretty_name": "CC OpenBooks"}
|
2023-11-30T03:34:19+00:00
|
[
"1909.05207",
"1405.4980",
"2310.08215"
] |
[
"en",
"es",
"pl"
] |
TAGS
#task_categories-text-generation #language-English #language-Spanish #language-Polish #license-cc-by-4.0 #arxiv-1909.05207 #arxiv-1405.4980 #arxiv-2310.08215 #region-us
|
# Dataset Card for CC OpenBooks
## Dataset Description
CC OpenBooks is a curated collection of high-quality non-fiction books. All texts are from CC-BY-4.0 sources, with no license ambiguity.
The documents are normalized to markdown, and care is taken to ensure most formatting (e.g. inline LaTeX) remains intact. Files are manually inspected and cleaned of defects wherever possible.
### Source Data
The following OpenStax collections were used in creating this dataset:
- Introduction to Anthropology
- College Success Concise
- College Success
- Preparing for College Success
- Microbiology
- Chemistry 2e
- Chemistry: Atoms First 2e
- Física universitaria volumen 1
- Física universitaria volumen 2
- Física universitaria volumen 3
- Introduction to Business
- Astronomy 2e
- Principles of Marketing
- Psychologia
- Contemporary Mathematics
- Statistics
- World History Volume 1, to 1500
- World History Volume 2, from 1400
- Physics
- Introduction to Political Science
- Introducción a la estadística empresarial
- Introducción a la estadística
- Entrepreneurship
- Fizyka dla szkół wyższych. Tom 1
- Fizyka dla szkół wyższych. Tom 2
- Fizyka dla szkół wyższych. Tom 3
- Writing Guide with Handbook
- Biology 2e
- Biology for AP® Courses
- Concepts of Biology
- Introduction to Sociology 3e
- Life, Liberty, and the Pursuit of Happiness
- Precálculo 2ed
- Psychology 2e
- Playground
- University Physics Volume 1
- University Physics Volume 2
- University Physics Volume 3
- Principles of Finance
- U.S. History
- American Government 3e
- Anatomy and Physiology 2e
- Química 2ed
- Química: Comenzando con los átomos 2ed
- Elementary Algebra 2e
- Intermediate Algebra 2e
- Prealgebra 2e
- Business Ethics
- Organizational Behavior
- Principles of Management
- Introduction to Intellectual Property
- Principles of Economics 3e
- Principles of Macroeconomics 3e
- Principles of Macroeconomics for AP® Courses 2e
- Algebra and Trigonometry 2e
- College Algebra 2e
- College Algebra with Corequisite Support 2e
- Precalculus 2e
- Introduction to Philosophy
- College Physics 2e
- College Physics for AP® Courses 2e
- Mikroekonomia – Podstawy
Books from other sources:
- Byte of Python
- Non-Programmer's Tutorial for Python 3
- Python Programming
- Algorithms
- Communication Theory
- C Programming
- C Sharp Programming
- Formal Logic
- Haskell
- How To Assemble A Desktop PC
- LaTeX
- OpenSSH
- Write Yourself a Scheme in 48 Hours
- X86 Disassembly
- XML - Managing Data Exchange
- Bourne Shell Scripting
- F Sharp Programming
- Tcl Programming
- Java Programming
- MATLAB Programming
- MySQL
- Foundations of Computer_Science
- Introduction to Numerical Methods
- Think Python
- Engineering Acoustics
- Control Systems
- Sensory Systems
- Transportation Economics
- Circuit Theory
- Communication Systems
- Spanish
- Latin
- English in Use
- French
- German
- High School Mathematics Extensions
- Linear Algebra
- Timeless Theorems of Mathematics
- A Brief Introduction to Engineering Computation with MATLAB
- Aerodynamics and Aircraft Performance, 3rd edition
- Acoustics
- Ada_Programming
- Algorithms
- Anatomy_and_Physiology_of_Animals
- Applications_of_ICT_in_Libraries
- Arimaa
- A-level_Computing/AQA
- Basic_Physics_of_Nuclear_Medicine
- Blended_Learning_in_K-12
- Blender_3D:_Noob_to_Pro
- C_Programming
- Chess
- Coaching_Youth_Middle_Distance_Runners
- Cognitive_Psychology_and_Cognitive_Neuroscience
- Consciousness_Studies
- Elements_of_Political_Communication
- Engineering_Acoustics
- European_History
- First_Aid
- Formal_Logic
- Fundamentals_of_Transportation
- Guitar
- High_School_Mathematics_Extensions
- Historical_Geology
- How_To_Assemble_A_Desktop_PC
- Human_Physiology
- Introduction_to_Paleoanthropology
- Introduction_to_Sociology
- Knowing_Knoppix
- Learning_Theories
- Linear_Algebra
- Lucid_Dreaming
- Managing_Groups_and_Teams
- Miskito
- Muggles'_Guide_to_Harry_Potter
- New_Zealand_History
- Physics_Study_Guide
- Proteomics
- Radiation_Oncology
- Social_and_Cultural_Foundations_of_American_Education
- Special_Relativity
- Speech-Language_Pathology/Stuttering
- This_Quantum_World
- UK_Constitution_and_Government
- UNDP-APDIP_Books
- Using_Wikibooks
- Wikijunior:Solar_System
- XForms
- Zine_Making
- Basic_Computing_Using_Windows
- Cognitive_Psychology_and_Cognitive_Neuroscience
- Movie_Making_Manual
- Organic_Chemistry
- European_History
- Cookbook
- Chess
- Japanese
- Consciousness_Studies
- Chinese_(Mandarin)
- Wikijunior:Solar_System
- Blender_3D:_Noob_to_Pro
- FHSST_Physics
- How_To_Assemble_A_Desktop_PC
- History_of_the_United_States
- High_School_Mathematics_Extensions
- Lucid_Dreaming
- Nanotechnology
- Introduction to Online Convex Optimization
- Structure and Interpretation of Computer Programs
- Convex Optimization: Algorithms and Complexity
- Trustworthy Machine Learning
#### Initial Data Collection and Normalization
Wherever possible, the books are converted to markdown. This formatting is kept intact with downstream tasks in mind (e.g. conversational QA).
The source of the text is prepended to each document to add context; this may also improve the source-attribution and guidance capabilities of models.
### Licensing Information
All books in this collection were previously released with an unambiguous cc-by-4.0 license by the original authors.
|
[
"# Dataset Card for CC OpenBooks",
"## Dataset Description\n\n CC OpenBooks is a curated collection of high quality non-fiction books. All texts are from CC-By-4.0 sources, with no license ambiguity.\nThe documents are normalized to markdown, and care is taken to ensure most formatting (e.g. inline LaTeX) remains intact. Files are manually inspected and cleaned of all defects wherever possible.",
"### Source Data\n\nThe following Openstax collections were used in creating this dataset:\n- Introduction to Anthropology\n- College Success Concise\n- College Success\n- Preparing for College Success\n- Microbiology\n- Chemistry 2e\n- Chemistry: Atoms First 2e\n- Física universitaria volumen 1\n- Física universitaria volumen 2\n- Física universitaria volumen 3\n- Introduction to Business\n- Astronomy 2e\n- Principles of Marketing\n- Psychologia\n- Contemporary Mathematics\n- Statistics\n- World History Volume 1, to 1500\n- World History Volume 2, from 1400\n- Physics\n- Introduction to Political Science\n- Introducción a la estadística empresarial\n- Introducción a la estadística\n- Entrepreneurship\n- Fizyka dla szkół wyższych. Tom 1\n- Fizyka dla szkół wyższych. Tom 2\n- Fizyka dla szkół wyższych. Tom 3\n- Writing Guide with Handbook\n- Biology 2e\n- Biology for AP® Courses\n- Concepts of Biology\n- Introduction to Sociology 3e\n- Life, Liberty, and the Pursuit of Happiness\n- Precálculo 2ed\n- Psychology 2e\n- Playground\n- University Physics Volume 1\n- University Physics Volume 2\n- University Physics Volume 3\n- Principles of Finance\n- U.S. History\n- American Government 3e\n- Anatomy and Physiology 2e\n- Química 2ed\n- Química: Comenzando con los átomos 2ed\n- Elementary Algebra 2e\n- Intermediate Algebra 2e\n- Prealgebra 2e\n- Business Ethics\n- Organizational Behavior\n- Principles of Management\n- Introduction to Intellectual Property\n- Principles of Economics 3e\n- Principles of Macroeconomics 3e\n- Principles of Macroeconomics for AP® Courses 2e\n- Algebra and Trigonometry 2e\n- College Algebra 2e\n- College Algebra with Corequisite Support 2e\n- Precalculus 2e\n- Introduction to Philosophy\n- College Physics 2e\n- College Physics for AP® Courses 2e\n- Mikroekonomia – Podstawy\n\nBooks from other sources:\n- Byte of Python\n- Non-Programmer's Tutorial for Python 3\n- Python Programming\n- Algorithms\n- Communication Theory\n- C Programming\n- C Sharp Programming\n- Formal Logic\n- Haskell\n- How To Assemble A Desktop PC\n- LaTeX\n- OpenSSH\n- Write Yourself a Scheme in 48 Hours\n- X86 Disassembly\n- XML - Managing Data Exchange\n- Bourne Shell Scripting\n- F Sharp Programming\n- Tcl Programming\n- Java Programming\n- MATLAB Programming\n- MySQL\n- Foundations of Computer_Science\n- Introduction to Numerical Methods\n- Think Python\n- Engineering Acoustics\n- Control Systems\n- Sensory Systems\n- Transportation Economics\n- Circuit Theory\n- Communication Systems\n- Spanish\n- Latin\n- English in Use\n- French\n- German\n- High School Mathematics Extensions\n- Linear Algebra\n- Timeless Theorems of Mathematics\n- A Brief Introduction to Engineering Computation with MATLAB\n- Aerodynamics and Aircraft Performance, 3rd edition\n- Acoustics\n- Ada_Programming\n- Algorithms\n- Anatomy_and_Physiology_of_Animals\n- Applications_of_ICT_in_Libraries\n- Arimaa\n- A-level_Computing/AQA\n- Basic_Physics_of_Nuclear_Medicine\n- Blended_Learning_in_K-12\n- Blender_3D:_Noob_to_Pro\n- C_Programming\n- Chess\n- Coaching_Youth_Middle_Distance_Runners\n- Cognitive_Psychology_and_Cognitive_Neuroscience\n- Consciousness_Studies\n- Elements_of_Political_Communication\n- Engineering_Acoustics\n- European_History\n- First_Aid\n- Formal_Logic\n- Fundamentals_of_Transportation\n- Guitar\n- High_School_Mathematics_Extensions\n- Historical_Geology\n- How_To_Assemble_A_Desktop_PC\n- Human_Physiology\n- Introduction_to_Paleoanthropology\n- Introduction_to_Sociology\n- Knowing_Knoppix\n- 
Learning_Theories\n- Linear_Algebra\n- Lucid_Dreaming\n- Managing_Groups_and_Teams\n- Miskito\n- Muggles%27_Guide_to_Harry_Potter\n- New_Zealand_History\n- Physics_Study_Guide\n- Proteomics\n- Radiation_Oncology\n- Social_and_Cultural_Foundations_of_American_Education\n- Special_Relativity\n- Speech-Language_Pathology/Stuttering\n- This_Quantum_World\n- UK_Constitution_and_Government\n- UNDP-APDIP_Books\n- Using_Wikibooks\n- Wikijunior:Solar_System\n- XForms\n- Zine_Making\n- Basic_Computing_Using_Windows\n- Cognitive_Psychology_and_Cognitive_Neuroscience\n- Movie_Making_Manual\n- Organic_Chemistry\n- European_History\n- Cookbook\n- Chess\n- Japanese\n- Consciousness_Studies\n- Chinese_(Mandarin))\n- Wikijunior:Solar_System\n- Blender_3D:_Noob_to_Pro\n- FHSST_Physics\n- How_To_Assemble_A_Desktop_PC\n- History_of_the_United_States\n- High_School_Mathematics_Extensions\n- Lucid_Dreaming\n- Nanotechnology\n- Introduction to Online Convex Optimization\n- Structure and Interpretation of Computer Programs\n- Convex Optimization: Algorithms and Complexity\n- Trustworthy Machine Learning",
"#### Initial Data Collection and Normalization\n\nWherever possible, the books are converted to markdown. This formatting is kept intact with downstream tasks in mind (e.g. conversational QA).\nThe source of the text is prepended to each document to add context, and it is hoped that this also has the potential to improve source attribution and guidance capabilities of models.",
"### Licensing Information\n\nAll books in this collection were previously released with an unambiguous cc-by-4.0 license by the original authors."
] |
[
"TAGS\n#task_categories-text-generation #language-English #language-Spanish #language-Polish #license-cc-by-4.0 #arxiv-1909.05207 #arxiv-1405.4980 #arxiv-2310.08215 #region-us \n",
"# Dataset Card for CC OpenBooks",
"## Dataset Description\n\n CC OpenBooks is a curated collection of high quality non-fiction books. All texts are from CC-By-4.0 sources, with no license ambiguity.\nThe documents are normalized to markdown, and care is taken to ensure most formatting (e.g. inline LaTeX) remains intact. Files are manually inspected and cleaned of all defects wherever possible.",
"### Source Data\n\nThe following Openstax collections were used in creating this dataset:\n- Introduction to Anthropology\n- College Success Concise\n- College Success\n- Preparing for College Success\n- Microbiology\n- Chemistry 2e\n- Chemistry: Atoms First 2e\n- Física universitaria volumen 1\n- Física universitaria volumen 2\n- Física universitaria volumen 3\n- Introduction to Business\n- Astronomy 2e\n- Principles of Marketing\n- Psychologia\n- Contemporary Mathematics\n- Statistics\n- World History Volume 1, to 1500\n- World History Volume 2, from 1400\n- Physics\n- Introduction to Political Science\n- Introducción a la estadística empresarial\n- Introducción a la estadística\n- Entrepreneurship\n- Fizyka dla szkół wyższych. Tom 1\n- Fizyka dla szkół wyższych. Tom 2\n- Fizyka dla szkół wyższych. Tom 3\n- Writing Guide with Handbook\n- Biology 2e\n- Biology for AP® Courses\n- Concepts of Biology\n- Introduction to Sociology 3e\n- Life, Liberty, and the Pursuit of Happiness\n- Precálculo 2ed\n- Psychology 2e\n- Playground\n- University Physics Volume 1\n- University Physics Volume 2\n- University Physics Volume 3\n- Principles of Finance\n- U.S. History\n- American Government 3e\n- Anatomy and Physiology 2e\n- Química 2ed\n- Química: Comenzando con los átomos 2ed\n- Elementary Algebra 2e\n- Intermediate Algebra 2e\n- Prealgebra 2e\n- Business Ethics\n- Organizational Behavior\n- Principles of Management\n- Introduction to Intellectual Property\n- Principles of Economics 3e\n- Principles of Macroeconomics 3e\n- Principles of Macroeconomics for AP® Courses 2e\n- Algebra and Trigonometry 2e\n- College Algebra 2e\n- College Algebra with Corequisite Support 2e\n- Precalculus 2e\n- Introduction to Philosophy\n- College Physics 2e\n- College Physics for AP® Courses 2e\n- Mikroekonomia – Podstawy\n\nBooks from other sources:\n- Byte of Python\n- Non-Programmer's Tutorial for Python 3\n- Python Programming\n- Algorithms\n- Communication Theory\n- C Programming\n- C Sharp Programming\n- Formal Logic\n- Haskell\n- How To Assemble A Desktop PC\n- LaTeX\n- OpenSSH\n- Write Yourself a Scheme in 48 Hours\n- X86 Disassembly\n- XML - Managing Data Exchange\n- Bourne Shell Scripting\n- F Sharp Programming\n- Tcl Programming\n- Java Programming\n- MATLAB Programming\n- MySQL\n- Foundations of Computer_Science\n- Introduction to Numerical Methods\n- Think Python\n- Engineering Acoustics\n- Control Systems\n- Sensory Systems\n- Transportation Economics\n- Circuit Theory\n- Communication Systems\n- Spanish\n- Latin\n- English in Use\n- French\n- German\n- High School Mathematics Extensions\n- Linear Algebra\n- Timeless Theorems of Mathematics\n- A Brief Introduction to Engineering Computation with MATLAB\n- Aerodynamics and Aircraft Performance, 3rd edition\n- Acoustics\n- Ada_Programming\n- Algorithms\n- Anatomy_and_Physiology_of_Animals\n- Applications_of_ICT_in_Libraries\n- Arimaa\n- A-level_Computing/AQA\n- Basic_Physics_of_Nuclear_Medicine\n- Blended_Learning_in_K-12\n- Blender_3D:_Noob_to_Pro\n- C_Programming\n- Chess\n- Coaching_Youth_Middle_Distance_Runners\n- Cognitive_Psychology_and_Cognitive_Neuroscience\n- Consciousness_Studies\n- Elements_of_Political_Communication\n- Engineering_Acoustics\n- European_History\n- First_Aid\n- Formal_Logic\n- Fundamentals_of_Transportation\n- Guitar\n- High_School_Mathematics_Extensions\n- Historical_Geology\n- How_To_Assemble_A_Desktop_PC\n- Human_Physiology\n- Introduction_to_Paleoanthropology\n- Introduction_to_Sociology\n- Knowing_Knoppix\n- 
Learning_Theories\n- Linear_Algebra\n- Lucid_Dreaming\n- Managing_Groups_and_Teams\n- Miskito\n- Muggles%27_Guide_to_Harry_Potter\n- New_Zealand_History\n- Physics_Study_Guide\n- Proteomics\n- Radiation_Oncology\n- Social_and_Cultural_Foundations_of_American_Education\n- Special_Relativity\n- Speech-Language_Pathology/Stuttering\n- This_Quantum_World\n- UK_Constitution_and_Government\n- UNDP-APDIP_Books\n- Using_Wikibooks\n- Wikijunior:Solar_System\n- XForms\n- Zine_Making\n- Basic_Computing_Using_Windows\n- Cognitive_Psychology_and_Cognitive_Neuroscience\n- Movie_Making_Manual\n- Organic_Chemistry\n- European_History\n- Cookbook\n- Chess\n- Japanese\n- Consciousness_Studies\n- Chinese_(Mandarin))\n- Wikijunior:Solar_System\n- Blender_3D:_Noob_to_Pro\n- FHSST_Physics\n- How_To_Assemble_A_Desktop_PC\n- History_of_the_United_States\n- High_School_Mathematics_Extensions\n- Lucid_Dreaming\n- Nanotechnology\n- Introduction to Online Convex Optimization\n- Structure and Interpretation of Computer Programs\n- Convex Optimization: Algorithms and Complexity\n- Trustworthy Machine Learning",
"#### Initial Data Collection and Normalization\n\nWherever possible, the books are converted to markdown. This formatting is kept intact with downstream tasks in mind (e.g. conversational QA).\nThe source of the text is prepended to each document to add context, and it is hoped that this also has the potential to improve source attribution and guidance capabilities of models.",
"### Licensing Information\n\nAll books in this collection were previously released with an unambiguous cc-by-4.0 license by the original authors."
] |
[
68,
9,
92,
1310,
85,
33
] |
[
"passage: TAGS\n#task_categories-text-generation #language-English #language-Spanish #language-Polish #license-cc-by-4.0 #arxiv-1909.05207 #arxiv-1405.4980 #arxiv-2310.08215 #region-us \n# Dataset Card for CC OpenBooks## Dataset Description\n\n CC OpenBooks is a curated collection of high quality non-fiction books. All texts are from CC-By-4.0 sources, with no license ambiguity.\nThe documents are normalized to markdown, and care is taken to ensure most formatting (e.g. inline LaTeX) remains intact. Files are manually inspected and cleaned of all defects wherever possible."
] |
e98710db057035dc61cff88c432b1842b7256d28
|
# Dataset of viola/ビオラ (Pokémon)
This is the dataset of viola/ビオラ (Pokémon), containing 242 images and their tags.
The core tags of this character are `blonde_hair, green_eyes, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 242 | 208.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/viola_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 242 | 136.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/viola_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 506 | 264.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/viola_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 242 | 190.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/viola_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 506 | 350.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/viola_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/viola_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
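The IMG+TXT packages in the table above ship images together with text tag files. A minimal sketch for fetching and inspecting the 800px package might look like the following; the one-`.txt`-per-image layout and plain-text tag format are assumptions based on the package description, not documented facts.

```python
# sketch: download the 800px IMG+TXT package and read the sidecar tag files
import os
import zipfile

from huggingface_hub import hf_hub_download

zip_file = hf_hub_download(
    repo_id='CyberHarem/viola_pokemon',
    repo_type='dataset',
    filename='dataset-800.zip',   # the 800px IMG+TXT package from the table
)
out_dir = 'viola_800'
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)

# print each assumed tag file next to its name
for name in sorted(os.listdir(out_dir)):
    if name.endswith('.txt'):
        with open(os.path.join(out_dir, name), encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```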
## List of Clusters
List of tag clustering results; some recurring outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, crop_top, green_pants, sleeveless_shirt, white_shirt, wristband, open_mouth, tongue, :d, holding_camera, midriff, eyelashes, solo, looking_at_viewer, pokemon_(creature), upper_teeth_only, white_belt |
| 1 | 6 |  |  |  |  |  | 1boy, 1girl, blush, hetero, paizuri, cum_on_breasts, huge_breasts, open_mouth, penis, smile, solo_focus, nipples, ejaculation, looking_at_viewer, shirt_lift |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | crop_top | green_pants | sleeveless_shirt | white_shirt | wristband | open_mouth | tongue | :d | holding_camera | midriff | eyelashes | solo | looking_at_viewer | pokemon_(creature) | upper_teeth_only | white_belt | 1boy | blush | hetero | paizuri | cum_on_breasts | huge_breasts | penis | smile | solo_focus | nipples | ejaculation | shirt_lift |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------|:-------------------|:--------------|:------------|:-------------|:---------|:-----|:-----------------|:----------|:------------|:-------|:--------------------|:---------------------|:-------------------|:-------------|:-------|:--------|:---------|:----------|:-----------------|:---------------|:--------|:--------|:-------------|:----------|:--------------|:-------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/viola_pokemon
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-16T19:49:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T12:45:52+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of viola/ビオラ (Pokémon)
==============================
This is the dataset of viola/ビオラ (Pokémon), containing 242 images and their tags.
The core tags of this character are 'blonde\_hair, green\_eyes, breasts, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some recurring outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |