# Dataset of oyashio/親潮 (Kantai Collection)
This is the dataset of oyashio/親潮 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `black_hair, long_hair, hair_ornament, hairclip, breasts, ribbon, blue_ribbon, neck_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 464.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oyashio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 290.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oyashio_kantaicollection/resolve/main/dataset-800.zip)                 | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 1160 | 623.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oyashio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 500      | 422.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oyashio_kantaicollection/resolve/main/dataset-1200.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1160 | 834.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oyashio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/oyashio_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, black_skirt, black_vest, cowboy_shot, dress_shirt, looking_at_viewer, pleated_skirt, short_sleeves, solo, white_gloves, white_shirt, school_uniform, simple_background, smile, grey_eyes, white_background, blush, brown_eyes |
| 1 | 5 |  |  |  |  |  | 1girl, black_vest, open_mouth, short_sleeves, solo, upper_body, white_background, white_gloves, white_shirt, black_skirt, dress_shirt, school_uniform, simple_background, looking_at_viewer, smile, yellow_eyes, pleated_skirt |
| 2 | 9 |  |  |  |  |  | 1girl, black_skirt, black_socks, black_vest, dress_shirt, kneehighs, pleated_skirt, school_uniform, short_sleeves, solo, white_gloves, white_shirt, grey_eyes, simple_background, white_background, looking_at_viewer, feet_out_of_frame, one-hour_drawing_challenge, twitter_username |
| 3 | 6 |  |  |  |  |  | 1girl, black_skirt, black_socks, black_vest, brown_footwear, kneehighs, pleated_skirt, school_uniform, short_sleeves, solo, white_gloves, white_shirt, dress_shirt, full_body, loafers, grey_eyes, simple_background, white_background |
| 4 | 12 |  |  |  |  |  | 1girl, black_skirt, black_vest, pleated_skirt, short_sleeves, skirt_lift, solo, white_shirt, black_panties, lifted_by_self, cowboy_shot, dress_shirt, white_gloves, school_uniform, blush, grey_eyes |
| 5 | 6 |  |  |  |  |  | 1girl, black_bra, black_skirt, looking_at_viewer, medium_breasts, pleated_skirt, solo, white_gloves, white_shirt, cleavage, dress_shirt, open_shirt, short_sleeves, black_vest, blush, cowboy_shot, grey_eyes, navel |
| 6 | 8 |  |  |  |  |  | 1girl, black_leotard, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, looking_at_viewer, black_pantyhose, wrist_cuffs, blush, bowtie, simple_background, white_background, cleavage, cowboy_shot, grey_eyes, medium_breasts, strapless_leotard, yellow_eyes, dated, gloves, hair_between_eyes, open_mouth, rabbit_tail |
| 7 | 15 |  |  |  |  |  | 1girl, solo, white_background, looking_at_viewer, grey_eyes, simple_background, twitter_username, open_mouth, black_one-piece_swimsuit, competition_swimsuit, cowboy_shot, dated, large_breasts, one-hour_drawing_challenge, blush, collarbone, covered_navel, hair_between_eyes, blue_one-piece_swimsuit, cleavage, highleg_swimsuit, sitting, two-tone_swimsuit, yellow_eyes |
| 8 | 15 |  |  |  |  |  | 1girl, alternate_costume, black_dress, solo, white_sweater, long_sleeves, simple_background, looking_at_viewer, belt, black_pantyhose, smile, white_background, bow, green_eyes |
| 9 | 16 |  |  |  |  |  | 1girl, solo, black_pantyhose, looking_at_viewer, red_dress, detached_collar, grey_eyes, santa_costume, simple_background, white_background, blush, fur-trimmed_dress, open_mouth, black_belt, christmas, cowboy_shot, green_ribbon, red_coat, cleavage, smile |
| 10 | 5 |  |  |  |  |  | 1girl, blue_shirt, policewoman, blue_necktie, blue_skirt, short_sleeves, black_pantyhose, blush, breast_pocket, open_mouth, pencil_skirt, white_gloves, 1boy, alternate_costume, black_necktie, black_skirt, closed_eyes, grey_eyes, handcuffs, hetero, panties_under_pantyhose, police_hat, restrained, solo_focus, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | black_vest | cowboy_shot | dress_shirt | looking_at_viewer | pleated_skirt | short_sleeves | solo | white_gloves | white_shirt | school_uniform | simple_background | smile | grey_eyes | white_background | blush | brown_eyes | open_mouth | upper_body | yellow_eyes | black_socks | kneehighs | feet_out_of_frame | one-hour_drawing_challenge | twitter_username | brown_footwear | full_body | loafers | skirt_lift | black_panties | lifted_by_self | black_bra | medium_breasts | cleavage | open_shirt | navel | black_leotard | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | black_pantyhose | wrist_cuffs | bowtie | strapless_leotard | dated | gloves | hair_between_eyes | rabbit_tail | black_one-piece_swimsuit | competition_swimsuit | large_breasts | collarbone | covered_navel | blue_one-piece_swimsuit | highleg_swimsuit | sitting | two-tone_swimsuit | alternate_costume | black_dress | white_sweater | long_sleeves | belt | bow | green_eyes | red_dress | santa_costume | fur-trimmed_dress | black_belt | christmas | green_ribbon | red_coat | blue_shirt | policewoman | blue_necktie | blue_skirt | breast_pocket | pencil_skirt | 1boy | black_necktie | closed_eyes | handcuffs | hetero | panties_under_pantyhose | police_hat | restrained | solo_focus |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:-------------|:--------------|:--------------|:--------------------|:----------------|:----------------|:-------|:---------------|:--------------|:-----------------|:--------------------|:--------|:------------|:-------------------|:--------|:-------------|:-------------|:-------------|:--------------|:--------------|:------------|:--------------------|:-----------------------------|:-------------------|:-----------------|:------------|:----------|:-------------|:----------------|:-----------------|:------------|:-----------------|:-----------|:-------------|:--------|:----------------|:------------------|:-------------------|:----------------|:--------------|:------------------|:--------------|:---------|:--------------------|:--------|:---------|:--------------------|:--------------|:---------------------------|:-----------------------|:----------------|:-------------|:----------------|:--------------------------|:-------------------|:----------|:--------------------|:--------------------|:--------------|:----------------|:---------------|:-------|:------|:-------------|:------------|:----------------|:--------------------|:-------------|:------------|:---------------|:-----------|:-------------|:--------------|:---------------|:-------------|:----------------|:---------------|:-------|:----------------|:--------------|:------------|:---------|:--------------------------|:-------------|:-------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | X | X | X | X | X | X | X | X | X | X | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | | X | X | X | X | X | X | X | X | X | | X | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | | X | | X | X | X | X | X | X | X | | X | X | | | | | | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 12 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | X | | | X | | X | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 8 |  |  |  |  |  | X | | | X | | X | | | X | | | | X | | X | X | X | | X | | X | | | | | | | | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 15 |  |  |  |  |  | X | | | X | | X | | | X | | | | X | | X | X | X | | X | | X | | | | X | X | | | | | | | | | X | | | | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 15 |  |  |  |  |  | X | | | | | X | | | X | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 16 |  |  |  |  |  | X | | | X | | X | | | X | | | | X | X | X | X | X | | X | | | | | | | | | | | | | | | | X | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | | | | | | X | | X | | | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
Label Names:
{
'business': 0,
'entertainment': 1,
'politics': 2,
'sport': 3,
'tech': 4
}
Dataset: [Kaggle - BBC Full Text Document Classification](https://www.kaggle.com/datasets/shivamkushwaha/bbc-full-text-document-classification/code)
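The label mapping above can be used directly in code; a minimal sketch (the `label_names` dict is copied from the card, the inverted `id2label` map is a common convenience for decoding predictions):

```python
# Label names for the BBC full-text classification dataset (from the card above)
label_names = {
    'business': 0,
    'entertainment': 1,
    'politics': 2,
    'sport': 3,
    'tech': 4,
}

# invert the mapping to decode integer labels back to category names
id2label = {v: k for k, v in label_names.items()}

print(id2label[3])  # → 'sport'
```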
# SURF: A Generalisation Benchmark for GNNs Predicting Fluid Dynamics
SURF is a benchmark designed to test the generalisation of learned graph-based fluid simulators. The benchmark consists of seven independent datasets:
- Base
- Turned
- Topo
- Range
- Dynamic
- Full
- FullFiner
Each dataset is available as a separate *.zip file and consists of at least 1200 2D incompressible fluid flow simulations with 300 timesteps.
The data structure is as follows:
- folder: dataset_name
  - folders: dpx
    - files: sim.npz, triangles.py, constrained_kmeans_20.npy, Simulation_dp1_Timestep_50.png
  - folder: Splits
    - files: train.txt, test.txt, valid.txt
The file sim.npz (numpy archive) contains the result of the simulation for each timestep at each node:
- 'pointcloud': x, y coordinates
- 'VX': velocity in x-direction
- 'VY': velocity in y-direction
- 'PS': static pressure
- 'PG': dynamic pressure
- 'T': temperature
- 'TC': thermal conductivity of fluid
- 'HC': heat capacity of fluid
Each result array has shape (#timesteps, #nodes, 1), e.g. VX.shape=(#timesteps, #nodes, 1).
The file triangles.py contains the mesh connectivity, with triangles.shape=(#timesteps, #elements, 3). Each triangle is defined by its node numbers in counter-clockwise direction.
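A minimal sketch of reading a `sim.npz` archive with the layout described above. Here a tiny synthetic archive stands in for a real simulation; the result-array shapes follow the card ((#timesteps, #nodes, 1)), while the `(#nodes, 2)` shape assumed for `pointcloud` is our guess from "x, y coordinates":

```python
import io

import numpy as np

# Build a tiny stand-in archive with the documented keys.
# In practice you would call np.load('path/to/dpx/sim.npz') instead.
n_timesteps, n_nodes = 300, 50
buf = io.BytesIO()
np.savez(
    buf,
    pointcloud=np.zeros((n_nodes, 2)),        # x, y coordinates (assumed shape)
    VX=np.zeros((n_timesteps, n_nodes, 1)),   # velocity in x-direction
    VY=np.zeros((n_timesteps, n_nodes, 1)),   # velocity in y-direction
    PS=np.zeros((n_timesteps, n_nodes, 1)),   # static pressure
)
buf.seek(0)

sim = np.load(buf)
vx = sim['VX']
print(vx.shape)  # (300, 50, 1) — (#timesteps, #nodes, 1), as documented
```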
# Dataset Card for "Emotion_Recognition_4_llama2_chat_oversampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
<p><strong>Cardio Flex (#1 PREMIUM BLOOD FLOW SUPPORT PILLS):</strong> <a href="https://sites.google.com/view/cardioflexx/home">CardioFlex</a> is an all-natural supplement designed to lower blood pressure and improve cardiovascular health. According to the manufacturer, CardioFlex is the first supplement designed to address the root cause of high blood pressure.</p>
<h2><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><strong>CardioFlex – Official Website Link – Click Here</strong></a></h2>
<p><strong>➥ Product Name – {CardioFlex} (Cardio Flex)<br />➥ Benefits – CardioFlex is designed to address the root cause of high blood pressure!<br />➥ Category – Blood Flow Support Pills<br />➥ Availability – Online<br />➥ Rating – 5.0/5.0 ⭐⭐⭐⭐⭐</strong></p>
<h2><a href="https://www.healthsupplement24x7.com/get-cardioflex"><strong>✅ Click Here To Visit – “OFFICIAL WEBSITE” ✅</strong></a></h2>
<p>Is <a href="https://sites.google.com/view/cardio-flex-pills/home">Cardio Flex</a> the right natural solution to help you safely lower your blood pressure? Are there any potential side effects? Read our full review of CardioFlex to learn everything you need to know about this product before you try it.</p>
<p style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvIvq0pXxsGgRgWbTBa6xWWTS9FFHjOiOAwm5QBhLqB1qUCbNsV6_M3TfG8VU_t5lPA6B0k0CQg3U_s-3Be3l5Q8Wj8BWCEMibjuc5_IvkeKBwoiZrtsDA82kk33zir79fo37r9pdaertdetMB4jY8VaB6SPz2ZgyuP4XsWtl5d_Ye82VEIyvQXGWwz2PW/w640-h420/CardioFlex%209.png" alt="CardioFlex" width="640" height="420" /></a></p>
<h2>What is CardioFlex?</h2>
<p>As briefly mentioned, <a href="https://cardio-flexx.clubeo.com/calendar/2023/08/21/cardio-flex-new-healthy-blood-support-pills-all-you-need-to-know-about-cardio-flex-offer?_ga=2.8403894.2016146275.1692597849-153963928.1692597849">Cardio Flex</a> is an all-natural supplement formulated to support healthy blood pressure levels. Unlike prescription drugs, it uses natural herbal extracts that have been clinically proven to lower blood pressure levels and improve heart health.</p>
<p>According to the manufacturer, <a href="https://cardio-flexx.clubeo.com/page/cardio-flex-dr-warning-is-cardioflex-worth-buying-what-do-customers-say.html">CardioFlex</a> is the first product to directly address the root cause of hypertension, which is why, they claim, it can deliver real results where similar supplements fall short.</p>
<p>According to the manufacturer, by using their product daily, you can experience several benefits, including the following:</p>
<ul>
<li>Stabilize your blood pressure levels to normal levels</li>
<li>Eliminate dangerous cholesterol and plaque from your arteries</li>
<li>Improve your digestion and immune system function</li>
<li>Protect your arteries from weakening and becoming inflamed</li>
<li>Improve your cognitive health and overall blood flow</li>
</ul>
<p><a href="https://cardio-flexx.clubeo.com/page/cardio-flex-1-premium-blood-flow-support-pills-reduce-the-risk-of-sudden-heart-attacks.html">CardioFlex</a> is designed to help anybody better manage their blood pressure, regardless of age, gender, or any other physiological factors. Therefore, it doesn’t matter whether you’re a man in his sixties or a woman in her forties; <a href="https://www.eventcreate.com/e/cardioflexreviews">CardioFlex</a> can help you control your blood pressure levels.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "CARDIO FLEX" IS HERE ORDER NOW</span></a></h2>
<h2>How Does CardioFlex Work?</h2>
<p><a href="https://groups.google.com/g/cardio-flex-pills/c/OWEcJnqpOwM">Cardio Flex</a> claims to be the first blood pressure supplement to help you naturally balance your blood pressure. According to the official website, here is how it works:</p>
<p>According to new research from the Mayo Clinic, a newly identified stress hormone may be one of the main reasons some individuals suffer from high blood pressure while others don’t. In a study of over 450,000 adults, scientists found that a stress hormone known as PLR-15 was upwards of 400% more active in those with hypertension than in those without.</p>
<p>According to the study, PLR-15’s primary effect is that it causes blood vessels to harden, weakening them. This makes them more susceptible to the buildup of plaque and cholesterol, which restricts blood flow, causing blood pressure to skyrocket even further.</p>
<p><a href="https://sketchfab.com/3d-models/cardio-flex-new-2023-reviews-b6219a95f75f4acca52e2d0b0b5bdf9a">CardioFlex</a> is the first supplement to directly go after PLR-15 and keep hormone levels in check. It claims to use proven ingredients to slow limit this production of PLR-15, slowing down the wear and tear on your arteries.</p>
<h2>Ingredients in CardioFlex</h2>
<p>The manufacturer of <a href="https://pdfhost.io/v/WcJkb1dU~_Cardio_Flex_New_Healthy_Blood_Support_Pills_All_You_Need_To_Know_About_CardioFlex_Offer">Cardio Flex</a> used a team of nutritionists and doctors to discover the best natural ingredients to combat high blood pressure levels. Their research led them to formulate CardioFlex with nine key ingredients, which include:</p>
<p><strong>Psyllium powder:</strong> Psyllium husk contains soluble fiber that benefits heart health and digestion. It helps regulate bowel movements by acting as a natural bulk-forming laxative. Several studies have shown psyllium can help regulate cholesterol levels and eliminate LDL cholesterol. It also appears to play a role in blood sugar balance as well.</p>
<p><strong>Acai berry:</strong> Acai berry was a popular “superfood” in the late 2000s that is well regarded for its high levels of antioxidants and anti-inflammatory compounds. Several studies have found acai can help to lower cholesterol levels and eliminate plaque from the arteries. Other studies have found acai has neuroprotective benefits and may boost cognition.</p>
<p><strong>Inulin:</strong> Inulin is a soluble fiber that primarily nourishes gut microbes, eases constipation, and helps the body absorb nutrients more efficiently. It appears to work similarly to psyllium by promoting regular bowel movements. In one study, females who took inulin daily had significant decreases in triglycerides and LDL cholesterol, two factors that affect blood pressure.</p>
<p><strong>Slippery elm bark:</strong> Slippery elm bark is common in parts of Canada and the United States. It is commonly used to combat inflammatory bowel diseases like IBS & Crohn’s disease. CardioFlex claims it can lower PLR-15 levels, which increases blood pressure levels. It may also rejuvenate skin and nail health as well.</p>
<p><strong>Chlorella:</strong> Chlorella is an algae primarily used for detoxification and immune support. Several studies have found that chlorella may reduce cholesterol levels, especially in those with high blood pressure. It also appears to improve blood lipid levels and directly lower blood pressure. It seems to do so by supplying the body with arginine, converted into nitric oxide, and widening blood vessels.</p>
<p><strong>Black walnut:</strong> Black walnuts are a type of nut rich in fiber, omega-3 fatty acids, and several antioxidants. A review of multiple studies found that eating walnuts decreased total and LDL cholesterol. Other studies have found that eating black walnuts improved blood vessel function and reduced plaque buildup in the arteries.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "CARDIO FLEX" IS HERE ORDER NOW</span></a></h2>
<h2 style="text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmMKPypPqRKAhZKLgluQIMO2a1FHjWxYqwsnoBNXA4Gbv6uRkT71u3CllupYE-YbDSrHxbhEj5Gm1kJxR3nEaTYUCWL6nZaJEzpyDrhsI0k-b2RrTiFbOS_w7jMotbGi2tVgxzGGeoGMy096NoXEyvMaBQKQ0F8TcUVQsEK3dtNxM-cNi8p8CLtqKCJo1T/w640-h292/CardioFlex%203.png" alt="" width="640" height="292" border="0" data-original-height="639" data-original-width="1400" /></a></h2>
<h2>Benefits of The Cardio Flex:</h2>
<ul>
<li><a href="https://devfolio.co/@CardioFlex">Cardio Flex</a> regulates blood sugar levels</li>
<li>Improves immune function</li>
<li>Destroys harmful cancer cells</li>
<li>Rejuvenates skin and nails</li>
<li>Reduces the risk of cancer</li>
<li>Improves brain function</li>
<li>Improves blood flow</li>
<li>Reduces the risk of heart disease</li>
</ul>
<h2>Side Effects of CardioFlex – Is it Safe?</h2>
<p><a href="https://cardioflex1.bandcamp.com/track/cardio-flex-2023-new-healthy-blood-pressure-formula-is-cardioflex-right-choice">CardioFlex</a> was formulated by a team of doctors and nutritionists to safely lower your blood pressure to healthier levels. This is why there have not been any reports of anybody experiencing any serious side effects while using this product.</p>
<p>However, this is not to say that side effects cannot occur – only that none have been reported so far. Any supplement can cause minor side effects such as headache, nausea, or indigestion, but the risk of experiencing them is very low.</p>
<p>Finally, if you are on prescription medication or have a serious medical condition, you should speak to your doctor before using this product – especially if you are on medication for high blood pressure.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "CARDIO FLEX" IS HERE ORDER NOW</span></a></h2>
<h2>CardioFlex Pricing & Guarantee</h2>
<p>CardioFlex is among the best natural solutions on the market for combating high blood pressure levels. It has already helped thousands of men and women around the world safely bring their blood pressure levels under control.</p>
<p>If you believe CardioFlex suits you, the best place to order is through the official website. There you will find three different purchasing options to choose from, depending on your individual needs:</p>
<p style="text-align: center;"><img class="lazy" title="CardioFlex Pricing" src="https://imgnew.outlookindia.com/uploadimage/library/free_files/jpg/2_2023_08_18_025815.jpg" alt="CardioFlex Pricing" width="640" height="337" /></p>
<ul>
<li>One bottle: $59 + shipping</li>
<li>Three bottles: $165 total - $55 per bottle + shipping</li>
<li>Six bottles: $246 total - $41 per bottle with free shipping</li>
</ul>
<p>No matter your selected package, you are automatically covered by a 100%, 60-day money-back guarantee. According to the manufacturer, you can receive a full refund if you are dissatisfied with your results, experience any unwanted side effects, or simply don’t like the product.</p>
<p>Simply contact the manufacturer within 60 days, and you’ll receive a full refund within 48 hours of returning the unused bottles – no questions asked.</p>
<h2>CardioFlex Bonuses</h2>
<p>For a limited time, if you purchase a three or six-bottle package, you’ll automatically receive two free eBooks. These eBooks can help you further control your blood pressure levels and ensure hypertension no longer wreaks havoc on your health.</p>
<p><strong>Bonus #1 – The Anti-Anxiety Formula</strong></p>
<p>High stress and anxiety levels can wreak havoc on your blood pressure levels. Using the tips and tricks in the Anti-Anxiety Formula, you’ll effectively learn proven strategies to keep your blood pressure low by managing anxiety, limiting stress, and avoiding depression.</p>
<p><strong>Bonus #2 – Memory Hack</strong></p>
<p>High blood pressure can cause cognitive impairment, especially in older adults. With Memory Hack, you’ll learn how to enhance your memory and improve multiple aspects of brain function. You’ll also learn strategies to remember more, improve your focus, and even how to protect yourself from cognitive decline.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-cardioflex" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "CARDIO FLEX" IS HERE ORDER NOW</span></a></h2>
<h2>Final Recap</h2>
<p>CardioFlex is a convenient, simple solution to help you naturally combat high blood pressure levels. If high blood pressure is causing your health to suffer, then there’s no better natural solution out there than CardioFlex.</p>
<p>Since its inception, it has already helped thousands of men and women regain control of their blood pressure levels. If you’re ready to try the #1 natural supplement for blood pressure control, visit the official CardioFlex website and order your bottles today!</p>
<h3>READ MORE ON OFFICIAL WEBSITE:</h3>
<p><a href="https://cardio-flexx.clubeo.com/calendar/2023/08/21/cardio-flex-new-healthy-blood-support-pills-all-you-need-to-know-about-cardio-flex-offer?_ga=2.8403894.2016146275.1692597849-153963928.1692597849">https://cardio-flexx.clubeo.com/calendar/2023/08/21/cardio-flex-new-healthy-blood-support-pills-all-you-need-to-know-about-cardio-flex-offer</a></p>
<p><a href="https://pdfhost.io/v/WcJkb1dU~_Cardio_Flex_New_Healthy_Blood_Support_Pills_All_You_Need_To_Know_About_CardioFlex_Offer">https://pdfhost.io/v/WcJkb1dU~_Cardio_Flex_New_Healthy_Blood_Support_Pills_All_You_Need_To_Know_About_CardioFlex_Offer</a></p>
<p><a href="https://cardioflex1.bandcamp.com/track/cardio-flex-2023-new-healthy-blood-pressure-formula-is-cardioflex-right-choice">https://cardioflex1.bandcamp.com/track/cardio-flex-2023-new-healthy-blood-pressure-formula-is-cardioflex-right-choice</a></p>
<p><a href="https://sketchfab.com/3d-models/cardio-flex-new-2023-reviews-b6219a95f75f4acca52e2d0b0b5bdf9a">https://sketchfab.com/3d-models/cardio-flex-new-2023-reviews-b6219a95f75f4acca52e2d0b0b5bdf9a</a></p>
<p><a href="https://sites.google.com/view/cardioflexx/home">https://sites.google.com/view/cardioflexx/home</a></p>
<p><a href="https://www.eventcreate.com/e/cardioflexreviews">https://www.eventcreate.com/e/cardioflexreviews</a></p>
<p><a href="https://devfolio.co/@CardioFlex">https://devfolio.co/@CardioFlex</a></p>
<p><a href="https://groups.google.com/g/cardio-flex-pills/c/OWEcJnqpOwM">https://groups.google.com/g/cardio-flex-pills/c/OWEcJnqpOwM</a></p>
<p><a href="https://cardio-flexx.clubeo.com/page/cardio-flex-dr-warning-is-cardioflex-worth-buying-what-do-customers-say.html">https://cardio-flexx.clubeo.com/page/cardio-flex-dr-warning-is-cardioflex-worth-buying-what-do-customers-say.html</a></p>
<p><a href="https://sites.google.com/view/cardio-flex-pills/home">https://sites.google.com/view/cardio-flex-pills/home</a></p>
<p><a href="https://cardio-flexx.clubeo.com/page/cardio-flex-1-premium-blood-flow-support-pills-reduce-the-risk-of-sudden-heart-attacks.html">https://cardio-flexx.clubeo.com/page/cardio-flex-1-premium-blood-flow-support-pills-reduce-the-risk-of-sudden-heart-attacks.html</a></p>
|
CardioFlexReviews/Cardio-Flex-Official-Website
|
[
"region:us"
] |
2023-08-21T12:02:10+00:00
|
{}
|
2023-08-21T12:02:57+00:00
|
[] |
[] |
TAGS
#region-us
|
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
8ad99f828a7b5edd42d06c1fb5ec7d6757e0e88f
|
# Dataset of shikinami/敷波 (Kantai Collection)

This is the dataset of shikinami/敷波 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `brown_hair, ponytail, brown_eyes, short_hair, ribbon, hair_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 398.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikinami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 266.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikinami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1057 | 529.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikinami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 367.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikinami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1057 | 700.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shikinami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
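The IMG+TXT packages above are plain zip archives. The sketch below fetches one and pairs each image with its tags; it assumes the conventional flat layout where every image sits next to a same-named `.txt` file of comma-separated tags (an assumption, since the card does not spell the layout out).

```python
import os
import zipfile


def download_package(filename, extract_dir='shikinami_data'):
    """Fetch one of the packages listed above from the Hub and unzip it."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    zip_file = hf_hub_download(
        repo_id='CyberHarem/shikinami_kantaicollection',
        repo_type='dataset',
        filename=filename,
    )
    os.makedirs(extract_dir, exist_ok=True)
    with zipfile.ZipFile(zip_file, 'r') as zf:
        zf.extractall(extract_dir)
    return extract_dir


def load_pairs(directory):
    """Pair every image with the comma-separated tags in its .txt sidecar."""
    pairs = []
    for name in sorted(os.listdir(directory)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(directory, stem + '.txt')
        if not os.path.exists(txt_path):
            continue  # image without a tag sidecar
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(directory, name), tags))
    return pairs
```

For example, `load_pairs(download_package('dataset-800.zip'))` would return a list of `(image_path, tag_list)` tuples for the 800px package.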
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shikinami_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
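Building on the loop above, a common next step is to tally how often each tag occurs across the dataset. A small sketch follows; it assumes only that iterating `item.meta['tags']` yields tag names, which holds whether the field is a list of tags or a tag-to-confidence mapping.

```python
from collections import Counter


def tag_frequencies(source):
    """Count tag occurrences over an iterable of waifuc-style items."""
    counter = Counter()
    for item in source:
        for tag in item.meta['tags']:  # iterating a dict yields its keys
            counter[tag] += 1
    return counter
```

`tag_frequencies(LocalSource(dataset_dir)).most_common(20)` would then list the twenty most frequent tags, which is a quick way to spot outfit clusters like those tabulated below.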
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, brown_sailor_collar, serafuku, solo, looking_at_viewer, blush, short_sleeves, brown_skirt, pleated_skirt, simple_background, upper_body, white_background, twitter_username |
| 1 | 5 |  |  |  |  |  | 1girl, black_socks, brown_sailor_collar, brown_skirt, kneehighs, pleated_skirt, serafuku, simple_background, solo, white_background, looking_at_viewer, blush, short_sleeves, sitting, open_mouth |
| 2 | 7 |  |  |  |  |  | 1girl, black_sailor_collar, black_skirt, black_socks, serafuku, solo, anchor_symbol, kneehighs, pleated_skirt, looking_at_viewer, white_background, short_sleeves, simple_background, wariza |
| 3 | 9 |  |  |  |  |  | 1girl, black_socks, full_body, kneehighs, machinery, pleated_skirt, serafuku, simple_background, solo, smokestack, white_background, black_sailor_collar, black_skirt, looking_at_viewer, torpedo_launcher, adapted_turret, standing, cannon, rigging, short_sleeves, short_ponytail |
| 4 | 5 |  |  |  |  |  | 1girl, black_socks, brown_sailor_collar, brown_skirt, grey_footwear, kneehighs, pleated_skirt, serafuku, solo, full_body, short_sleeves, anchor_symbol, shoes, blush, open_mouth, outdoors, smile, standing |
| 5 | 5 |  |  |  |  |  | anchor_symbol, black_sailor_collar, kneehighs, pleated_skirt, serafuku, short_sleeves, solo_focus, black_skirt, black_socks, 2girls, long_hair, standing |
| 6 | 7 |  |  |  |  |  | 1girl, black_pantyhose, detached_collar, playboy_bunny, rabbit_ears, small_breasts, solo, strapless_leotard, fake_animal_ears, simple_background, wrist_cuffs, alternate_costume, looking_at_viewer, black_leotard, full_body, grey_background, red_bowtie, red_leotard, sitting, white_background |
| 7 | 9 |  |  |  |  |  | 1girl, solo, cowboy_shot, looking_at_viewer, collarbone, black_one-piece_swimsuit, blue_one-piece_swimsuit, covered_navel, school_swimsuit, small_breasts, standing, blush, gradient_background |
| 8 | 11 |  |  |  |  |  | 1girl, alternate_costume, obi, blush, solo, yukata, looking_at_viewer, upper_body, uchiwa, floral_print |
| 9 | 9 |  |  |  |  |  | 1girl, white_apron, black_dress, enmaided, solo, blush, maid_apron, maid_headdress, simple_background, frilled_apron, looking_at_viewer, open_mouth, puffy_sleeves, short_sleeves, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_sailor_collar | serafuku | solo | looking_at_viewer | blush | short_sleeves | brown_skirt | pleated_skirt | simple_background | upper_body | white_background | twitter_username | black_socks | kneehighs | sitting | open_mouth | black_sailor_collar | black_skirt | anchor_symbol | wariza | full_body | machinery | smokestack | torpedo_launcher | adapted_turret | standing | cannon | rigging | short_ponytail | grey_footwear | shoes | outdoors | smile | solo_focus | 2girls | long_hair | black_pantyhose | detached_collar | playboy_bunny | rabbit_ears | small_breasts | strapless_leotard | fake_animal_ears | wrist_cuffs | alternate_costume | black_leotard | grey_background | red_bowtie | red_leotard | cowboy_shot | collarbone | black_one-piece_swimsuit | blue_one-piece_swimsuit | covered_navel | school_swimsuit | gradient_background | obi | yukata | uchiwa | floral_print | white_apron | black_dress | enmaided | maid_apron | maid_headdress | frilled_apron | puffy_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:-----------|:-------|:--------------------|:--------|:----------------|:--------------|:----------------|:--------------------|:-------------|:-------------------|:-------------------|:--------------|:------------|:----------|:-------------|:----------------------|:--------------|:----------------|:---------|:------------|:------------|:-------------|:-------------------|:-----------------|:-----------|:---------|:----------|:-----------------|:----------------|:--------|:-----------|:--------|:-------------|:---------|:------------|:------------------|:------------------|:----------------|:--------------|:----------------|:--------------------|:-------------------|:--------------|:--------------------|:----------------|:------------------|:-------------|:--------------|:--------------|:-------------|:---------------------------|:--------------------------|:----------------|:------------------|:----------------------|:------|:---------|:---------|:---------------|:--------------|:--------------|:-----------|:-------------|:-----------------|:----------------|:----------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | X | X | | X | | X | X | | X | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | X | X | X | | X | | X | X | | X | | X | X | | | X | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | | | | X | X | | X | | | X | | X | | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | | | X | | | | X | | X | | | | | X | X | | | X | X | X | | | | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | X | X | | | | | X | | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | |
| 8 | 11 |  |  |  |  |  | X | | | X | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | | | X | X | X | X | | | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
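Clusters like the ones above group images by co-occurring tags. A rough sketch of how such outfit clusters can be mined from per-image tag sets, here simply counting which tags co-occur with an anchor tag such as `serafuku`; the sample data is illustrative:

```python
from collections import Counter

def cooccurring_tags(tag_lists, anchor):
    """Count how often each tag appears alongside `anchor` across images."""
    counts = Counter()
    for tags in tag_lists:
        if anchor in tags:
            counts.update(t for t in tags if t != anchor)
    return counts

# illustrative per-image tag sets
images = [
    {'serafuku', 'brown_skirt', 'solo'},
    {'serafuku', 'brown_skirt', 'kneehighs'},
    {'yukata', 'obi'},
]
top = cooccurring_tags(images, 'serafuku').most_common(1)
# top == [('brown_skirt', 2)]
```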
|
CyberHarem/shikinami_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T12:04:46+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T05:33:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shikinami/敷波 (Kantai Collection)
===========================================
This is the dataset of shikinami/敷波 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, ponytail, brown\_eyes, short\_hair, ribbon, hair\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
a06baaf4e369a4f2f254e73ff26b52c3e23800d4
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
astrocoder/Cherokee-English
|
[
"region:us"
] |
2023-08-21T12:11:06+00:00
|
{}
|
2023-08-21T12:11:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6a3335a9ec7909883601cf80a470c6a5f70cdc93
|
# Dataset of akigumo/秋雲 (Kantai Collection)
This is the dataset of akigumo/秋雲 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, brown_hair, ponytail, ribbon, hair_ribbon, green_eyes, mole_under_eye, mole, bow, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 477.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akigumo_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 295.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akigumo_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1098 | 618.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akigumo_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 429.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akigumo_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1098 | 837.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/akigumo_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
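Any of the packages in the table can be fetched the same way by changing `filename`. A sketch; the `package_filename` helper is illustrative, and the actual download requires network access plus the `huggingface_hub` package:

```python
def package_filename(name):
    """Map a package name from the table (e.g. '800', 'stage3-p480-800')
    to its archive filename in the repo."""
    return 'dataset-raw.zip' if name == 'raw' else f'dataset-{name}.zip'

# e.g. package_filename('stage3-p480-1200') -> 'dataset-stage3-p480-1200.zip'

DOWNLOAD = False  # set True to actually fetch (requires network)
if DOWNLOAD:
    from huggingface_hub import hf_hub_download
    zip_file = hf_hub_download(
        repo_id='CyberHarem/akigumo_kantaicollection',
        repo_type='dataset',
        filename=package_filename('800'),
    )
```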
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code.
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/akigumo_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
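The `IMG+TXT` packages pair each image with a same-named `.txt` sidecar of comma-separated tags (this layout is an assumption based on the package type, as is the demo data below). A minimal loader sketch that does not depend on waifuc:

```python
import os
import tempfile

def load_img_txt_pairs(directory):
    """Yield (image_path, tag_list) for each image file that has a
    matching .txt sidecar of comma-separated tags."""
    for name in sorted(os.listdir(directory)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(directory, stem + '.txt')
        if not os.path.exists(txt_path):
            continue
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        yield os.path.join(directory, name), tags

# demo on a temporary directory with one fake image/tag pair
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, '1.png'), 'wb').close()
with open(os.path.join(tmp, '1.txt'), 'w', encoding='utf-8') as f:
    f.write('1girl, solo, school_uniform')
pairs = list(load_img_txt_pairs(tmp))
# pairs holds one entry: ('.../1.png', ['1girl', 'solo', 'school_uniform'])
```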
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, cowboy_shot, grey_pantyhose, school_uniform, solo, white_shirt, dress, long_sleeves, looking_at_viewer, bowtie, simple_background, sketchbook, skirt, grin, pencil, twitter_username |
| 1 | 12 |  |  |  |  |  | 1girl, school_uniform, solo, simple_background, upper_body, white_background, white_shirt, bowtie, long_sleeves, looking_at_viewer, smile, dress |
| 2 | 9 |  |  |  |  |  | 1girl, school_uniform, solo, simple_background, skirt, grey_pantyhose, white_background, white_shirt, open_mouth, long_sleeves, looking_at_viewer, smile, dress |
| 3 | 11 |  |  |  |  |  | 1girl, school_uniform, solo, pantyhose, skirt, looking_at_viewer, grin, boots |
| 4 | 24 |  |  |  |  |  | school_uniform, 1girl, solo, bowtie, blazer, looking_at_viewer, partially_fingerless_gloves, simple_background, single_glove, smile, stylus, black_gloves, thighhighs, white_background, drawing_tablet, pleated_dress, one_eye_closed, skirt |
| 5 | 5 |  |  |  |  |  | 1girl, blazer, grey_thighhighs, lace-up_boots, school_uniform, solo, full_body, standing, pleated_dress, simple_background, bowtie, grey_dress, holding, twitter_username |
| 6 | 9 |  |  |  |  |  | 1girl, orange_skirt, simple_background, solo, white_background, green_sweater, looking_at_viewer, official_alternate_costume, smile, one-hour_drawing_challenge |
| 7 | 7 |  |  |  |  |  | 1girl, simple_background, solo, blue_panties, camisole, looking_at_viewer, sitting, stylus, white_background, barefoot, medium_breasts, strap_slip |
| 8 | 6 |  |  |  |  |  | 1girl, solo, underwear_only, barefoot, sitting, feet, looking_at_viewer, small_breasts, smile, toes, black_ribbon, blue_bra, blue_panties, cleavage, full_body |
| 9 | 10 |  |  |  |  |  | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, simple_background, white_background, wrist_cuffs, looking_at_viewer, grey_pantyhose, rabbit_tail, smile, purple_leotard, blue_bowtie, blush, cowboy_shot, adapted_costume, cleavage, covered_navel, high_heels, medium_breasts, small_breasts |
| 10 | 18 |  |  |  |  |  | blue_jacket, official_alternate_costume, race_queen, 1girl, midriff, navel, blue_skirt, cropped_jacket, solo, partially_fingerless_gloves, thighhighs, black_gloves, medium_breasts, looking_at_viewer, pleated_skirt, shorts_under_skirt, cowboy_shot, standing, thigh_boots, white_shorts |
| 11 | 5 |  |  |  |  |  | 1boy, 1girl, blush, censored, hetero, medium_breasts, nipples, open_mouth, solo_focus, girl_on_top, penis, thighhighs, vaginal, white_shirt, cum, long_sleeves, navel, open_shirt, bangs, blue_panties, bowtie, clothed_female_nude_male, clothed_sex, dark-skinned_male, interracial, large_breasts, one_eye_closed, panties_aside, squatting_cowgirl_position |
| 12 | 6 |  |  |  |  |  | 1girl, blush, 1boy, hetero, large_breasts, looking_at_viewer, nipples, nude, solo_focus, grin, indoors, mosaic_censoring, paizuri, penis, simple_background, sweat, white_background |
| 13 | 5 |  |  |  |  |  | 1girl, after_sex, anus, bar_censor, blush, bondage, chair, cum_in_pussy, cumdrip, marker, money, navel, nipples, nude, open_mouth, restrained, small_breasts, spread_legs, table, tally, tape, timestamp, 1boy, bukkake, chain, cum_on_breasts, cum_on_hair, facial, hetero, open_book, penis, solo_focus, trembling, wavy_mouth, after_vaginal, tongue_out, after_anal, cum_in_ass, ejaculation |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cowboy_shot | grey_pantyhose | school_uniform | solo | white_shirt | dress | long_sleeves | looking_at_viewer | bowtie | simple_background | sketchbook | skirt | grin | pencil | twitter_username | upper_body | white_background | smile | open_mouth | pantyhose | boots | blazer | partially_fingerless_gloves | single_glove | stylus | black_gloves | thighhighs | drawing_tablet | pleated_dress | one_eye_closed | grey_thighhighs | lace-up_boots | full_body | standing | grey_dress | holding | orange_skirt | green_sweater | official_alternate_costume | one-hour_drawing_challenge | blue_panties | camisole | sitting | barefoot | medium_breasts | strap_slip | underwear_only | feet | small_breasts | toes | black_ribbon | blue_bra | cleavage | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | rabbit_tail | purple_leotard | blue_bowtie | blush | adapted_costume | covered_navel | high_heels | blue_jacket | race_queen | midriff | navel | blue_skirt | cropped_jacket | pleated_skirt | shorts_under_skirt | thigh_boots | white_shorts | 1boy | censored | hetero | nipples | solo_focus | girl_on_top | penis | vaginal | cum | open_shirt | bangs | clothed_female_nude_male | clothed_sex | dark-skinned_male | interracial | large_breasts | panties_aside | squatting_cowgirl_position | nude | indoors | mosaic_censoring | paizuri | sweat | after_sex | anus | bar_censor | bondage | chair | cum_in_pussy | cumdrip | marker | money | restrained | spread_legs | table | tally | tape | timestamp | bukkake | chain | cum_on_breasts | cum_on_hair | facial | open_book | trembling | wavy_mouth | after_vaginal | tongue_out | after_anal | cum_in_ass | ejaculation |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:-----------------|:-----------------|:-------|:--------------|:--------|:---------------|:--------------------|:---------|:--------------------|:-------------|:--------|:-------|:---------|:-------------------|:-------------|:-------------------|:--------|:-------------|:------------|:--------|:---------|:------------------------------|:---------------|:---------|:---------------|:-------------|:-----------------|:----------------|:-----------------|:------------------|:----------------|:------------|:-----------|:-------------|:----------|:---------------|:----------------|:-----------------------------|:-----------------------------|:---------------|:-----------|:----------|:-----------|:-----------------|:-------------|:-----------------|:-------|:----------------|:-------|:---------------|:-----------|:-----------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:--------------|:-----------------|:--------------|:--------|:------------------|:----------------|:-------------|:--------------|:-------------|:----------|:--------|:-------------|:-----------------|:----------------|:---------------------|:--------------|:---------------|:-------|:-----------|:---------|:----------|:-------------|:--------------|:--------|:----------|:------|:-------------|:--------|:---------------------------|:--------------|:--------------------|:--------------|:----------------|:----------------|:-----------------------------|:-------|:----------|:-------------------|:----------|:--------|:------------|:-------|:-------------|:----------|:--------|:---------------|:----------|:---------|:--------|:-------------|:--------------|:--------|:--------|:-------|:------------|:----------|:--------|:-----------------|:--------------|:---------|:------------|:------------|:-------------|:----------------|:-------------|:-------------|:-------------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | | | X | X | X | X | X | X | X | X | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | | X | X | X | X | X | X | X | | X | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | | X | X | | | | X | | | | X | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 24 |  |  |  |  |  | X | | | X | X | | | | X | X | X | | X | | | | | X | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | X | X | | | | | X | X | | | | | X | | | | | | | X | | | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | | | | X | | | | X | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | | X | | | | X | | X | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | | | X | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | X | | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | X | X | | X | | | | X | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 18 |  |  |  |  |  | X | X | | | X | | | | X | | | | | | | | | | | | | | | X | | | X | X | | | | | | | X | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 5 |  |  |  |  |  | X | | | | | X | | X | | X | | | | | | | | | | X | | | | | | | | X | | | X | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 6 |  |  |  |  |  | X | | | | | | | | X | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | X | X | X | | X | | | | | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 13 | 5 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | X | | X | X | X | | X | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/akigumo_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T12:16:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T09:19:28+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of akigumo/秋雲 (Kantai Collection)
=========================================
This is the dataset of akigumo/秋雲 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'long\_hair, brown\_hair, ponytail, ribbon, hair\_ribbon, green\_eyes, mole\_under\_eye, mole, bow, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
e823a8a8681a3c42cfd5621e664c452c0ddff2bb
|
# Dataset Card for "AA_ApplicationDistilRoBERTa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/AA_ApplicationDistilRoBERTa
|
[
"region:us"
] |
2023-08-21T12:17:35+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, 
{"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, {"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": 
"106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": 
"160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": "190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": 
"214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": 
"268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": "286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": 
"322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": 
"376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": "382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": 
"430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": "478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": 
"484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": 
"538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": "574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": 
"592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": 
"646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": "670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": 
"700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": 
"754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": "766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80318780.21618997, "num_examples": 26057}, {"name": "test", "num_bytes": 26774087.073587257, "num_examples": 8686}], "download_size": 147219122, "dataset_size": 107092867.28977722}}
|
2023-08-21T18:49:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AA_ApplicationDistilRoBERTa"
More Information needed
|
[
"# Dataset Card for \"AA_ApplicationDistilRoBERTa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AA_ApplicationDistilRoBERTa\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AA_ApplicationDistilRoBERTa\"\n\nMore Information needed"
] |
9ae995ce9853da7df5448acd7098063449a7ac78
|
### A merged dataset...
### Open-Platypus & Alpaca Data
|
FinchResearch/OpenPlatypus-Alpaca
|
[
"size_categories:10K<n<100K",
"license:apache-2.0",
"region:us"
] |
2023-08-21T12:31:52+00:00
|
{"license": "apache-2.0", "size_categories": ["10K<n<100K"]}
|
2023-08-29T12:53:43+00:00
|
[] |
[] |
TAGS
#size_categories-10K<n<100K #license-apache-2.0 #region-us
|
### A merged dataset...
### Open-Platypus & Alpaca Data
|
[
"### A merged dataset...",
"### Open-Platypus & Alpaca Data"
] |
[
"TAGS\n#size_categories-10K<n<100K #license-apache-2.0 #region-us \n",
"### A merged dataset...",
"### Open-Platypus & Alpaca Data"
] |
[
26,
8,
11
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #license-apache-2.0 #region-us \n### A merged dataset...### Open-Platypus & Alpaca Data"
] |
9c90fc491aece1e84d929646d289252e68461544
|
# Dataset Card for "italian-dataset-helsinki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thomasavare/italian-dataset-helsinki
|
[
"region:us"
] |
2023-08-21T12:38:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "english", "dtype": "string"}, {"name": "italian", "dtype": "string"}, {"name": "Class", "dtype": "string"}, {"name": "Class_index", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 61402, "num_examples": 500}], "download_size": 22595, "dataset_size": 61402}}
|
2023-08-21T12:38:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "italian-dataset-helsinki"
More Information needed
|
[
"# Dataset Card for \"italian-dataset-helsinki\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"italian-dataset-helsinki\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"italian-dataset-helsinki\"\n\nMore Information needed"
] |
85c3f971ad710ade3f80f8f8aac2af8c3a3d807d
|
# Dataset Card for "sick-br"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loremipsum3658/sick-br
|
[
"region:us"
] |
2023-08-21T12:46:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "pair_ID", "dtype": "int64"}, {"name": "sentence_A", "dtype": "string"}, {"name": "sentence_B", "dtype": "string"}, {"name": "entailment_label", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float64"}, {"name": "entailment_AB", "dtype": "string"}, {"name": "entailment_BA", "dtype": "string"}, {"name": "sentence_A_original", "dtype": "string"}, {"name": "sentence_B_original", "dtype": "string"}, {"name": "sentence_A_dataset", "dtype": "string"}, {"name": "sentence_B_dataset", "dtype": "string"}, {"name": "SemEval_set", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2196243, "num_examples": 6887}, {"name": "test", "num_bytes": 470001, "num_examples": 1477}, {"name": "validation", "num_bytes": 470022, "num_examples": 1476}], "download_size": 1217241, "dataset_size": 3136266}}
|
2023-08-21T12:46:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sick-br"
More Information needed
|
[
"# Dataset Card for \"sick-br\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sick-br\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sick-br\"\n\nMore Information needed"
] |
907546ae5c25ba0aa1b90bb30b6d7cb4a0552a9d
|
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Explicit content detection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
1189 news articles classified into two categories: "Explicit" if the article contains explicit content and "Not_Explicit" if not.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns, Article and Category.
The Article column contains the news article text, and the Category column contains the class each article belongs to, i.e. whether it contains explicit content or not.
## Source Data
The dataset is queried from the Otherweb database
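The two-column layout described above can be illustrated with a minimal sketch. The rows below are hypothetical placeholders (not real entries from the dataset), and pandas is used only to mirror the Article/Category structure:

```python
import pandas as pd

# Hypothetical rows mirroring the dataset's two-column structure:
# Article holds the news text, Category holds the class label.
df = pd.DataFrame(
    {
        "Article": [
            "A family-friendly review of a new city park.",
            "An article containing graphic material.",
        ],
        "Category": ["Not_Explicit", "Explicit"],
    }
)

# Every article carries exactly one of the two class labels.
assert set(df["Category"]).issubset({"Explicit", "Not_Explicit"})
print(df["Category"].value_counts().to_dict())
```

A multi-class text classifier would be trained on Article as input and Category as the target.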
|
valurank/Explicit_content
|
[
"task_categories:text-classification",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] |
2023-08-21T12:52:47+00:00
|
{"license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
|
2023-08-21T13:14:35+00:00
|
[] |
[] |
TAGS
#task_categories-text-classification #size_categories-1K<n<10K #license-other #region-us
|
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for Explicit content detection
## Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Source Data
## Dataset Description
1189 news articles, each classified into one of two categories: "Explicit" if the article contains explicit content and "Not_Explicit" if not.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns: Article and Category.
The Article column contains the news article, and the Category column contains the class each article belongs to, i.e. whether or not it contains explicit content.
## Source Data
The dataset is queried from the Otherweb database
|
[
"# Dataset Card for Explicit content detection",
"## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data",
"## Dataset Description\n\n1189 News Articles classified into different categories namely: \"Explicit\" if the article contains explicit content and \"Not_Explicit\" if not.",
"## Languages\n\nThe text in the dataset is in English",
"## Dataset Structure\n\nThe dataset consists of two columns namely Article and Category.\nThe Article column consists of the news article and the Category column consists of the class each article belongs to whether it contains explicit content or not",
"## Source Data\n\nThe dataset is queried from the Otherweb database"
] |
[
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #license-other #region-us \n",
"# Dataset Card for Explicit content detection",
"## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data",
"## Dataset Description\n\n1189 News Articles classified into different categories namely: \"Explicit\" if the article contains explicit content and \"Not_Explicit\" if not.",
"## Languages\n\nThe text in the dataset is in English",
"## Dataset Structure\n\nThe dataset consists of two columns namely Article and Category.\nThe Article column consists of the news article and the Category column consists of the class each article belongs to whether it contains explicit content or not",
"## Source Data\n\nThe dataset is queried from the Otherweb database"
] |
[
34,
10,
21,
41,
12,
59,
14
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #license-other #region-us \n# Dataset Card for Explicit content detection## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data## Dataset Description\n\n1189 News Articles classified into different categories namely: \"Explicit\" if the article contains explicit content and \"Not_Explicit\" if not.## Languages\n\nThe text in the dataset is in English## Dataset Structure\n\nThe dataset consists of two columns namely Article and Category.\nThe Article column consists of the news article and the Category column consists of the class each article belongs to whether it contains explicit content or not## Source Data\n\nThe dataset is queried from the Otherweb database"
] |
b3aa22617f2bde1b87c06b55583226e06c39179d
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
qtoino/form_matcher_demo_flagged
|
[
"region:us"
] |
2023-08-21T13:02:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
|
2023-10-31T14:34:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9446db7b4341e2c8e31dfd8ba63c0628b1d5bbcd
|
# Dataset Card for "Whopper"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
JorangHorse/Whopper
|
[
"region:us"
] |
2023-08-21T13:06:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63071092.0, "num_examples": 153}, {"name": "test", "num_bytes": 14749545.0, "num_examples": 34}], "download_size": 42180904, "dataset_size": 77820637.0}}
|
2023-08-21T14:12:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Whopper"
More Information needed
|
[
"# Dataset Card for \"Whopper\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Whopper\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Whopper\"\n\nMore Information needed"
] |
d311674e604a3df202dd021f0cd986a65be0f0fd
|
# Dataset of arashi/嵐 (Kantai Collection)
This is the dataset of arashi/嵐 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `red_hair, ahoge, messy_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 418.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 291.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1065 | 586.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 389.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1065 | 732.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/arashi_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, blush, 1boy, hetero, open_mouth, nipples, navel, nude, simple_background, thighhighs, white_gloves, penis, pussy, sweat, bar_censor, solo_focus, sex, white_background, medium_breasts, small_breasts, spread_legs, vaginal, long_hair, closed_eyes, shirt |
| 1 | 35 |  |  |  |  |  | 1girl, black_vest, pleated_skirt, school_uniform, solo, white_shirt, black_skirt, white_gloves, short_sleeves, red_neckerchief, black_thighhighs, dress_shirt, looking_at_viewer, simple_background, blouse, white_background, cowboy_shot, grey_eyes, smile |
| 2 | 5 |  |  |  |  |  | 1girl, black_skirt, black_vest, brown_footwear, loafers, pleated_skirt, red_neckerchief, school_uniform, short_sleeves, simple_background, solo, white_background, white_gloves, white_shirt, black_thighhighs, full_body, red_ascot, blouse, dress_shirt, grey_thighhighs, standing, grey_eyes, grin |
| 3 | 8 |  |  |  |  |  | 1girl, black_vest, looking_at_viewer, school_uniform, short_sleeves, solo, upper_body, white_gloves, white_shirt, blouse, dress_shirt, purple_eyes, red_neckerchief, simple_background, hair_between_eyes, long_hair, white_background, grin, open_mouth, red_background |
| 4 | 14 |  |  |  |  |  | 1girl, fake_animal_ears, playboy_bunny, rabbit_ears, detached_collar, solo, strapless_leotard, wrist_cuffs, black_leotard, black_pantyhose, blush, looking_at_viewer, medium_breasts, simple_background, alternate_costume, purple_eyes, cleavage, cowboy_shot, white_background, black_bowtie, medium_hair, rabbit_tail |
| 5 | 5 |  |  |  |  |  | 1girl, detached_collar, simple_background, solo, white_background, blush, enmaided, maid_bikini, maid_headdress, navel, open_mouth, waist_apron, white_gloves, black_thighhighs, cowboy_shot, frills, medium_hair, small_breasts, white_apron, ascot, black_bikini, black_skirt, brown_eyes, elbow_gloves, looking_at_viewer, medium_breasts, purple_eyes, twitter_username |
| 6 | 5 |  |  |  |  |  | 1girl, cow_ears, cow_horns, cow_print, fake_animal_ears, blush, elbow_gloves, open_mouth, white_gloves, alternate_costume, cowbell, fake_horns, hair_between_eyes, long_hair, simple_background, small_breasts, sweat, white_bikini, 1boy, bed_sheet, black_eyes, brown_eyes, dated, hetero, looking_at_viewer, medium_breasts, neck_bell, on_side, one-hour_drawing_challenge, ponytail, solo_focus, white_background, white_thighhighs |
| 7 | 23 |  |  |  |  |  | 1girl, solo, pink_kimono, obi, yukata, looking_at_viewer, alternate_costume, blush, smile, medium_hair, upper_body, holding, wide_sleeves |
| 8 | 7 |  |  |  |  |  | 1girl, small_breasts, solo, hair_between_eyes, looking_at_viewer, simple_background, white_background, navel, open_mouth, twitter_username, underwear_only, collarbone, grey_eyes, long_hair, bare_shoulders, black_bra, black_panties, spoken_blush, white_gloves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | 1boy | hetero | open_mouth | nipples | navel | nude | simple_background | thighhighs | white_gloves | penis | pussy | sweat | bar_censor | solo_focus | sex | white_background | medium_breasts | small_breasts | spread_legs | vaginal | long_hair | closed_eyes | shirt | black_vest | pleated_skirt | school_uniform | solo | white_shirt | black_skirt | short_sleeves | red_neckerchief | black_thighhighs | dress_shirt | looking_at_viewer | blouse | cowboy_shot | grey_eyes | smile | brown_footwear | loafers | full_body | red_ascot | grey_thighhighs | standing | grin | upper_body | purple_eyes | hair_between_eyes | red_background | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | strapless_leotard | wrist_cuffs | black_leotard | black_pantyhose | alternate_costume | cleavage | black_bowtie | medium_hair | rabbit_tail | enmaided | maid_bikini | maid_headdress | waist_apron | frills | white_apron | ascot | black_bikini | brown_eyes | elbow_gloves | twitter_username | cow_ears | cow_horns | cow_print | cowbell | fake_horns | white_bikini | bed_sheet | black_eyes | dated | neck_bell | on_side | one-hour_drawing_challenge | ponytail | white_thighhighs | pink_kimono | obi | yukata | holding | wide_sleeves | underwear_only | collarbone | bare_shoulders | black_bra | black_panties | spoken_blush |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:---------|:-------------|:----------|:--------|:-------|:--------------------|:-------------|:---------------|:--------|:--------|:--------|:-------------|:-------------|:------|:-------------------|:-----------------|:----------------|:--------------|:----------|:------------|:--------------|:--------|:-------------|:----------------|:-----------------|:-------|:--------------|:--------------|:----------------|:------------------|:-------------------|:--------------|:--------------------|:---------|:--------------|:------------|:--------|:-----------------|:----------|:------------|:------------|:------------------|:-----------|:-------|:-------------|:--------------|:--------------------|:-----------------|:-------------------|:----------------|:--------------|:------------------|:--------------------|:--------------|:----------------|:------------------|:--------------------|:-----------|:---------------|:--------------|:--------------|:-----------|:--------------|:-----------------|:--------------|:---------|:--------------|:--------|:---------------|:-------------|:---------------|:-------------------|:-----------|:------------|:------------|:----------|:-------------|:---------------|:------------|:-------------|:--------|:------------|:----------|:-----------------------------|:-----------|:-------------------|:--------------|:------|:---------|:----------|:---------------|:-----------------|:-------------|:-----------------|:------------|:----------------|:---------------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 35 |  |  |  |  |  | X | | | | | | | | X | | X | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | | | | | | X | | X | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | X | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | | | X | | | | X | | X | | | | | | | X | | | | | X | | | X | | X | X | X | | X | X | | X | X | X | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | | | | | | | X | | | | | | | | | X | X | | | | | | | | | | X | | | | | | | X | | X | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | | X | | X | | X | | X | | | | | | | X | X | X | | | | | | | | | X | | X | | | X | | X | | X | | | | | | | | | | | X | | | | | | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | X | X | X | | | | X | | X | | | X | | X | | X | X | X | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | X | | | | | | | | X | | | | | | | | | | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 7 | 23 |  |  |  |  |  | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | X | | | | | | | | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | |
| 8 | 7 |  |  |  |  |  | X | | | | X | | X | | X | | X | | | | | | | X | | X | | | X | | | | | | X | | | | | | | X | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
CyberHarem/arashi_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T13:13:05+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T16:15:21+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of arashi/嵐 (Kantai Collection)
=======================================
This is the dataset of arashi/嵐 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'red\_hair, ahoge, messy\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
13fdbd598ab8d15b8bd762916cd590c5eb283c77
|
# Dataset Card for "datasetfortrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linges0103/datasetfortrain
|
[
"region:us"
] |
2023-08-21T13:27:38+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42812, "num_examples": 59}], "download_size": 22507, "dataset_size": 42812}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-21T15:43:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "datasetfortrain"
More Information needed
|
[
"# Dataset Card for \"datasetfortrain\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"datasetfortrain\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"datasetfortrain\"\n\nMore Information needed"
] |
f4ff6fdf69de9c263aa6344b42de1efb43388080
|
fork from [YeungNLP/firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
We collected 23 common Chinese datasets. For each task, several instruction templates were written by hand to ensure the quality and diversity of the data, giving 1.15 million samples in total. The data distribution is shown in the figure below:

Each record has the following format, containing the task type, input, and target output:
```json
[
{
"instruction": "ClassicalChinese",
"input": "将下面句子翻译成现代文:\n石中央又生一树,高百余尺,条干偃阴为五色,翠叶如盘,花径尺余,色深碧,蕊深红,异香成烟,著物霏霏。",
"output": "大石的中央长着一棵树,一百多尺高,枝干是彩色的,树叶有盘子那样大,花的直径有一尺宽,花瓣深蓝色,花中飘出奇异的香气笼罩着周围,如烟似雾。",
"history":""
}
]
```
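One common convention — an assumption for illustration, not necessarily Firefly's own training code — turns records of this shape into (prompt, response) pairs by concatenating the task type with the input:

```python
import json

# Parse records of the shape shown above and build (prompt, response)
# pairs. The record text is a placeholder, not a real Firefly sample.
record_json = '''
[
  {
    "instruction": "ClassicalChinese",
    "input": "translate this classical passage ...",
    "output": "the modern-Chinese rendering ...",
    "history": ""
  }
]
'''

def to_pairs(records):
    """Prefix each input with its task type; pair it with the target output."""
    return [(f"[{r['instruction']}] {r['input']}", r["output"]) for r in records]

pairs = to_pairs(json.loads(record_json))
print(pairs[0][0])  # [ClassicalChinese] translate this classical passage ...
```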
The token-length distribution of the training set is shown in the figure below; the vast majority of samples are shorter than 600 tokens:

|
ticoAg/firefly-train-1.1M
|
[
"region:us"
] |
2023-08-21T13:40:42+00:00
|
{}
|
2023-08-23T11:49:12+00:00
|
[] |
[] |
TAGS
#region-us
|
fork from YeungNLP/firefly-train-1.1M
We collected 23 common Chinese datasets. For each task, several instruction templates were written by hand to ensure the quality and diversity of the data, giving 1.15 million samples in total. The data distribution is shown in the figure below:
!task_distribution
Each record has the following format, containing the task type, input, and target output:
The token-length distribution of the training set is shown in the figure below; the vast majority of samples are shorter than 600 tokens:
!len_distribution.png
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
cbb2790527334fb3e2074d472f92f27134379f4e
|
# orange_sum_fr_prompt_summarization
## Summary
**orange_sum_fr_prompt_summarization** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **683,228** rows that can be used for a summarization task.
The original data (without prompts) comes from the dataset [orange_sum](https://huggingface.co/datasets/orange_sum) by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
28 prompts were created for this dataset. The approach is to propose each prompt in the infinitive, in the informal tutoiement form, and in the formal vouvoiement form.
```
'Résumer le texte suivant : "'+document+'"',
'Résume le texte suivant : "'+document+'"',
'Résumez le texte suivant : "'+document+'"',
'Résumer le texte suivant en quelques mots : "'+document+'"',
'Résume le texte suivant en quelques mots : "'+document+'"',
'Résumez le texte suivant en quelques mots : "'+document+'"',
"Condenser le texte à l'essentiel :" +document,
"Condense le texte à l'essentiel :" +document,
"Condensez le texte à l'essentiel :" +document,
'"'+document+' Rédiger un résumé du texte ci-dessus :',
'"'+document+' Rédige un résumé du texte ci-dessus :',
'"'+document+' Rédigez un résumé du texte ci-dessus :',
'Premièrement, lire le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédiger un court résumé.',
'Premièrement, lis le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédige un court résumé.',
'Premièrement, lisez le texte ci-dessous. \n\n "'+document+'"\n\n Maintenant, rédigez un court résumé.',
'Article : "'+document+'"/n Résumé : ',
'"'+document+' Comment reformuler cela en quelques mots ?',
'"'+document+' Comment peux-tu reformuler cela en quelques mots ?',
'"'+document+' Comment pouvez-vous reformuler cela en quelques mots ?',
'Résumer ce document : "'+document+'" Résumé :',
'Résume ce document : "'+document+'" Résumé :',
'Résumez ce document : "'+document+'" Résumé :',
'"'+document+' Compte tenu du document ci-dessus, écrire une phrase pour le résumer :',
'"'+document+' Compte tenu du document ci-dessus, écris une phrase pour le résumer :',
'"'+document+' Compte tenu du document ci-dessus, écrivez une phrase pour le résumer :',
'"'+document+' Rédiger un résumé du texte ci-dessus : ',
'"'+document+' Rédige un résumé du texte ci-dessus : ',
'"'+document+' Rédigez un résumé du texte ci-dessus : '
```
### Features used in the prompts
In the prompt list above, `document` and `targets` have been constructed from:
```
orange_sum = load_dataset('orange_sum','abstract')
document = orange_sum['train'][i]['text']
targets = orange_sum['train'][i]['summary']
```
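The construction above can be sketched as follows — the document/summary pair below is an invented stand-in for `orange_sum['train'][i]['text']` and `orange_sum['train'][i]['summary']`:

```python
# Apply two of the 28 templates listed above to one document/summary
# pair, producing xP3-style (inputs, targets) rows. The texts are
# invented placeholders for real orange_sum examples.

document = "Le gouvernement a annoncé un nouveau plan pour les transports."
summary = "Annonce d'un nouveau plan pour les transports."

templates = [
    lambda d: 'Résumer le texte suivant : "' + d + '"',
    lambda d: "Condense le texte à l'essentiel :" + d,
]

# Each template yields one row with the same target summary:
rows = [{"inputs": t(document), "targets": summary} for t in templates]
for row in rows:
    print(row["inputs"])
```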
# Splits
- `train` with 599,228 samples
- `valid` with 42,000 samples
- `test` with 42,000 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/orange_sum_fr_prompt_summarization")
```
# Citation
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0
|
CATIE-AQ/orange_sum_fr_prompt_summarization
|
[
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orange_sum",
"language:fr",
"license:cc-by-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:45:07+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["orange_sum"], "task_categories": ["summarization"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:24:23+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-orange_sum #language-French #license-cc-by-sa-4.0 #DFP #french prompts #region-us
|
# orange_sum_fr_prompt_summarization
## Summary
orange_sum_fr_prompt_summarization is a subset of the Dataset of French Prompts (DFP).
It contains 683,228 rows that can be used for a summarization task.
The original data (without prompts) comes from the dataset orange_sum by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
28 prompts were created for this dataset. The approach is to propose each prompt in the infinitive, in the informal tutoiement form, and in the formal vouvoiement form.
### Features used in the prompts
In the prompt list above, 'document' and 'targets' have been constructed from:
# Splits
- 'train' with 599,228 samples
- 'valid' with 42,000 samples
- 'test' with 42,000 samples
# How to use?
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0
|
# orange_sum_fr_prompt_text_generation_from_an_article
## Summary
**orange_sum_fr_prompt_text_generation_from_an_article** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **539,400** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [orange_sum](https://huggingface.co/datasets/orange_sum) by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is declined in three forms: the impersonal infinitive, the informal tu form (tutoiement) and the formal vous form (vouvoiement).
```
'"'+document+'"\n Continuer le texte sur 1000 caractères maximum :',
'"'+document+'"\n Continue le texte sur 1000 caractères maximum :',
'"'+document+'"\n Continuez le texte sur 1000 caractères maximum :',
'"'+document+'"\n Poursuivre le texte sur 1000 caractères maximum :',
'"'+document+'"\n Poursuis le texte sur 1000 caractères maximum :',
'"'+document+'"\n Poursuivez le texte sur 1000 caractères maximum :',
'"'+document+'"\n Prolonger le texte sur 1000 caractères maximum :',
'"'+document+'"\n Prolonge le texte sur 1000 caractères maximum :',
'"'+document+'"\n Prolongez le texte sur 1000 caractères maximum :',
'"'+document+'"\n Rédiger la suite du texte : ',
'"'+document+'"\n Rédige la suite du texte : ',
'"'+document+'"\n Rédigez la suite du texte : ',
'"'+document+'"\n Imaginer la suite du texte : ',
'"'+document+'"\n Imagine la suite du texte : ',
'"'+document+'"\n Imaginez la suite du texte : ',
'"'+document+'"\n Ecrire la suite du texte : ',
'"'+document+'"\n Ecris la suite du texte : ',
'"'+document+'"\n Ecrivez la suite du texte : ',
'"'+document+'"\n Développer la suite du texte : ',
'"'+document+'"\n Développe la suite du texte : ',
'"'+document+'"\n Développez la suite du texte : ',
'"'+document+'"\n Générer la suite du texte : ',
'"'+document+'"\n Génère la suite du texte : ',
'"'+document+'"\n Générez la suite du texte : ',
```
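As a cross-check, the 24 prompts above follow a regular pattern: eight verbs, each declined in infinitive, tu and vous form, attached to a quoted document. A minimal sketch of that pattern (verb spellings normalized; `document` here is a hypothetical excerpt, not a real dataset row):

```python
# Sketch: rebuild the 24 prompt variants from their verb triples.
document = "Exemple de début d'article."

# Verbs used with "le texte sur 1000 caractères maximum :"
continuer = [("Continuer", "Continue", "Continuez"),
             ("Poursuivre", "Poursuis", "Poursuivez"),
             ("Prolonger", "Prolonge", "Prolongez")]
# Verbs used with "la suite du texte :"
suite = [("Rédiger", "Rédige", "Rédigez"),
         ("Imaginer", "Imagine", "Imaginez"),
         ("Ecrire", "Ecris", "Ecrivez"),
         ("Développer", "Développe", "Développez"),
         ("Générer", "Génère", "Générez")]

prompts = [f'"{document}"\n {v} le texte sur 1000 caractères maximum :'
           for triple in continuer for v in triple]
prompts += [f'"{document}"\n {v} la suite du texte : '
            for triple in suite for v in triple]

assert len(prompts) == 24
```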
### Features used in the prompts
In the prompt list above, `text` and `targets` have been constructed from:
```
orange_sum = load_dataset('orange_sum', 'abstract')
# Keep only articles longer than 1,000 characters: the first 1,000
# characters form the prompted input, the remainder is the target.
if len(orange_sum['train'][i]['text']) > 1000:
    document = orange_sum['train'][i]['text'][:1000]
    targets = orange_sum['train'][i]['text'][1000:]
```
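A toy illustration of the 1,000-character cut that caps the inputs, on a hypothetical string rather than a real orange_sum row:

```python
# Stand-in for an article longer than 1,000 characters.
article = "x" * 1200

document, continuation = article[:1000], article[1000:]

assert len(document) == 1000        # prompted input is capped at 1,000 chars
assert len(continuation) == 200     # the remainder is left to generate
assert document + continuation == article  # nothing is lost in the split
```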
# Splits
- `train` with 472,944 samples
- `valid` with 33,096 samples
- `test` with 33,360 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/orange_sum_fr_prompt_text_generation_from_an_article")
```
# Citation
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0
|
CATIE-AQ/orange_sum_fr_prompt_text_generation_from_an_article
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orange_sum",
"language:fr",
"license:cc-by-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:47:17+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["orange_sum"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:24:32+00:00
|
[] |
[
"fr"
] |
# orange_sum_fr_prompt_text_generation_from_title_of_an_article
## Summary
**orange_sum_fr_prompt_text_generation_from_title_of_an_article** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **908,793** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [orange_sum](https://huggingface.co/datasets/orange_sum) by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
27 prompts were created for this dataset. Each instruction is declined in three forms: the impersonal infinitive, the informal tu form (tutoiement) and the formal vous form (vouvoiement).
```
'Rédiger un texte dont le titre est : "'+title+'".',
'Rédige un texte dont le titre est : "'+title+'".',
'Rédigez un texte dont le titre est : "'+title+'".',
'Rédiger un article dont le titre est : "'+title+'".',
'Rédige un article dont le titre est : "'+title+'".',
'Rédigez un article dont le titre est : "'+title+'".',
'Rédiger un document dont le titre est : "'+title+'".',
'Rédige un document dont le titre est : "'+title+'".',
'Rédigez un document dont le titre est : "'+title+'".',
'Générer un texte dont le titre est : "'+title+'".\nTexte : ',
'Génère un texte dont le titre est : "'+title+'".\nTexte : ',
'Générez un texte dont le titre est : "'+title+'".\nTexte : ',
'Générer un article dont le titre est : "'+title+'".\nArticle : ',
'Génère un article dont le titre est : "'+title+'".\nArticle : ',
'Générez un article dont le titre est : "'+title+'".\nArticle : ',
'Générer un document dont le titre est : "'+title+'".\nDocument : ',
'Génère un document dont le titre est : "'+title+'".\nDocument : ',
'Générez un document dont le titre est : "'+title+'".\nDocument : ',
'"'+title +'"\n Ecrire un texte de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecris un texte de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecrivez un texte de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecrire un article de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecris un article de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecrivez un article de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecrire un document de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecris un document de 1 à 5 phrases sur le titre précédent : ',
'"'+title +'"\n Ecrivez un document de 1 à 5 phrases sur le titre précédent : '
```
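The 27 prompts above are the cross-product of three verb families, three grammatical forms and three nouns (texte, article, document). A sketch regenerating them under that assumption, with a hypothetical `title`:

```python
title = "Un titre d'exemple"  # hypothetical headline, not a real dataset row
objets = ["texte", "article", "document"]

prompts = []
for obj in objets:
    # "Rédiger" family: bare instruction.
    for v in ("Rédiger", "Rédige", "Rédigez"):
        prompts.append(f'{v} un {obj} dont le titre est : "{title}".')
    # "Générer" family: instruction followed by a labelled answer slot.
    for v in ("Générer", "Génère", "Générez"):
        prompts.append(f'{v} un {obj} dont le titre est : "{title}".\n{obj.capitalize()} : ')
    # "Ecrire" family: title quoted first, then the instruction.
    for v in ("Ecrire", "Ecris", "Ecrivez"):
        prompts.append(f'"{title}"\n {v} un {obj} de 1 à 5 phrases sur le titre précédent : ')

assert len(prompts) == 27
```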
### Features used in the prompts
In the prompt list above, `title` and `targets` have been constructed from:
```
orange_sum = load_dataset('orange_sum', 'title')
# In the 'title' configuration, the 'summary' column holds the headline.
title = orange_sum['train'][i]['summary']
targets = orange_sum['train'][i]['text']
```
# Splits
- `train` with 827,793 samples
- `valid` with 40,500 samples
- `test` with 40,500 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/orange_sum_fr_prompt_text_generation_from_title_of_an_article")
```
# Citation
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0
|
CATIE-AQ/orange_sum_fr_prompt_text_generation_from_title_of_an_article
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orange_sum",
"language:fr",
"license:cc-by-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:48:36+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["orange_sum"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:27:00+00:00
|
[] |
[
"fr"
] |
# orange_sum_fr_prompt_title_generation_from_an_article
## Summary
**orange_sum_fr_prompt_title_generation_from_an_article** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **639,521** rows that can be used for a title generation task.
The original data (without prompts) comes from the dataset [orange_sum](https://huggingface.co/datasets/orange_sum) by Eddine et al.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
19 prompts were created for this dataset. The instructions are declined in three forms (impersonal infinitive, informal tu form, formal vous form), plus one neutral prompt ending in 'Titre :'.
```
'"'+document+'"\n Générer un titre pour cet article :',
'"'+document+'"\n Génère un titre pour cet article :',
'"'+document+'"\n Générez un titre pour cet article :',
'"'+document+'"\n Rédiger un titre pour cet article :',
'"'+document+'"\n Rédige un titre pour cet article :',
'"'+document+'"\n Rédigez un titre pour cet article :',
'"'+document+'"\n Ecrire un titre pour cet article :',
'"'+document+'"\n Ecris un titre pour cet article :',
'"'+document+'"\n Ecrivez un titre pour cet article :',
"Générer un titre pour l'article suivant : "+document,
"Génère un titre pour l'article suivant : "+document,
"Générez un titre pour l'article suivant : "+document,
"Rédiger un titre pour l'article suivant : "+document,
"Rédige un titre pour l'article suivant : "+document,
"Rédigez un titre pour l'article suivant : "+document,
"Ecrire un titre pour l'article suivant : "+document,
"Ecris un titre pour l'article suivant : "+document,
"Ecrivez un titre pour l'article suivant : "+document,
'"'+document+'"\n Titre :\n '
```
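The 19 prompts above are nine verb forms used in two positions (document quoted first, or instruction first), plus one neutral 'Titre :' prompt. A sketch regenerating them with a hypothetical `document`:

```python
document = "Le corps d'un article d'exemple."  # hypothetical, not a real row
verbs = ("Générer", "Génère", "Générez",
         "Rédiger", "Rédige", "Rédigez",
         "Ecrire", "Ecris", "Ecrivez")

# Document quoted first, instruction after.
prompts = [f'"{document}"\n {v} un titre pour cet article :' for v in verbs]
# Instruction first, document appended.
prompts += [f"{v} un titre pour l'article suivant : {document}" for v in verbs]
# One neutral prompt with no verb.
prompts.append(f'"{document}"\n Titre :\n ')

assert len(prompts) == 19
```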
### Features used in the prompts
In the prompt list above, `document` and `targets` have been constructed from:
```
orange_sum = load_dataset('orange_sum', 'title')
# In the 'title' configuration, the 'summary' column holds the headline,
# which is the generation target here.
document = orange_sum['train'][i]['text']
targets = orange_sum['train'][i]['summary']
```
# Splits
- `train` with 582,521 samples
- `valid` with 28,500 samples
- `test` with 28,500 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/orange_sum_fr_prompt_title_generation_from_an_article")
```
# Citation
## Original data
> @article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC-BY-SA-4.0
|
CATIE-AQ/orange_sum_fr_prompt_title_generation_from_an_article
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:orange_sum",
"language:fr",
"license:cc-by-sa-4.0",
"title generation",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:50:06+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["orange_sum"], "task_categories": ["text-generation"], "tags": ["title generation", "DFP", "french prompts"]}
|
2023-10-11T11:27:17+00:00
|
[] |
[
"fr"
] |
aa85b2d3cd6f9f64fa21fd4ecfa3194ac32b7376
|
# amazon_reviews_multi_fr_prompt_binary_text_generation_from_title_of_a_review
## Summary
**amazon_reviews_multi_fr_prompt_binary_text_generation_from_title_of_a_review** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **7,560,000** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) by Keung et al. where only the French split has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
36 prompts were created for this dataset. The logic applied consists of proposing each prompt in the indicative mood, in the informal tutoiement form, and in the formal vouvoiement form.
```
# négatifs
'Rédiger un commentaire négatif dont le titre est : "'+title+'"".',
'Rédige un commentaire négatif dont le titre est : "'+title+'"".',
'Rédigez un commentaire négatif dont le titre est : "'+title+'"".',
'Rédiger un avis négatif dont le titre est : "'+title+'"".',
'Rédige un avis négatif dont le titre est : "'+title+'"".',
'Rédigez un avis négatif dont le titre est : "'+title+'"".',
'Rédiger une critique négative dont le titre est : "'+title+'"".',
'Rédige une critique négative dont le titre est : "'+title+'"".',
'Rédigez une critique négative dont le titre est : "'+title+'"".',
'Rédiger une évaluation négative dont le titre est : "'+title+'"".',
'Rédige une évaluation négative dont le titre est : "'+title+'"".',
'Rédigez une évaluation négative dont le titre est : "'+title+'"".',
"""Générer un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Génère un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générez un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générer un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Génère un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générez un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générer une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Génère une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générez une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générer une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Génère une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Générez une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
'Titre : "'+title +'"\n Ecrire un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
# positifs
'Rédiger un commentaire positif dont le titre est : '+title+'.',
'Rédige un commentaire positif dont le titre est : '+title+'.',
'Rédigez un commentaire positif dont le titre est : '+title+'.',
'Rédiger un avis positif dont le titre est : '+title+'.',
'Rédige un avis positif dont le titre est : '+title+'.',
'Rédigez un avis positif dont le titre est : '+title+'.',
'Rédiger une critique positive dont le titre est : '+title+'.',
'Rédige une critique positive dont le titre est : '+title+'.',
'Rédigez une critique positive dont le titre est : '+title+'.',
'Rédiger une évaluation positive dont le titre est : '+title+'.',
'Rédige une évaluation positive dont le titre est : '+title+'.',
'Rédigez une évaluation positive dont le titre est : '+title+'.',
"""Générer un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Génère un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générez un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générer un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Génère un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générez un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générer une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Génère une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générez une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générer une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Génère une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Générez une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
'Titre : "'+title +'"\n Ecrire un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
```
### Features used in the prompts
In the prompt list above, `title` and `targets` have been constructed from:
```
arm = load_dataset('amazon_reviews_multi', 'fr')
title = arm['train']['review_title'][i]
targets = arm['train']['review_body'][i]
```
# Splits
- `train` with 7,200,000 samples
- `valid` with 180,000 samples
- `test` with 180,000 samples
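The split sizes follow from applying 36 prompts to every example of the French split of amazon_reviews_multi (200,000 train / 5,000 valid / 5,000 test). Note that the list above contains 36 negative and 36 positive templates; presumably each review receives the 36 variants matching its polarity:

```python
# Row counts = source split size × number of prompts applied per review (36).
n_prompts = 36
source_splits = {"train": 200_000, "valid": 5_000, "test": 5_000}
rows = {name: n * n_prompts for name, n in source_splits.items()}
assert rows == {"train": 7_200_000, "valid": 180_000, "test": 180_000}
assert sum(rows.values()) == 7_560_000  # total rows stated in the Summary
```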
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/amazon_reviews_multi_fr_prompt_binary_text_generation_from_title_of_a_review")
```
# Citation
## Original data
> @inproceedings{marc_reviews,
title={The Multilingual Amazon Reviews Corpus},
author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
Amazon has licensed this dataset under its own agreement, for non-commercial research usage only. This licence is quite restrictive: it prevents use anywhere a fee is received, including paid internships. A copy of the agreement can be found at the dataset webpage here: https://docs.opendata.aws/amazon-reviews-ml/license.txt
By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the Amazon.com Conditions of Use and you agree to be bound by them, with the following additional conditions:
In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.
|
CATIE-AQ/amazon_reviews_multi_fr_prompt_binary_text_generation_from_title_of_a_review
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:amazon_reviews_multi",
"language:fr",
"license:other",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:54:31+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["amazon_reviews_multi"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:24:05+00:00
|
[] |
[
"fr"
] |
4655c71964bb7d8f14fcf3115a2a2e1f198954d2
|
# amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review
## Summary
**amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **7,560,000** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) by Keung et al. where only the French split has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
36 prompts were created for this dataset. The logic applied consists of proposing each prompt in the indicative mood, in the informal tutoiement form, and in the formal vouvoiement form.
```
'Rédiger un commentaire dont le titre est : "'+title+'"',
'Rédige un commentaire dont le titre est : "'+title+'"',
'Rédigez un commentaire dont le titre est : "'+title+'"',
'Rédiger un avis dont le titre est : "'+title+'"',
'Rédige un avis dont le titre est : "'+title+'"',
'Rédigez un avis dont le titre est : "'+title+'"',
'Rédiger une critique dont le titre est : "'+title+'"',
'Rédige une critique dont le titre est : "'+title+'"',
'Rédigez une critique dont le titre est : "'+title+'"',
'Rédiger une évaluation dont le titre est : "'+title+'"',
'Rédige une évaluation dont le titre est : "'+title+'"',
'Rédigez une évaluation dont le titre est : "'+title+'"',
"""Générer un commentaire d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Génère un commentaire d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générez un commentaire d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générer un avis d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Génére un avis d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générez un avis d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générer une critique d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Génère une critique d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générez une critique d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générer une évaluation d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Génère une évaluation d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Générez une évaluation d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
'Titre : "'+title +'"\nEcrire un commentaire de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcris un commentaire de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrivez un commentaire de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrire un avis de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcris un avis de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrivez un avis de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrire une critique de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcris une critique de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrivez une critique de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrire une évaluation de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcris une évaluation de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\nEcrivez une évaluation de 1 à 5 phrases sur le titre précédent : ',
```
### Features used in the prompts
In the prompt list above, `title` and `targets` have been constructed from:
```
arm = load_dataset('amazon_reviews_multi', 'fr')
title = arm['train']['review_title'][i]
targets = arm['train']['review_body'][i]
```
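For illustration, the construction of the input/target columns from these pieces could be sketched as below. The template-assignment strategy and the exact column names (`inputs`, `targets`) are assumptions for this sketch, not confirmed by the card:

```
# Sketch: assemble (inputs, targets) rows from the prompt templates above.
# Only two of the 36 templates are reproduced here.
templates = [
    'Titre : "{title}"\nEcrire un avis de 1 à 5 phrases sur le titre précédent : ',
    'Titre : "{title}"\nEcris une critique de 1 à 5 phrases sur le titre précédent : ',
]

def build_rows(titles, bodies):
    rows = []
    for i, (title, body) in enumerate(zip(titles, bodies)):
        # Cycle through templates; the card does not specify the actual
        # assignment strategy, so round-robin is an assumption.
        prompt = templates[i % len(templates)].format(title=title)
        rows.append({"inputs": prompt, "targets": body})
    return rows

rows = build_rows(["Très bon produit"], ["Je recommande, livraison rapide."])
print(rows[0]["inputs"])
```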
# Splits
- `train` with 7,200,000 samples
- `valid` with 180,000 samples
- `test` with 180,000 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review")
```
# Citation
## Original data
> @inproceedings{marc_reviews,
title={The Multilingual Amazon Reviews Corpus},
author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
Amazon has licensed this dataset under its own agreement, for non-commercial research usage only. The license is quite restrictive, preventing use anywhere a fee is received, including paid internships. A copy of the agreement can be found at the dataset webpage here: https://docs.opendata.aws/amazon-reviews-ml/license.txt
By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the Amazon.com Conditions of Use and you agree to be bound by them, with the following additional conditions:
In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.
|
CATIE-AQ/amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:amazon_reviews_multi",
"language:fr",
"license:other",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T13:59:09+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["amazon_reviews_multi"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:24:14+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-amazon_reviews_multi #language-French #license-other #DFP #french prompts #region-us
|
# amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review
## Summary
amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP).
It contains 7,560,000 rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset amazon_reviews_multi by Keung et al. where only the French split has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
36 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative, in both the tutoiement (informal) and vouvoiement (formal) forms.
### Features used in the prompts
In the prompt list above, 'title' and 'targets' have been constructed from:
# Splits
- 'train' with 7,200,000 samples
- 'valid' with 180,000 samples
- 'test' with 180,000 samples
# How to use?
## Original data
> @inproceedings{marc_reviews,
title={The Multilingual Amazon Reviews Corpus},
author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
Amazon has licensed this dataset under its own agreement, for non-commercial research usage only. The license is quite restrictive, preventing use anywhere a fee is received, including paid internships. A copy of the agreement can be found at the dataset webpage here: URL
By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:
In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.
|
[
"# amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review",
"## Summary\n\namazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 7,560,000 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset amazon_reviews_multi by Keung et al. where only the French split has been kept. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:",
"# Splits\n- 'train' with 7,200,000 samples\n- 'valid' with 180,000 samples\n- 'test' with 180,000 samples",
"# How to use?",
"## Original data\n> @inproceedings{marc_reviews,\n title={The Multilingual Amazon Reviews Corpus},\n author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},\n year={2020}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nAmazon has licensed his dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive preventing use anywhere a fee is received including paid for internships etc. A copy of the agreement can be found at the dataset webpage here: URL\n\nBy accessing the Multilingual Amazon Reviews Corpus (\"Reviews Corpus\"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:\n\nIn addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have."
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-amazon_reviews_multi #language-French #license-other #DFP #french prompts #region-us \n",
"# amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review",
"## Summary\n\namazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 7,560,000 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset amazon_reviews_multi by Keung et al. where only the French split has been kept. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:",
"# Splits\n- 'train' with 7,200,000 samples\n- 'valid' with 180,000 samples\n- 'test' with 180,000 samples",
"# How to use?",
"## Original data\n> @inproceedings{marc_reviews,\n title={The Multilingual Amazon Reviews Corpus},\n author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},\n year={2020}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nAmazon has licensed his dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive preventing use anywhere a fee is received including paid for internships etc. A copy of the agreement can be found at the dataset webpage here: URL\n\nBy accessing the Multilingual Amazon Reviews Corpus (\"Reviews Corpus\"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:\n\nIn addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have."
] |
[
89,
29,
147,
5,
46,
30,
35,
5,
89,
106,
339
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-amazon_reviews_multi #language-French #license-other #DFP #french prompts #region-us \n# amazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review## Summary\n\namazon_reviews_multi_fr_prompt_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 7,560,000 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset amazon_reviews_multi by Keung et al. where only the French split has been kept. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:# Splits\n- 'train' with 7,200,000 samples\n- 'valid' with 180,000 samples\n- 'test' with 180,000 samples# How to use?## Original data\n> @inproceedings{marc_reviews,\n title={The Multilingual Amazon Reviews Corpus},\n author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing},\n year={2020}\n}"
] |
917b993563785711f651f1c9f0897c052aa7d275
|
# KOpenPlatypus: Korean Translation dataset about Open-Platypus
## Korean Translation Method
I used the [DeepL Pro API](https://www.deepl.com/ko/pro/change-plan?cta=header-pro#single) and Selenium.
Translation took about 140 hours.
+) If you use this dataset to build a model or another dataset, a brief attribution would be a great help to this research 😭😭
## Korean Translation post-processing





Post-processing was also applied; see the list below. (*Over 2,000 code-related examples were corrected by hand.)
1. Code and comments were kept as-is; only the explanatory text was rendered in Korean
2. In addition, outputs in Python, Java, Cpp, xml, etc. were preserved in their original data form as far as possible
3. Standalone numbers and English text were kept exactly as in the original
4. Incomplete DeepL Pro translations (e.g., those containing '[...]') were fixed manually
5. If a DeepL Pro translation was shorter than 50% of the original text, the translation was corrected
6. Texts longer than 1,500 characters were translated via the API instead
7. `Proper nouns` were preserved as far as possible
- More than 95% of translation errors are believed to have been fixed.
- The translation work took about 144h in total. (72h/72h; Translation/Post-processing)
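As an illustrative sketch (not the actual pipeline code), the incomplete-translation and length-ratio rules above amount to a simple flagging function:

```python
def needs_manual_fix(original: str, translated: str) -> bool:
    """Flag a DeepL translation for manual correction (sketch of the rules above)."""
    # DeepL sometimes emits a truncation marker for unfinished output.
    if "[...]" in translated:
        return True
    # A translation shorter than 50% of the original text is suspicious.
    if len(translated) < 0.5 * len(original):
        return True
    return False

print(needs_manual_fix("a" * 100, "b" * 30))  # too short -> True
```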
## Introduction
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
| Dataset Name | License Type |
|--------------------------------------------------------------|--------------|
| [PRM800K](https://github.com/openai/prm800k) | MIT |
| [ScienceQA](https://github.com/lupantech/ScienceQA) | [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) |
| [SciBench](https://github.com/mandyyyyii/scibench) | MIT |
| [ReClor](https://whyu.me/reclor/) | Non-commercial |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | MIT |
| [`nuprl/leetcode-solutions-python-testgen-gpt4`](https://huggingface.co/datasets/nuprl/leetcode-solutions-python-testgen-gpt4/viewer/nuprl--leetcode-solutions-python-testgen-gpt4/train?p=1) | None listed |
| [`jondurbin/airoboros-gpt4-1.4.1`](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) | other |
| [`TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k`](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k/viewer/TigerResearch--tigerbot-kaggle-leetcodesolutions-en-2k/train?p=2) | apache-2.0 |
| [openbookQA](https://huggingface.co/datasets/openbookqa/viewer/additional/train?row=35) | apache-2.0 |
| [ARB](https://arb.duckai.org) | MIT |
| [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) | apache-2.0 |
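The 80% similarity filter can be sketched as follows. The real pipeline embeds questions with Sentence Transformers; this self-contained toy substitutes bag-of-words vectors to show the same thresholding logic:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dedup(questions, threshold=0.8):
    """Keep a question only if it is <80% similar to every question kept so far.

    Toy bag-of-words vectors stand in for the Sentence Transformers
    embeddings used in the actual filtering pipeline."""
    kept, vecs = [], []
    for q in questions:
        v = Counter(q.lower().split())
        if all(cosine(v, kv) < threshold for kv in vecs):
            kept.append(q)
            vecs.append(v)
    return kept

qs = [
    "What is the derivative of x squared",
    "What is the derivative of x squared ?",  # near-duplicate, dropped
    "Explain binary search trees",
]
print(dedup(qs))
```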
## Data Contamination Check
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
## Model Info
Please see models at [`garage-bAInd`](https://huggingface.co/garage-bAInd).
## Training and filtering code
Please see the [Platypus GitHub repo](https://github.com/arielnlee/Platypus).
## Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
    journal={arXiv preprint arXiv:2308.07317},
year={2023}
}
```
```bibtex
@article{lightman2023lets,
title={Let's Verify Step by Step},
author={Lightman, Hunter and Kosaraju, Vineet and Burda, Yura and Edwards, Harri and Baker, Bowen and Lee, Teddy and Leike, Jan and Schulman, John and Sutskever, Ilya and Cobbe, Karl},
  journal={arXiv preprint arXiv:2305.20050},
year={2023}
}
```
```bibtex
@inproceedings{lu2022learn,
title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
    author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{wang2023scibench,
title={SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models},
author={Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang},
year={2023},
      eprint={2307.10635},
      archivePrefix={arXiv}
}
```
```bibtex
@inproceedings{yu2020reclor,
author = {Yu, Weihao and Jiang, Zihang and Dong, Yanfei and Feng, Jiashi},
title = {ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning},
booktitle = {International Conference on Learning Representations (ICLR)},
month = {April},
year = {2020}
}
```
```bibtex
@article{chen2023theoremqa,
title={TheoremQA: A Theorem-driven Question Answering dataset},
  author={Chen, Wenhu and Yin, Ming and Ku, Max and Wan, Elaine and Ma, Xueguang and Xu, Jianyu and Xia, Tony and Wang, Xinyi and Lu, Pan},
  journal={arXiv preprint arXiv:2305.12524},
year={2023}
}
```
```bibtex
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
```bibtex
@misc{sawada2023arb,
title={ARB: Advanced Reasoning Benchmark for Large Language Models},
author={Tomohiro Sawada and Daniel Paleka and Alexander Havrilla and Pranav Tadepalli and Paula Vidas and Alexander Kranias and John J. Nay and Kshitij Gupta and Aran Komatsuzaki},
      eprint={2307.13692},
      archivePrefix={arXiv},
year={2023}
}
```
|
kyujinpy/KOpen-platypus
|
[
"size_categories:10K<n<100K",
"language:en",
"language:ko",
"license:cc-by-4.0",
"arxiv:2308.07317",
"region:us"
] |
2023-08-21T13:59:26+00:00
|
{"language": ["en", "ko"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "data_source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34213211, "num_examples": 24926}], "download_size": 16662523, "dataset_size": 34213211}}
|
2023-11-01T20:18:07+00:00
|
[
"2308.07317"
] |
[
"en",
"ko"
] |
TAGS
#size_categories-10K<n<100K #language-English #language-Korean #license-cc-by-4.0 #arxiv-2308.07317 #region-us
|
KOpenPlatypus: Korean Translation dataset about Open-Platypus
=============================================================
Korean Translation Method
-------------------------
I use DeepL-pro-API and selenium.
It takes about 140h times.
+) 데이터셋 이용하셔서 모델이나 데이터셋을 만드실 때, 간단한 출처 표기를 해주신다면 연구에 큰 도움이 됩니다
Korean Translation post-processing
----------------------------------
!image
!image
!image
!image
!image
And also, applying post-processing. See below lists. (\*약 2000개 이상의 코드 관련 데이터를 수작업으로 수정함)
1. 코드와 주석은 그대로 유지하고, 설명 부분만 한국어로 수정
2. 1번과 더불어서, Python, Java, Cpp, xml 등등 결과들은 전부 기존의 데이터 형태로 최대한 보존
3. 단일 숫자와 영어는 본래의 결과 그대로 가져옴
4. DeepL Pro 번역 결과 중 미완성 변역 결과 직접 수정(예를 들면, '[...]'가 포함되어 있음)
5. DeepL Pro 번역 결과가 본래의 데이터에 비해 글자수가 50% 이하로 낮으면, 번역 결과 수정
6. 번역하고자 하는 글자수가 1500자 이상일 경우, API로 변경해서 번역
7. '고유명사'는 최대한 유지함
* 95% 이상의 번역 오류는 전부 고친 것으로 생각됨.
* 약 144h 정도 번역 작업을 진행함. (72h/72h; Translation/Post-processing)
Introduction
------------
This dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It is comprised of the following datasets, which were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%:
Data Contamination Check
------------------------
We've removed approximately 200 questions that appear in the Hugging Face benchmark test sets. Please see our paper and project webpage for additional information.
Model Info
----------
Please see models at 'garage-bAInd'.
Training and filtering code
---------------------------
Please see the Platypus GitHub repo.
|
[] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #language-Korean #license-cc-by-4.0 #arxiv-2308.07317 #region-us \n"
] |
[
45
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #language-Korean #license-cc-by-4.0 #arxiv-2308.07317 #region-us \n"
] |
3da950d1be00414cad82ec3fbd5e25cdce20b2a8
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/AA_ApplicationDistilRoBERTa_2
|
[
"region:us"
] |
2023-08-21T14:04:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, 
{"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, {"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": 
"106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": 
"160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": "190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": 
"214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": 
"268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": "286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": 
"322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": 
"376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": "382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": 
"430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": "478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": 
"484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": 
"538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": "574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": 
"592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": 
"646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": "670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": 
"700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": 
"754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": "766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80318780.21618997, "num_examples": 26057}, {"name": "test", "num_bytes": 26774087.073587257, "num_examples": 8686}], "download_size": 147219122, "dataset_size": 107092867.28977722}}
|
2023-08-21T15:56:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_2"
More Information needed
|
[
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_2\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AA_ApplicationDistilRoBERTa_2\"\n\nMore Information needed"
] |
4c4e3f8bfdc37a8e58bf324f5988c22517eea317
|
# french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review
## Summary
**french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **347,688** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [french_book_reviews](https://huggingface.co/datasets/Abirate/french_book_reviews).
A list of prompts (see below) was then applied to build the input and target columns, giving the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
36 prompts were created for this dataset. Each template is proposed in three forms: the infinitive, the informal tutoiement ("tu") form, and the formal vouvoiement ("vous") form.
```
# négatifs
'Rédiger un commentaire négatif dont le titre est : "'+title+'"".',
'Rédige un commentaire négatif dont le titre est : "'+title+'"".',
'Rédigez un commentaire négatif dont le titre est : "'+title+'"".',
'Rédiger un avis négatif dont le titre est : "'+title+'"".',
'Rédige un avis négatif dont le titre est : "'+title+'"".',
'Rédigez un avis négatif dont le titre est : "'+title+'"".',
'Rédiger une critique négative dont le titre est : "'+title+'"".',
'Rédige une critique négative dont le titre est : "'+title+'"".',
'Rédigez une critique négative dont le titre est : "'+title+'"".',
'Rédiger une évaluation négative dont le titre est : "'+title+'"".',
'Rédige une évaluation négative dont le titre est : "'+title+'"".',
'Rédigez une évaluation négative dont le titre est : "'+title+'"".',
"""Générer un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Génère un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générez un commentaire négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générer un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Génère un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générez un avis négatif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générer une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Génère une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générez une critique négative d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générer une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Génère une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Générez une évaluation négative d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
'Titre : "'+title +'"\n Ecrire un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un commentaire négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un avis négatif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une critique négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une évaluation négative de 1 à 5 phrases sur le titre précédent : ',
# positifs
'Rédiger un commentaire positif dont le titre est : '+title+'.',
'Rédige un commentaire positif dont le titre est : '+title+'.',
'Rédigez un commentaire positif dont le titre est : '+title+'.',
'Rédiger un avis positif dont le titre est : '+title+'.',
'Rédige un avis positif dont le titre est : '+title+'.',
'Rédigez un avis positif dont le titre est : '+title+'.',
'Rédiger une critique positive dont le titre est : '+title+'.',
'Rédige une critique positive dont le titre est : '+title+'.',
'Rédigez une critique positive dont le titre est : '+title+'.',
'Rédiger une évaluation positive dont le titre est : '+title+'.',
'Rédige une évaluation positive dont le titre est : '+title+'.',
'Rédigez une évaluation positive dont le titre est : '+title+'.',
"""Générer un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Génère un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générez un commentaire positif d'un produit imaginaire dont le titre est : " """+title+""" "\nLe commentaire : """,
"""Générer un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Génère un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générez un avis positif d'un produit imaginaire dont le titre est : " """+title+""" "\nL'avis : """,
"""Générer une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Génère une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générez une critique positive d'un produit imaginaire dont le titre est : " """+title+""" "\nLa critique : """,
"""Générer une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Génère une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
"""Générez une évaluation positive d'un produit imaginaire dont le titre est : " """+title+""" "\nL'évaluation : """,
'Titre : "'+title +'"\n Ecrire un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un commentaire positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez un avis positif de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une critique positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrire une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecris une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
'Titre : "'+title +'"\n Ecrivez une évaluation positive de 1 à 5 phrases sur le titre précédent : ',
```
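To illustrate the pattern, the three politeness variants of one template can be generated programmatically. This is only a sketch (the list above was written out by hand), and the sample `title` is hypothetical:

```python
# Sketch: build the infinitive / "tu" / "vous" variants of one template.
verbs = ["Rédiger", "Rédige", "Rédigez"]
title = "Mon titre"  # hypothetical sample title
prompts = [
    v + ' un commentaire négatif dont le titre est : "' + title + '"".'
    for v in verbs
]
for p in prompts:
    print(p)
```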
### Features used in the prompts
In the prompt list above, `title` and `targets` have been constructed from:
```
from datasets import load_dataset

# Load the source dataset; for each row i, the book title feeds the
# prompt and the reader review becomes the generation target.
fbr = load_dataset('Abirate/french_book_reviews')
title = fbr['train']['book_title'][i]
targets = fbr['train']['reader_review'][i]
```
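As a sketch of how an (input, target) row might then be assembled — the random prompt choice and the `inputs`/`targets` column names are assumptions for illustration, not necessarily the exact DFP build script:

```python
import random

def build_row(title: str, review: str) -> dict:
    # One of the (here abbreviated) negative-review templates is picked
    # at random and paired with the reader review as the target.
    templates = [
        'Rédiger un commentaire négatif dont le titre est : "' + title + '"".',
        'Rédige un commentaire négatif dont le titre est : "' + title + '"".',
        'Rédigez un commentaire négatif dont le titre est : "' + title + '"".',
    ]
    return {"inputs": random.choice(templates), "targets": review}

row = build_row("Un livre décevant", "Je n'ai pas aimé ce roman.")
```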
# Splits
- `train` with 347,220 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review")
```
# Citation
## Original data
> @misc {abir_eltaief_2023,
> 	author = { {Abir ELTAIEF} },
> 	title = { french_book_reviews (Revision 534725e) },
> 	year = 2023,
> 	url = { https://huggingface.co/datasets/Abirate/french_book_reviews },
> 	doi = { 10.57967/hf/1052 },
> 	publisher = { Hugging Face }}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
> 	author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
> 	title = { DFP (Revision 1d24c09) },
> 	year = 2023,
> 	url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
> 	doi = { 10.57967/hf/1200 },
> 	publisher = { Hugging Face }
> }
## License
CC0: Public Domain
|
CATIE-AQ/french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:french_book_reviews",
"language:fr",
"license:cc",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T14:04:43+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["french_book_reviews"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:21:01+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-french_book_reviews #language-French #license-cc #DFP #french prompts #region-us
|
# french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review
## Summary
french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP).
It contains 347,688 rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset french_book_reviews.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
### Features used in the prompts
In the prompt list above, 'title' and 'targets' have been constructed from:
# Splits
- 'train' with 347,220 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @misc {abir_eltaief_2023,
author = { {Abir ELTAIEF} },
title = { french_book_reviews (Revision 534725e) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1052 },
publisher = { Hugging Face }}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC0: Public Domain
|
[
"# french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review",
"## Summary\n\nfrench_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 347,688 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset french_book_reviews. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:",
"# Splits\n- 'train' with 347,220 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @misc {abir_eltaief_2023, \n\tauthor = { {Abir ELTAIEF} }, \n\ttitle = { french_book_reviews (Revision 534725e) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1052 }, \n\tpublisher = { Hugging Face }}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC0: Public Domain"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-french_book_reviews #language-French #license-cc #DFP #french prompts #region-us \n",
"# french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review",
"## Summary\n\nfrench_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 347,688 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset french_book_reviews. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:",
"# Splits\n- 'train' with 347,220 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @misc {abir_eltaief_2023, \n\tauthor = { {Abir ELTAIEF} }, \n\ttitle = { french_book_reviews (Revision 534725e) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1052 }, \n\tpublisher = { Hugging Face }}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC0: Public Domain"
] |
[
90,
32,
138,
5,
46,
30,
28,
5,
83,
106,
7
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-french_book_reviews #language-French #license-cc #DFP #french prompts #region-us \n# french_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review## Summary\n\nfrench_book_reviews_fr_prompt_binary_text_generation_from_title_of_a_review is a subset of the Dataset of French Prompts (DFP). \nIt contains 347,688 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset french_book_reviews. \nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n36 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.### Features used in the prompts\nIn the prompt list above, 'title' and 'targets' have been constructed from:# Splits\n- 'train' with 347,220 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> @misc {abir_eltaief_2023, \n\tauthor = { {Abir ELTAIEF} }, \n\ttitle = { french_book_reviews (Revision 534725e) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1052 }, \n\tpublisher = { Hugging Face }}"
] |
2b2f3647ca0cd128b4d08004a4ed40df46d07a74
|
# fquad_fr_prompt_qa
## Summary
**fquad_fr_prompt_qa** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **2,009,196** rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. Each instruction is phrased in three registers: the impersonal infinitive, the informal tutoiement (tu form) and the formal vouvoiement (vous form).
```
# SQUAD 1.0 format
'Question : "'+question+'"\nContexte : "'+context+'" Réponse :',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Pouvez-vous me la dire ?',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Peux-tu me la dire ?',
'Extraire la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrais la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrayez la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondre à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Réponds à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondez à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Peux-tu l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Pouvez-vous l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Qu'elle est-elle ?""",
# SQUAD 2.0 format
'"'+question+'"\n Répondre à la question ci-dessus en se basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Réponds à la question ci-dessus en te basant sur le contexte suivant : "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Répondez à la question ci-dessus en vous basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utiliser le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utilise le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Utilisez le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le texte suivant et extraire la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le texte suivant et extrais la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le texte suivant et extrayez la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, réponds correctement à la question suivante : \n\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondez répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondre correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, réponds correctement à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondez correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extraire du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrais du passage la réponse à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrayez du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondre à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, réponds à la question qui suit : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondez à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondre à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, réponds à la question suivante : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondez à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Se référer au passage ci-dessous et répondre à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Référe-toi au passage ci-dessous et réponds à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Référez-vous au passage ci-dessous et répondez à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le passage suivant et répondre à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le passage suivant et réponds à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
```
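As a rough, hypothetical illustration of how such templates can be applied to recreate the dataset in the xP3 `inputs`/`targets` format (the field names `question`, `context` and `answer` are assumptions, not part of this repository):

```python
# Hedged sketch: apply two of the SQuAD 1.0 templates above to a
# FQuAD-style row to obtain xP3-style (inputs, targets) pairs.
# Field names "question", "context" and "answer" are assumptions.

def build_examples(rows):
    templates = [
        lambda q, c: 'Question : "' + q + '"\nContexte : "' + c + '" Réponse :',
        lambda q, c: ('Extraire la réponse à la question à partir du contexte'
                      ' suivant.\n Question : "' + q + '" Contexte : "' + c + '"'),
    ]
    # One example per (row, template) combination, as in the prompted dataset.
    return [
        {"inputs": tpl(row["question"], row["context"]), "targets": row["answer"]}
        for row in rows
        for tpl in templates
    ]

rows = [{
    "question": "Où est né Victor Hugo ?",
    "context": "Victor Hugo est né à Besançon en 1802.",
    "answer": "Besançon",
}]
pairs = build_examples(rows)
print(len(pairs))  # 2: one example per (row, template) pair
```

Applying all 42 templates to every FQuAD row in this way yields the row counts given above.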
# Splits
- `train` with 1,741,404 samples
- `valid` with 267,792 samples
- no test split
# How to use?
This repository doesn't contain any data.
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_qa
|
[
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:07:25+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["fquad"], "task_categories": ["question-answering"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:20:06+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_qa
## Summary
fquad_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP).
It contains 2,009,196 rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. Each instruction is phrased in three registers: the impersonal infinitive, the informal tutoiement (tu form) and the formal vouvoiement (vous form).
# Splits
- 'train' with 1,741,404 samples
- 'valid' with 267,792 samples
- no test split
# How to use?
This repository doesn't contain any data.
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_qa",
"## Summary\n\nfquad_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 2,009,196 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset. \nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,741,404 samples\n- 'valid' with 267,792 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_qa",
"## Summary\n\nfquad_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 2,009,196 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset. \nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,741,404 samples\n- 'valid' with 267,792 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
102,
11,
153,
5,
46,
30,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_qa## Summary\n\nfquad_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 2,009,196 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset. \nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 1,741,404 samples\n- 'valid' with 267,792 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
dbdafc57e3e6a868ac31ecba36c3b7c5d9ac1599
|
# fquad_fr_prompt_context_generation_with_answer
## Summary
**fquad_fr_prompt_context_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **574,056** rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is phrased in three registers: the impersonal infinitive, the informal tutoiement (tu form) and the formal vouvoiement (vous form).
```
'Étant donné la réponse "'+ answer+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
```
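As a rough, hypothetical illustration of how one answer-conditioned template above can be paired with the original passage as the generation target (the field names `answer` and `context` are assumptions, not part of this repository):

```python
# Hedged sketch: for context generation, the prompt is built from the
# answer and the target is the original passage.
# Field names "answer" and "context" are assumptions.

def build_context_generation_examples(rows):
    def template(answer):
        return ('Étant donné la réponse "' + answer
                + '", écrire un texte explicatif.\nTexte : ')
    return [
        {"inputs": template(row["answer"]), "targets": row["context"]}
        for row in rows
    ]

rows = [{
    "answer": "Besançon",
    "context": "Victor Hugo est né à Besançon en 1802.",
}]
examples = build_context_generation_examples(rows)
print(examples[0]["inputs"])
```

Repeating this for all 24 templates and every FQuAD row yields the row counts given above.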
# Splits
- `train` with 497,544 samples
- `valid` with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_context_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:14:33+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["fquad"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:17:17+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_context_generation_with_answer
## Summary
fquad_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 574,056 rows that can be used for a text generation task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is phrased in three registers: the impersonal infinitive, the informal tutoiement (tu form) and the formal vouvoiement (vous form).
# Splits
- 'train' with 497,544 samples
- 'valid' with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_context_generation_with_answer",
"## Summary\n\nfquad_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_context_generation_with_answer",
"## Summary\n\nfquad_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
101,
20,
160,
5,
46,
30,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_context_generation_with_answer## Summary\n\nfquad_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a text generation task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
a6bbe74c377e73d6842a432e376d300f98fec307
|
# fquad_fr_prompt_context_generation_with_answer_and_question
## Summary
**fquad_fr_prompt_context_generation_with_answer_and_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **574,056** rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is phrased in three registers: the impersonal infinitive, the informal tutoiement (tu form) and the formal vouvoiement (vous form).
```
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : '
```
# Splits
- `train` with 497,544 samples
- `valid` with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
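As an illustration, here is a minimal sketch of how a user could recreate one prompted row from FQuAD-style fields, in the xP3 style where the prompt is the model input and the original context is the target. The field names and sample values below are illustrative, not the official FQuAD schema.

```python
import random

# Two of the 24 prompt templates listed above, as Python callables.
# Each takes the answer and question and returns the model input string.
TEMPLATES = [
    lambda answer, question: (
        'Étant donné la réponse "' + answer + '" à la question "'
        + question + '", écrire un texte explicatif.\nTexte : '
    ),
    lambda answer, question: (
        'Ecrire un texte comme contexte de la réponse "' + answer
        + '" à la question "' + question + '" \nTexte : '
    ),
    # ... the remaining 22 templates follow the same pattern
]

def build_row(answer, question, context, template=None):
    """Build one (inputs, targets) pair: prompt in, context out."""
    if template is None:
        template = random.choice(TEMPLATES)
    return {"inputs": template(answer, question), "targets": context}

row = build_row(
    answer="en 1515",
    question="Quand a eu lieu la bataille de Marignan ?",
    context="La bataille de Marignan a eu lieu en 1515.",
    template=TEMPLATES[0],
)
print(row["inputs"])
```

Applying every template to every FQuAD row in this way yields the 24-fold augmented dataset described above.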
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_context_generation_with_answer_and_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:17:21+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["fquad"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:17:25+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_context_generation_with_answer_and_question
## Summary
fquad_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP).
It contains 574,056 rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the informal second person (tutoiement), and in the formal second person (vouvoiement).
# Splits
- 'train' with 497,544 samples
- 'valid' with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nfquad_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nfquad_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
101,
25,
173,
5,
46,
30,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_context_generation_with_answer_and_question## Summary\n\nfquad_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 574,056 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
1266fa135fffdf14e7857df3312bf07941f2b17d
|
# Dataset Card for "korfin-asc-hub"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amphora/korfin-asc-test
|
[
"region:us"
] |
2023-08-21T14:18:22+00:00
|
{"dataset_info": {"features": [{"name": "SID", "dtype": "string"}, {"name": "TYPE", "dtype": "string"}, {"name": "SRC", "dtype": "string"}, {"name": "ASPECT", "dtype": "string"}, {"name": "SENTIMENT", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1012762, "num_examples": 3795}], "download_size": 379621, "dataset_size": 1012762}}
|
2023-08-21T14:18:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "korfin-asc-hub"
More Information needed
|
[
"# Dataset Card for \"korfin-asc-hub\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"korfin-asc-hub\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"korfin-asc-hub\"\n\nMore Information needed"
] |
b5f90f419b7489cdba26fdbc8c022fcb5562f968
|
Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
[Since the original data was shuffled before chunking](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B.
The dataset is 24 GB in storage when decompressed (the original dataset is over 2 TB) and has 5,489,000 rows.
The validation set and test set were sampled as well.
---
#### Data source proportions for SlimPajama-627B and SlimPajama-6B
For sanity purposes, I calculated the byte proportions of the sampled version.
| Data source | SlimPajama-627B | SlimPajama-6B |
| ------------- | ---------- | --------- |
| Commoncrawl | 52.2% | 54.1% |
| C4 | 26.7% | 28.7% |
| GitHub | 5.2% | 4.2% |
| Books | 4.2% | 3.7% |
| ArXiv | 4.6% | 3.4% |
| Wikipedia | 3.8% | 3.1% |
| StackExchange | 3.3% | 2.8% |
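The proportion check above can be reproduced in a few lines. This sketch assumes rows shaped like the dataset's records, with the text under `text` and the source under `meta.redpajama_set_name`; the sample values are made up for illustration.

```python
from collections import Counter

def byte_proportions(rows):
    """Return each source's share of total UTF-8 bytes, in percent."""
    byte_counts = Counter()
    for row in rows:
        source = row["meta"]["redpajama_set_name"]
        byte_counts[source] += len(row["text"].encode("utf-8"))
    total = sum(byte_counts.values())
    return {src: 100 * n / total for src, n in byte_counts.items()}

# Tiny illustrative sample (not real data):
sample = [
    {"text": "a" * 60, "meta": {"redpajama_set_name": "RedPajamaCommonCrawl"}},
    {"text": "b" * 30, "meta": {"redpajama_set_name": "RedPajamaC4"}},
    {"text": "c" * 10, "meta": {"redpajama_set_name": "RedPajamaGithub"}},
]
print(byte_proportions(sample))
# {'RedPajamaCommonCrawl': 60.0, 'RedPajamaC4': 30.0, 'RedPajamaGithub': 10.0}
```

Running the same function over the full sampled split yields the SlimPajama-6B column of the table.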
---
Please refer to the original dataset for other info.
```
@misc{cerebras2023slimpajama,
author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
month = June,
year = 2023,
howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
}
```
|
DKYoon/SlimPajama-6B
|
[
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] |
2023-08-21T14:25:52+00:00
|
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "SlimPajama-6B", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "redpajama_set_name", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23918118724, "num_examples": 5489000}, {"name": "validation", "num_bytes": 39109042, "num_examples": 9347}, {"name": "test", "num_bytes": 40114950, "num_examples": 9346}], "download_size": 14048972121, "dataset_size": 23997342716}}
|
2023-08-21T15:54:47+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us
|
Sampled version of cerebras/SlimPajama-627B.
Since the original data was shuffled before chunking, I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B.
The dataset is 24 GB in storage when decompressed (the original dataset is over 2 TB) and has 5,489,000 rows.
The validation set and test set were sampled as well.
---
#### Data source proportions for SlimPajama-627B and SlimPajama-6B
For sanity purposes, I calculated the byte proportions of the sampled version.
Data source: Commoncrawl, SlimPajama-627B: 52.2%, SlimPajama-6B: 54.1%
Data source: C4, SlimPajama-627B: 26.7%, SlimPajama-6B: 28.7%
Data source: GitHub, SlimPajama-627B: 5.2%, SlimPajama-6B: 4.2%
Data source: Books, SlimPajama-627B: 4.2%, SlimPajama-6B: 3.7%
Data source: ArXiv, SlimPajama-627B: 4.6%, SlimPajama-6B: 3.4%
Data source: Wikipedia, SlimPajama-627B: 3.8%, SlimPajama-6B: 3.1%
Data source: StackExchange, SlimPajama-627B: 3.3%, SlimPajama-6B: 2.8%
---
Please refer to the original dataset for other info.
|
[
"#### Data source proportions for SlimPajama-627B and SlimPajama-6B\n\n\nFor sanity purposes, I calculated the byte proportions of the sampled version.\n\n\nData source: Commoncrawl, SlimPajama-627B: 52.2%, SlimPajama-6B: 54.1%\nData source: C4, SlimPajama-627B: 26.7%, SlimPajama-6B: 28.7%\nData source: GitHub, SlimPajama-627B: 5.2%, SlimPajama-6B: 4.2%\nData source: Books, SlimPajama-627B: 4.2%, SlimPajama-6B: 3.7%\nData source: ArXiv, SlimPajama-627B: 4.6%, SlimPajama-6B: 3.4%\nData source: Wikipedia, SlimPajama-627B: 3.8%, SlimPajama-6B: 3.1%\nData source: StackExchange, SlimPajama-627B: 3.3%, SlimPajama-6B: 2.8%\n\n\n\n\n---\n\n\nPlease refer to the original dataset for other info."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us \n",
"#### Data source proportions for SlimPajama-627B and SlimPajama-6B\n\n\nFor sanity purposes, I calculated the byte proportions of the sampled version.\n\n\nData source: Commoncrawl, SlimPajama-627B: 52.2%, SlimPajama-6B: 54.1%\nData source: C4, SlimPajama-627B: 26.7%, SlimPajama-6B: 28.7%\nData source: GitHub, SlimPajama-627B: 5.2%, SlimPajama-6B: 4.2%\nData source: Books, SlimPajama-627B: 4.2%, SlimPajama-6B: 3.7%\nData source: ArXiv, SlimPajama-627B: 4.6%, SlimPajama-6B: 3.4%\nData source: Wikipedia, SlimPajama-627B: 3.8%, SlimPajama-6B: 3.1%\nData source: StackExchange, SlimPajama-627B: 3.3%, SlimPajama-6B: 2.8%"
] |
[
33,
226
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us \n#### Data source proportions for SlimPajama-627B and SlimPajama-6B\n\n\nFor sanity purposes, I calculated the byte proportions of the sampled version.\n\n\nData source: Commoncrawl, SlimPajama-627B: 52.2%, SlimPajama-6B: 54.1%\nData source: C4, SlimPajama-627B: 26.7%, SlimPajama-6B: 28.7%\nData source: GitHub, SlimPajama-627B: 5.2%, SlimPajama-6B: 4.2%\nData source: Books, SlimPajama-627B: 4.2%, SlimPajama-6B: 3.7%\nData source: ArXiv, SlimPajama-627B: 4.6%, SlimPajama-6B: 3.4%\nData source: Wikipedia, SlimPajama-627B: 3.8%, SlimPajama-6B: 3.1%\nData source: StackExchange, SlimPajama-627B: 3.3%, SlimPajama-6B: 2.8%\n\n\n\n\n---\n\n\nPlease refer to the original dataset for other info."
] |
f29be9f5383a8e47954684debd88894767b5c437
|
# Dataset of arashio/荒潮/荒潮 (Kantai Collection)
This is the dataset of arashio/荒潮/荒潮 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `brown_hair, long_hair, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 459.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 290.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1113 | 596.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 422.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1113 | 806.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arashio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/arashio_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, looking_at_viewer, cowboy_shot, solo, white_background, collarbone, simple_background, smile, small_breasts, blue_one-piece_swimsuit, school_swimsuit, blush, navel, standing |
| 1 | 21 |  |  |  |  |  | 1girl, long_sleeves, pinafore_dress, solo, white_shirt, black_dress, frilled_dress, looking_at_viewer, smile, black_pantyhose, white_background, simple_background, belt, cowboy_shot, school_uniform |
| 2 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, pinafore_dress, simple_background, smile, solo, upper_body, white_shirt, long_sleeves, white_background, school_uniform |
| 3 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, pleated_skirt, school_uniform, short_sleeves, simple_background, solo, white_shirt, arm_warmers, ass, bike_shorts, black_shorts, blush, open_mouth, shorts_under_skirt, smile, white_background, from_behind, looking_back, cowboy_shot, heart, lifted_by_self, panties, skirt_lift, suspender_skirt |
| 4 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, pleated_skirt, school_uniform, smile, solo, suspenders, arm_warmers, blush, short_sleeves, white_shirt, character_name, dated, open_mouth, simple_background, twitter_username |
| 5 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, navel, solo, underwear_only, small_breasts, smile, blush, collarbone, hair_between_eyes, lips, simple_background, white_background, blue_bra, side-tie_panties, wariza |
| 6 | 17 |  |  |  |  |  | 1girl, floral_print, solo, alternate_costume, alternate_hairstyle, obi, hair_flower, looking_at_viewer, simple_background, smile, wide_sleeves, hair_bun, white_background, blush, upper_body, long_sleeves, yukata |
| 7 | 20 |  |  |  |  |  | detached_collar, looking_at_viewer, playboy_bunny, rabbit_ears, 1girl, solo, strapless_leotard, wrist_cuffs, black_leotard, fake_animal_ears, simple_background, smile, white_background, black_pantyhose, brown_pantyhose, cowboy_shot, small_breasts, black_bowtie, covered_navel, medium_breasts, rabbit_tail, twitter_username, alternate_costume |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | cowboy_shot | solo | white_background | collarbone | simple_background | smile | small_breasts | blue_one-piece_swimsuit | school_swimsuit | blush | navel | standing | long_sleeves | pinafore_dress | white_shirt | black_dress | frilled_dress | black_pantyhose | belt | school_uniform | upper_body | pleated_skirt | short_sleeves | arm_warmers | ass | bike_shorts | black_shorts | open_mouth | shorts_under_skirt | from_behind | looking_back | heart | lifted_by_self | panties | skirt_lift | suspender_skirt | suspenders | character_name | dated | twitter_username | underwear_only | hair_between_eyes | lips | blue_bra | side-tie_panties | wariza | floral_print | alternate_costume | alternate_hairstyle | obi | hair_flower | wide_sleeves | hair_bun | yukata | detached_collar | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | black_leotard | fake_animal_ears | brown_pantyhose | black_bowtie | covered_navel | medium_breasts | rabbit_tail |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------------|:-------|:-------------------|:-------------|:--------------------|:--------|:----------------|:--------------------------|:------------------|:--------|:--------|:-----------|:---------------|:-----------------|:--------------|:--------------|:----------------|:------------------|:-------|:-----------------|:-------------|:----------------|:----------------|:--------------|:------|:--------------|:---------------|:-------------|:---------------------|:--------------|:---------------|:--------|:-----------------|:----------|:-------------|:------------------|:-------------|:-----------------|:--------|:-------------------|:-----------------|:--------------------|:-------|:-----------|:-------------------|:---------|:---------------|:--------------------|:----------------------|:------|:--------------|:---------------|:-----------|:---------|:------------------|:----------------|:--------------|:--------------------|:--------------|:----------------|:-------------------|:------------------|:---------------|:----------------|:-----------------|:--------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 21 |  |  |  |  |  | X | X | X | X | X | | X | X | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | | X | X | | X | X | | | | | | | X | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | | | | X | | | | | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | X | | | X | X | | | | X | | | | | X | | | | | X | | X | X | X | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | X | X | X | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 6 | 17 |  |  |  |  |  | X | X | | X | X | | X | X | | | | X | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 7 | 20 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/arashio_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T14:30:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T11:03:47+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of arashio/荒潮/荒潮 (Kantai Collection)
============================================
This is the dataset of arashio/荒潮/荒潮 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
115b8056eaa9285c8518c63056fcabf6e7fa5138
|
# MusicQA Dataset
This is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA).
|
mu-llama/MusicQA
|
[
"license:mit",
"region:us"
] |
2023-08-21T14:45:49+00:00
|
{"license": "mit"}
|
2023-09-13T13:45:00+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# MusicQA Dataset
This is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA).
|
[
"# MusicQA Dataset\n\nThis is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA)."
] |
[
"TAGS\n#license-mit #region-us \n",
"# MusicQA Dataset\n\nThis is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA)."
] |
[
11,
29
] |
[
"passage: TAGS\n#license-mit #region-us \n# MusicQA Dataset\n\nThis is the dataset used for training and testing the Music Understanding Large Language Model (MU-LLaMA)."
] |
bc66299de78accbbc41304798a5cf282c97544ba
|
# Dataset Card for "merge_new_para_detection_data_v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mHossain/merge_new_para_detection_data_v6
|
[
"region:us"
] |
2023-08-21T14:46:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 18268704.9, "num_examples": 108000}, {"name": "test", "num_bytes": 2029856.1, "num_examples": 12000}], "download_size": 9186455, "dataset_size": 20298561.0}}
|
2023-08-21T14:46:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "merge_new_para_detection_data_v6"
More Information needed
|
[
"# Dataset Card for \"merge_new_para_detection_data_v6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"merge_new_para_detection_data_v6\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"merge_new_para_detection_data_v6\"\n\nMore Information needed"
] |
2ceec5fd3ff6d2ea74b3c107d36d3588a9045e96
|
# Dataset Card for "buet_model_buet_test_data_paraphrase_detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mHossain/buet_model_buet_test_data_paraphrase_detection
|
[
"region:us"
] |
2023-08-21T14:47:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7576560.9, "num_examples": 36000}, {"name": "test", "num_bytes": 841840.1, "num_examples": 4000}], "download_size": 3715813, "dataset_size": 8418401.0}}
|
2023-08-21T14:47:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "buet_model_buet_test_data_paraphrase_detection"
More Information needed
|
[
"# Dataset Card for \"buet_model_buet_test_data_paraphrase_detection\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"buet_model_buet_test_data_paraphrase_detection\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"buet_model_buet_test_data_paraphrase_detection\"\n\nMore Information needed"
] |
ae2367ebbfd059bea173447b8625e2f1dc2b04d2
|
# Dataset Card for "low_all_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jing24/low_all_train
|
[
"region:us"
] |
2023-08-21T14:47:28+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 79656651, "num_examples": 87599}], "download_size": 14271933, "dataset_size": 79656651}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-24T20:25:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "low_all_train"
More Information needed
|
[
"# Dataset Card for \"low_all_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"low_all_train\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"low_all_train\"\n\nMore Information needed"
] |
04356c71fd90417ee59cffe16234be6643794849
|
# fquad_fr_prompt_context_generation_with_question
## Summary
**fquad_fr_prompt_context_generation_with_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **574,056** rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is offered in three forms: the infinitive, the informal imperative (tutoiement), and the formal imperative (vouvoiement).
```
'Étant donné la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte à la question "'+question+'" \nTexte : '
```
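The templates above are Python string expressions built around a `question` variable. As a minimal, hypothetical sketch of how the rows could be recreated (the function and variable names are assumptions, not taken from the DFP code), each template is applied to each base question:

```python
# Hypothetical sketch of prompt expansion; the real DFP pipeline may differ.
def build_rows(questions, templates):
    """Apply every prompt template to every base question (cross product)."""
    return [t.format(question=q) for q in questions for t in templates]

# Two of the 24 templates above, rewritten with a {question} placeholder:
templates = [
    'Étant donné la question "{question}", écrire un texte explicatif.\nTexte : ',
    'Rédiger un texte comme contexte à la question "{question}" \nTexte : ',
]
rows = build_rows(["Quand le FQuAD a-t-il été publié ?"], templates)
```

With the full 24-template list, this cross product is consistent with the 574,056 rows stated above, assuming every template is applied to each of 23,919 base examples.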
# Splits
- `train` with 497,544 samples
- `valid` with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_context_generation_with_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:48:07+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["fquad"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:19:52+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_context_generation_with_question
## Summary
fquad_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP).
It contains 574,056 rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is offered in three forms: the infinitive, the informal imperative (tutoiement), and the formal imperative (vouvoiement).
# Splits
- 'train' with 497,544 samples
- 'valid' with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_context_generation_with_question",
"## Summary\n\nfquad_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_context_generation_with_question",
"## Summary\n\nfquad_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
101,
20,
167,
5,
46,
30,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_context_generation_with_question## Summary\n\nfquad_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
317ce66a2a98fcea0040553267ba6caa12131bd2
|
# fquad_fr_prompt_question_generation_with_answer
## Summary
**fquad_fr_prompt_question_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **526,218** rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. Each instruction is offered in three forms: the infinitive, the informal imperative (tutoiement), and the formal imperative (vouvoiement).
```
'Quelle question donnerait la réponse suivante ? Réponse : "'+answer+'";\nQuestion :',
'Déterminer la question qui aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Détermine la question que tu aurais pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Déterminez la question que vous auriez pu poser pour obtenir la réponse suivante . \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Sachant la réponse suivante : "'+answer+'"\n Générer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Génère une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Générez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouves une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouvez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Crée trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créez trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrire une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecris une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrivez une bonne question : '
```
# Splits
- `train` with 456,082 samples
- `valid` with 70,136 samples
- no test split
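The split sizes above are consistent with each of the 22 prompts being applied to every underlying question–answer pair (an assumption, but the arithmetic checks out):

```python
# Sanity check: row counts should be exact multiples of the 22 prompts,
# assuming every prompt is applied to every base example.
num_prompts = 22
train_rows, valid_rows = 456_082, 70_136
base_train = train_rows // num_prompts   # 20,731 base training pairs
base_valid = valid_rows // num_prompts   # 3,188 base validation pairs
assert train_rows % num_prompts == 0
assert valid_rows % num_prompts == 0
assert train_rows + valid_rows == 526_218  # total given in the summary
```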
# How to use?
This repository doesn't contain any data.
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_question_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:52:07+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["fquad"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:20:22+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_question_generation_with_answer
## Summary
fquad_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 526,218 rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. Each instruction is offered in three forms: the infinitive, the informal imperative (tutoiement), and the formal imperative (vouvoiement).
# Splits
- 'train' with 456,082 samples
- 'valid' with 70,136 samples
- no test split
# How to use?
This repository doesn't contain any data.
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_question_generation_with_answer",
"## Summary\n\nfquad_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP)). \nIt contains 526,218 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 456,082 samples\n- 'valid' with 70,136 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_question_generation_with_answer",
"## Summary\n\nfquad_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP)). \nIt contains 526,218 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 456,082 samples\n- 'valid' with 70,136 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
101,
20,
167,
5,
46,
29,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_question_generation_with_answer## Summary\n\nfquad_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP)). \nIt contains 526,218 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 456,082 samples\n- 'valid' with 70,136 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
368a75061b06f3fa0436d2336622346c5868752a
|
# fquad_fr_prompt_question_generation_with_context
## Summary
**fquad_fr_prompt_question_generation_with_context** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **574,056** rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset [FQuAD]( https://huggingface.co/datasets/fquad) by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. Each instruction is offered in three forms: the infinitive, the informal imperative (tutoiement), and the formal imperative (vouvoiement).
```
'"'+context+'"\n Générer une question à partir du texte ci-dessus : ',
'"'+context+'"\n Génère une question à partir du texte ci-dessus : ',
'"'+context+'"\n Générez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouver une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouve une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouvez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Créer une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Crée trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Créez trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrire une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecris une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrivez une bonne question à partir du texte ci-dessus : ',
'Générer une bonne question pour le texte suivant : "'+context+'"',
'Génère une bonne question pour le texte suivant : "'+context+'"',
'Générez une bonne question pour le texte suivant : "'+context+'"',
'Trouver une bonne question pour le texte suivant : "'+context+'"',
'Trouve une bonne question pour le texte suivant : "'+context+'"',
'Trouvez trouver une bonne question pour le texte suivant : "'+context+'"',
'Créer une bonne question pour le texte suivant : "'+context+'"',
'Crée trouver une bonne question pour le texte suivant : "'+context+'"',
'Créez trouver une bonne question pour le texte suivant : "'+context+'"',
'Ecrire une bonne question pour le texte suivant : "'+context+'"',
'Ecris une bonne question pour le texte suivant : "'+context+'"',
'Ecrivez une bonne question pour le texte suivant : "'+context+'"'
```
# Splits
- `train` with 497,544 samples
- `valid` with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
# Citation
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
CATIE-AQ/fquad_fr_prompt_question_generation_with_context
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100k<n<1M",
"source_datasets:fquad",
"language:fr",
"license:cc-by-nc-sa-3.0",
"DFP",
"french prompts",
"arxiv:2002.06071",
"region:us"
] |
2023-08-21T14:52:26+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100k<n<1M"], "source_datasets": ["fquad"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:20:34+00:00
|
[
"2002.06071"
] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us
|
# fquad_fr_prompt_question_generation_with_context
## Summary
fquad_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP).
It contains 574,056 rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
As FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. They are phrased in the indicative mood and in both the tutoiement (informal) and vouvoiement (formal) forms of address.
# Splits
- 'train' with 497,544 samples
- 'valid' with 76,512 samples
- no test split
# How to use?
This repository doesn't contain any data.
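Since no data is distributed here, recreating the dataset means obtaining FQuAD under its own license and applying the prompt templates yourself. A minimal sketch of that recipe is below; note that `PROMPT` is a placeholder illustrating the shape of a template, not one of the 24 prompts actually used, and the field names follow the xP3 `inputs`/`targets` convention:

```python
# Sketch: turn locally obtained FQuAD-style examples into prompted rows.
# PROMPT is a placeholder, not one of the 24 real templates; substitute the
# actual prompts shared in the DFP repository when recreating the dataset.
PROMPT = 'Contexte : "{context}"\nÉcrire une question sur ce texte.\nQuestion : '

def to_xp3_rows(fquad_examples):
    """Map examples with 'context' and 'question' fields to xP3-style
    (inputs, targets) rows for question generation with context."""
    return [
        {"inputs": PROMPT.format(context=ex["context"]),
         "targets": ex["question"]}
        for ex in fquad_examples
    ]

rows = to_xp3_rows([{"context": "Paris est la capitale de la France.",
                     "question": "Quelle est la capitale de la France ?"}])
```

With all 24 templates, each FQuAD example expands into 24 prompted rows, which is how the 574,056-row total is reached.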
## Original data
> @ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 3.0
|
[
"# fquad_fr_prompt_question_generation_with_context",
"## Summary\n\nfquad_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n",
"# fquad_fr_prompt_question_generation_with_context",
"## Summary\n\nfquad_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split",
"# How to use?\nThis repository doesn't contain any data.",
"## Original data\n> @ARTICLE{2020arXiv200206071\n author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé},\n title = \"{FQuAD: French Question Answering Dataset}\",\n journal = {arXiv e-prints},\n keywords = {Computer Science - Computation and Language},\n year = \"2020\",\n month = \"Feb\",\n eid = {arXiv:2002.06071},\n pages = {arXiv:2002.06071},\narchivePrefix = {arXiv},\n eprint = {2002.06071},\n primaryClass = {cs.CL}\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 3.0"
] |
[
101,
20,
167,
5,
46,
30,
16,
158,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100k<n<1M #source_datasets-fquad #language-French #license-cc-by-nc-sa-3.0 #DFP #french prompts #arxiv-2002.06071 #region-us \n# fquad_fr_prompt_question_generation_with_context## Summary\n\nfquad_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP)). \nIt contains 574,056 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset FQuAD by d'Hoffschmidt et al. and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nAs FQuAD's license does not allow data to be shared, we simply share the prompts used, so that users can recreate the dataset themselves in the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 497,544 samples\n- 'valid' with 76,512 samples\n- no test split# How to use?\nThis repository doesn't contain any data."
] |
083f0f460fc8bc34e1ec45d6838e65c38a235cd1
|
# squad_v2_french_translated_fr_prompt_context_generation_with_answer
## Summary
**squad_v2_french_translated_fr_prompt_context_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **1,271,928** rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. They are phrased in the indicative mood and in both the tutoiement (informal) and vouvoiement (formal) forms of address.
```
'Étant donné la réponse "'+ answer+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
```
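The templates above can be expanded programmatically into the `inputs`/`targets` columns. A minimal sketch, assuming a list of `(answer, context)` pairs taken from the source dataset (the two templates shown are copied from the list above; the remaining 22 follow the same pattern, and the column names follow the xP3 convention):

```python
# Sketch: expand (answer, context) pairs into prompted rows, one per template.
templates = [
    'Étant donné la réponse "{answer}", écrire un texte explicatif.\nTexte : ',
    'Ecrire un texte comme contexte de la réponse "{answer}" \nTexte : ',
]

def build_rows(pairs):
    """Yield xP3-style (inputs, targets) rows: the prompt embeds the answer,
    and the target is the context to be generated."""
    for answer, context in pairs:
        for template in templates:
            yield {"inputs": template.format(answer=answer),
                   "targets": context}

rows = list(build_rows([("Paris", "Paris est la capitale de la France.")]))
print(len(rows))  # → 2, one row per (pair, template) combination
```

Applied with all 24 templates, this expansion accounts for the 1,271,928-row total.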
# Splits
- `train` with 1,271,928 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_answer")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T14:58:11+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:09:32+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_context_generation_with_answer
## Summary
squad_v2_french_translated_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 1,271,928 rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. They are phrased in the indicative mood and in both the tutoiement (informal) and vouvoiement (formal) forms of address.
# Splits
- 'train' with 1,271,928 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_context_generation_with_answer",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,271,928 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_context_generation_with_answer",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,271,928 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
98,
31,
172,
5,
46,
28,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_context_generation_with_answer## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 1,271,928 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}"
] |
84e72f311564f8db226aa86d2c228e7294da7bb0
|
# squad_v2_french_translated_fr_prompt_qa
## Summary
**squad_v2_french_translated_fr_prompt_qa** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **3,320,898** rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. They are phrased in the indicative mood and in both the tutoiement (informal) and vouvoiement (formal) forms of address.
```
# SQUAD 1.0 format
'Question : "'+question+'"\nContexte : "'+context+'" Réponse :',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Pouvez-vous me la dire ?',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Peux-tu me la dire ?',
'Extraire la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrais la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrayez la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondre à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Réponds à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondez à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Peux-tu l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Pouvez-vous l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Qu'elle est-elle ?""",
# SQUAD 2.0 format
'"'+question+'"\n Répondre à la question ci-dessus en se basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Réponds à la question ci-dessus en te basant sur le contexte suivant : "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Répondez à la question ci-dessus en vous basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utiliser le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utilise le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Utilisez le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le texte suivant et extraire la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le texte suivant et extrais la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le texte suivant et extrayez la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, réponds correctement à la question suivante : \n\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondez répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondre correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, réponds correctement à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondez correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extraire du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrais du passage la réponse à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrayez du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondre à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, réponds à la question qui suit : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondez à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondre à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, réponds à la question suivante : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondez à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Se référer au passage ci-dessous et répondre à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Référe-toi au passage ci-dessous et réponds à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Référez-vous au passage ci-dessous et répondez à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
```
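In the SQuAD 2.0 templates above, unanswerable questions also need a target: the prompts instruct the model to output "sans réponse" in that case. A minimal sketch of how one such row could be built (the template is the first SQuAD 2.0 prompt from the list; treating an empty answer as unanswerable is an assumption about the source format):

```python
def qa_row(question, context, answer):
    """Build one (inputs, targets) row using the first SQuAD 2.0 template
    above; an empty answer (unanswerable) maps to the 'sans réponse'
    target that the prompt requests."""
    prompt = (
        '"' + question + '"\n Répondre à la question ci-dessus en se basant '
        'sur le contexte suivant : "' + context + '"\n Si vous ne trouvez pas '
        'la réponse, répondre "sans réponse".'
    )
    return {"inputs": prompt,
            "targets": answer if answer else "sans réponse"}

row = qa_row("Quelle est la capitale de la France ?",
             "Paris est la capitale de la France.", "")
print(row["targets"])  # → sans réponse
```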
# Splits
- `train` with 3,320,898 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_qa")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_qa
|
[
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T14:58:43+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["question-answering"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:11:01+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_qa
## Summary
squad_v2_french_translated_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP).
It contains 3,320,898 rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. They are phrased in the indicative mood and in both the tutoiement (informal) and vouvoiement (formal) forms of address.
# Splits
- 'train' with 3,320,898 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_qa",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,320,898 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,320,898 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_qa",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,320,898 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,320,898 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
99,
22,
160,
5,
46,
29,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_qa## Summary\n\nsquad_v2_french_translated_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,320,898 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 3,320,898 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\napache-2.0"
] |
7677328f52fc2d4058b2174b088d5c3cd139ad95
|
# Habitat v0.3.x Benchmark Dataset
Assets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x.
## Setup
Clone this repo and symlink it as `data/hab3_bench_assets` in the habitat-lab directory.
Download the [Habitat-compatible YCB SceneDataset](https://huggingface.co/datasets/ai-habitat/ycb) and create a symbolic link in `data/objects/ycb`, or use the habitat-sim datasets_download script ([README](https://github.com/facebookresearch/habitat-sim/blob/main/DATASETS.md#ycb-benchmarks---object-and-model-set)).
## Contents:
- Scene Dataset: `hab3-hssd/` - the necessary configs and assets to load a subset of HSSD dataset into habitat-lab and utilize it for Hab3 rearrangement tasks.
- Episode Datasets: `episode_datasets` - a set of serialized RearrangeDataset files generated for the benchmark SceneDataset. See "Generating New Episodes" below for details.
- `hab3_bench_ep_gen_config.yaml` - config file for generating new RearrangeDataset files.
- Example Humanoid assets - URDF, skin meshes, motion files for one humanoid.
## Generating New Episodes:
The provided config `hab3_bench_ep_gen_config.yaml` is available for generating new hab3 benchmarking episodes. It defines the scene, objects, and generator configs (e.g. number of clutter objects).
The generator command should be run on a Habitat 3.0 compatible branch (e.g. SIRo) with the included assets from `fpss/fphab` commit `cd1549303d759abacbb377a8dd52c5f7af0d0e5a` as follows:
```
python -u habitat-lab/habitat/datasets/rearrange/run_episode_generator.py --config data/hab3_bench_assets/hab3_bench_ep_gen_config.yaml --run --verbose --num-episodes 10 --seed 0 --out data/hab3_bench_assets/episode_datasets/large_large.json.gz
```
Naming of the episode file `<scene_complexity>_<object_complexity>.json.gz` depends on the following parameters:
### Scene Complexity:
Currently we are testing on 3 differently sized scenes:
- `small`: 103997919_171031233 (area 35.92) - 1 bed, 1 bath
- `medium`: 108736635_177263256 (area 55.49) - 3 bed, 2 bath
- `large`: 102816009 (area 172.43) - 4 bed, 4 bath + den & office
One of these scene sets must be selected in the config before generation.
### Object Complexity:
Currently we are testing 3 clutter object size complexities:
- `small`: 2 objects
- `medium`: 5 objects
- `large`: 10 objects
One of these sampler params must be selected in the config before generation.
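Combining the two axes above yields nine possible episode files. A minimal sketch enumerating the expected filenames, assuming the `<scene_complexity>_<object_complexity>.json.gz` pattern described earlier:

```python
from itertools import product

# The three scene and three object complexity levels described above.
scene_complexities = ["small", "medium", "large"]
object_complexities = ["small", "medium", "large"]

# Each benchmark episode file follows the
# <scene_complexity>_<object_complexity>.json.gz naming pattern,
# giving nine possible combinations.
episode_files = [
    f"{scene}_{obj}.json.gz"
    for scene, obj in product(scene_complexities, object_complexities)
]
print(episode_files)
```

The `large_large.json.gz` name produced here matches the output path used in the generator command above.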
## License Notes:
HSSD assets and episodes are provided under cc-by-nc license as a subset of the dataset described here: https://3dlg-hcvc.github.io/hssd/
Example humanoid asset shapes are provided under cc-by-nc license and motions under [SMPL Body Motion File License ](https://smpl.is.tue.mpg.de/bodylicense.html) as a subset of https://huggingface.co/datasets/ai-habitat/habitat_humanoids
|
ai-habitat/hab3_bench_assets
|
[
"license:cc-by-nc-4.0",
"region:us"
] |
2023-08-21T14:58:43+00:00
|
{"license": "cc-by-nc-4.0", "viewer": false}
|
2023-10-18T16:56:38+00:00
|
[] |
[] |
TAGS
#license-cc-by-nc-4.0 #region-us
|
# Habitat v0.3.x Benchmark Dataset
Assets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x.
## Setup
Clone this repo and symlink it as 'data/hab3_bench_assets' in the habitat-lab directory.
Download the Habitat-compatible YCB SceneDataset and create a symbolic link in 'data/objects/ycb' or use the habitat-sim datasets_download script (README).
## Contents:
- Scene Dataset: 'hab3-hssd/' - the necessary configs and assets to load a subset of HSSD dataset into habitat-lab and utilize it for Hab3 rearrangement tasks.
- Episode Datasets: 'episode_datasets' - a set of serialized RearrangeDataset files generated for the benchmark SceneDataset. See "Generating New Episodes" below for details.
- 'hab3_bench_ep_gen_config.yaml' - config file for generating new RearrangeDataset files.
- Example Humanoid assets - URDF, skin meshes, motion files for one humanoid.
## Generating New Episodes:
The provided config 'hab3_bench_ep_gen_config.yaml' is available for generating new hab3 benchmarking episodes. It defines the scene, objects, and generator configs (e.g. number of clutter objects).
The generator command should be run on a Habitat 3.0 compatible branch (e.g. SIRo) with the included assets from 'fpss/fphab' commit 'cd1549303d759abacbb377a8dd52c5f7af0d0e5a' as follows:
Naming of the episode file '<scene_complexity>_<object_complexity>.URL' depends on the following parameters:
### Scene Complexity:
Currently we are testing on 3 differently sized scenes:
- 'small': 103997919_171031233 (area 35.92) - 1 bed, 1 bath
- 'medium': 108736635_177263256 (area 55.49) - 3 bed, 2 bath
- 'large': 102816009 (area 172.43) - 4 bed, 4 bath + den & office
One of these scene sets must be selected in the config before generation.
### Object Complexity:
Currently we are testing 3 clutter object size complexities:
- 'small': 2 objects
- 'medium': 5 objects
- 'large': 10 objects
One of these sampler params must be selected in the config before generation.
## License Notes:
HSSD assets and episodes are provided under cc-by-nc license as a subset of the dataset described here: URL
Example humanoid asset shapes are provided under cc-by-nc license and motions under SMPL Body Motion File License as a subset of URL
|
[
    "# Habitat v0.3.x Benchmark Dataset\n\nAssets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x.",
    "## Setup\n\nClone this repo and symlink it as 'data/hab3_bench_assets' in the habitat-lab directory.\nDownload the Habitat-compatible YCB SceneDataset and create a symbolic link in 'data/objects/ycb' or use the habitat-sim datasets_download script (README).",
"## Contents:\n\n- Scene Dataset: 'hab3-hssd/' - the necessary configs and assets to load a subset of HSSD dataset into habitat-lab and utilize it for Hab3 rearrangement tasks.\n\n- Episode Datasets: 'episode_datasets' - a set of serialized RearrangeDataset files generated for the benchmark SceneDataset. See \"Generating New Episodes\" below for details.\n\n- 'hab3_bench_ep_gen_config.yaml' - config file for generating new RearrangeDataset files.\n\n- Example Humanoid assets - URDF, skin meshes, motion files for one humanoid.",
    "## Generating New Episodes:\n\nThe provided config 'hab3_bench_ep_gen_config.yaml' is available for generating new hab3 benchmarking episodes. It defines the scene, objects, and generator configs (e.g. number of clutter objects).\n\nThe generator command should be run on a Habitat 3.0 compatible branch (e.g. SIRo) with the included assets from 'fpss/fphab' commit 'cd1549303d759abacbb377a8dd52c5f7af0d0e5a' as follows:\n\n\n\nNaming of the episode file '<scene_complexity>_<object_complexity>.URL' depends on the following parameters:",
"### Scene Complexity:\nCurrently we are testing on 3 differently sized scenes:\n- 'small': 103997919_171031233 (area 35.92) - 1 bed, 1 bath\n- 'medium': 108736635_177263256 (area 55.49) - 3 bed, 2 bath\n- 'large': 102816009 (area 172.43) 4 bed, 4 bath + den & office\n\nOne of these scene sets must be selected in the config before generation.",
"### Object Complexity:\nCurrently we are testing 3 clutter object size complexities:\n- 'small': 2 objects\n- 'medium': 5 objects\n- 'large': 10 objects\n\nOne of these sampler params must be selected in the config before generation.",
"## License Notes:\n\nHSSD assets and episodes are provided under cc-by-nc license as a subset of the dataset described here: URL\n\nExample humanoid asset shapes are provided under cc-by-nc license and motions under SMPL Body Motion File License as a subset of URL"
] |
[
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
    "# Habitat v0.3.x Benchmark Dataset\n\nAssets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x.",
    "## Setup\n\nClone this repo and symlink it as 'data/hab3_bench_assets' in the habitat-lab directory.\nDownload the Habitat-compatible YCB SceneDataset and create a symbolic link in 'data/objects/ycb' or use the habitat-sim datasets_download script (README).",
"## Contents:\n\n- Scene Dataset: 'hab3-hssd/' - the necessary configs and assets to load a subset of HSSD dataset into habitat-lab and utilize it for Hab3 rearrangement tasks.\n\n- Episode Datasets: 'episode_datasets' - a set of serialized RearrangeDataset files generated for the benchmark SceneDataset. See \"Generating New Episodes\" below for details.\n\n- 'hab3_bench_ep_gen_config.yaml' - config file for generating new RearrangeDataset files.\n\n- Example Humanoid assets - URDF, skin meshes, motion files for one humanoid.",
    "## Generating New Episodes:\n\nThe provided config 'hab3_bench_ep_gen_config.yaml' is available for generating new hab3 benchmarking episodes. It defines the scene, objects, and generator configs (e.g. number of clutter objects).\n\nThe generator command should be run on a Habitat 3.0 compatible branch (e.g. SIRo) with the included assets from 'fpss/fphab' commit 'cd1549303d759abacbb377a8dd52c5f7af0d0e5a' as follows:\n\n\n\nNaming of the episode file '<scene_complexity>_<object_complexity>.URL' depends on the following parameters:",
"### Scene Complexity:\nCurrently we are testing on 3 differently sized scenes:\n- 'small': 103997919_171031233 (area 35.92) - 1 bed, 1 bath\n- 'medium': 108736635_177263256 (area 55.49) - 3 bed, 2 bath\n- 'large': 102816009 (area 172.43) 4 bed, 4 bath + den & office\n\nOne of these scene sets must be selected in the config before generation.",
"### Object Complexity:\nCurrently we are testing 3 clutter object size complexities:\n- 'small': 2 objects\n- 'medium': 5 objects\n- 'large': 10 objects\n\nOne of these sampler params must be selected in the config before generation.",
"## License Notes:\n\nHSSD assets and episodes are provided under cc-by-nc license as a subset of the dataset described here: URL\n\nExample humanoid asset shapes are provided under cc-by-nc license and motions under SMPL Body Motion File License as a subset of URL"
] |
[
17,
37,
76,
152,
171,
110,
64,
67
] |
[
    "passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n# Habitat v0.3.x Benchmark Dataset\n\nAssets, configs, and episodes for reproducible benchmarking on Habitat v0.3.x.## Setup\n\nClone this repo and symlink it as 'data/hab3_bench_assets' in the habitat-lab directory.\nDownload the Habitat-compatible YCB SceneDataset and create a symbolic link in 'data/objects/ycb' or use the habitat-sim datasets_download script (README).## Contents:\n\n- Scene Dataset: 'hab3-hssd/' - the necessary configs and assets to load a subset of HSSD dataset into habitat-lab and utilize it for Hab3 rearrangement tasks.\n\n- Episode Datasets: 'episode_datasets' - a set of serialized RearrangeDataset files generated for the benchmark SceneDataset. See \"Generating New Episodes\" below for details.\n\n- 'hab3_bench_ep_gen_config.yaml' - config file for generating new RearrangeDataset files.\n\n- Example Humanoid assets - URDF, skin meshes, motion files for one humanoid.## Generating New Episodes:\n\nThe provided config 'hab3_bench_ep_gen_config.yaml' is available for generating new hab3 benchmarking episodes. It defines the scene, objects, and generator configs (e.g. number of clutter objects).\n\nThe generator command should be run on a Habitat 3.0 compatible branch (e.g. SIRo) with the included assets from 'fpss/fphab' commit 'cd1549303d759abacbb377a8dd52c5f7af0d0e5a' as follows:\n\n\n\nNaming of the episode file '<scene_complexity>_<object_complexity>.URL' depends on the following parameters:"
] |
7153297a02c66feba1af89e9f76fbf9c0e65e0d2
|
# squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question
## Summary
**squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **1,271,928** rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing prompts in the infinitive, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
```
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : '
```
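As an illustration of how these templates are applied, here is a minimal sketch of building one row's `inputs` and `targets` columns. The actual DFP generation script is not reproduced in this card, so `build_row` and its arguments are illustrative names; the template string itself is taken verbatim from the list above, and the original SQuAD context serves as the target.

```python
def build_row(answer: str, question: str, context: str) -> dict:
    # One of the 24 templates above (the "écrire" variant); the original
    # context paragraph becomes the generation target.
    prompt = (
        'Étant donné la réponse "' + answer + '" à la question "'
        + question + '", écrire un texte explicatif.\nTexte : '
    )
    return {"inputs": prompt, "targets": context}

row = build_row(
    answer="Paris",
    question="Quelle est la capitale de la France ?",
    context="Paris est la capitale de la France.",
)
print(row["inputs"])
```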
# Splits
- `train` with 442,752 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:08:08+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:08:36+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question
## Summary
squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP).
It contains 1,271,928 rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 442,752 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
98,
36,
179,
5,
46,
29,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,271,928 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL"
] |
898c713d2f4a9a25d2b3015cc3a024d8d2551d07
|
# squad_v2_french_translated_fr_prompt_context_generation_with_question
## Summary
**squad_v2_french_translated_fr_prompt_context_generation_with_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **3,795,312** rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing prompts in the infinitive, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
```
'Étant donné la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte à la question "'+question+'" \nTexte : '
```
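As with the other subsets, each row pairs a prompted question with the original context as target. A minimal sketch, with `build_row` as an illustrative name (the actual DFP generation script is not reproduced in this card) and the template string taken verbatim from the list above:

```python
def build_row(question: str, context: str) -> dict:
    # One of the 24 templates above (the "écrire" variant); for this subset
    # only the question appears in the prompt, and the original context
    # paragraph becomes the generation target.
    prompt = (
        'Étant donné la question "' + question
        + '", écrire un texte explicatif.\nTexte : '
    )
    return {"inputs": prompt, "targets": context}

row = build_row(
    question="Quelle est la capitale de la France ?",
    context="Paris est la capitale de la France.",
)
print(row["targets"])
```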
# Splits
- `train` with 3,795,312 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_question")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_context_generation_with_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:09:35+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:07:22+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_context_generation_with_question
## Summary
squad_v2_french_translated_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP).
It contains 3,795,312 rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 3,795,312 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_context_generation_with_question",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_context_generation_with_question",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
98,
31,
174,
5,
46,
30,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_context_generation_with_question## Summary\n\nsquad_v2_french_translated_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}"
] |
eec9485b45e6f32d97b35548d87c233cb3216b9e
|
# squad_v2_french_translated_fr_prompt_question_generation_with_answer
## Summary
**squad_v2_french_translated_fr_prompt_question_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **1,165,934** rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied to build the `input` and `target` columns, yielding the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists of proposing each prompt in three forms: the infinitive, the informal *tutoiement*, and the formal *vouvoiement*.
```
'Quelle question donnerait la réponse suivante ? Réponse : "'+answer+'";\nQuestion :',
'Déterminer la question qui aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Détermine la question que tu aurais pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Déterminez la question que vous auriez pu poser pour obtenir la réponse suivante . \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Sachant la réponse suivante : "'+answer+'"\n Générer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Génère une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Générez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouves une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouvez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Crée trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créez trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrire une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecris une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrivez une bonne question : '
```
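Concretely, each template above wraps a raw answer to form the `input` column, with the original question as the `target`. A minimal sketch of that construction (the function name and the use of `str.format` are illustrative assumptions, not the actual DFP build script):

```python
def apply_answer_prompts(answer: str, question: str) -> list[dict]:
    """Turn one (answer, question) pair into prompted rows: each template
    yields an `input`, and the question becomes the `target`.
    Only three of the 22 templates are reproduced here for brevity."""
    templates = [
        'Quelle question donnerait la réponse suivante ? Réponse : "{a}";\nQuestion :',
        'Sachant la réponse suivante : "{a}"\n Générer une bonne question : ',
        'Sachant la réponse suivante : "{a}"\n Générez une bonne question : ',
    ]
    return [{"input": t.format(a=answer), "target": question} for t in templates]

rows = apply_answer_prompts("Paris", "Quelle est la capitale de la France ?")
print(len(rows))          # 3: one row per template
print(rows[0]["input"])
```

The split size is consistent with this construction: 1,165,934 = 22 × 52,997, i.e. every underlying QA pair receiving all 22 prompts.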
# Splits
- `train` with 1,165,934 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_answer")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:09:47+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:11:13+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_question_generation_with_answer
## Summary
squad_v2_french_translated_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 1,165,934 rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 1,165,934 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_question_generation_with_answer",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,165,934 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,165,934 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_question_generation_with_answer",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,165,934 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 1,165,934 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
98,
31,
173,
5,
46,
29,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_question_generation_with_answer## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 1,165,934 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 1,165,934 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}"
] |
21aad95a8ae1bc823eff2f110518778783224ce4
|
# squad_v2_french_translated_fr_prompt_question_generation_with_context
## Summary
**squad_v2_french_translated_fr_prompt_question_generation_with_context** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **3,795,312** rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset [pragnakalp/squad_v2_french_translated](https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied to build the `input` and `target` columns, yielding the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing each prompt in three forms: the infinitive, the informal *tutoiement*, and the formal *vouvoiement*.
```
'"'+context+'"\n Générer une question à partir du texte ci-dessus : ',
'"'+context+'"\n Génère une question à partir du texte ci-dessus : ',
'"'+context+'"\n Générez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouver une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouve une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouvez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Créer une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Crée trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Créez trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrire une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecris une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrivez une bonne question à partir du texte ci-dessus : ',
'Générer une bonne question pour le texte suivant : "'+context+'"',
'Génère une bonne question pour le texte suivant : "'+context+'"',
'Générez une bonne question pour le texte suivant : "'+context+'"',
'Trouver une bonne question pour le texte suivant : "'+context+'"',
'Trouve une bonne question pour le texte suivant : "'+context+'"',
'Trouvez trouver une bonne question pour le texte suivant : "'+context+'"',
'Créer une bonne question pour le texte suivant : "'+context+'"',
'Crée trouver une bonne question pour le texte suivant : "'+context+'"',
'Créez trouver une bonne question pour le texte suivant : "'+context+'"',
'Ecrire une bonne question pour le texte suivant : "'+context+'"',
'Ecris une bonne question pour le texte suivant : "'+context+'"',
'Ecrivez une bonne question pour le texte suivant : "'+context+'"'
```
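Symmetrically to the previous subset, each template above wraps a context to form the `input`, with the question as the `target`. A minimal sketch (illustrative names, not the actual DFP build script):

```python
def apply_context_prompts(context: str, question: str) -> list[dict]:
    """Build prompted (input, target) rows from one (context, question) pair.
    Only two of the 24 templates are reproduced here for brevity."""
    templates = [
        '"{c}"\n Générer une question à partir du texte ci-dessus : ',
        'Générer une bonne question pour le texte suivant : "{c}"',
    ]
    return [{"input": t.format(c=context), "target": question} for t in templates]

sample = apply_context_prompts("La Tour Eiffel a été construite en 1889.",
                               "Quand la Tour Eiffel a-t-elle été construite ?")
print(sample[1]["input"])
```

Again the row count is consistent with every underlying pair receiving all templates: 3,795,312 = 24 × 158,138.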
# Splits
- `train` with 3,795,312 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_context")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
CATIE-AQ/squad_v2_french_translated_fr_prompt_question_generation_with_context
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:squad_v2_french_translated",
"language:fr",
"license:apache-2.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:09:59+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["squad_v2_french_translated"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:08:57+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us
|
# squad_v2_french_translated_fr_prompt_question_generation_with_context
## Summary
squad_v2_french_translated_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP).
It contains 3,795,312 rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 3,795,312 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
apache-2.0
|
[
"# squad_v2_french_translated_fr_prompt_question_generation_with_context",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n",
"# squad_v2_french_translated_fr_prompt_question_generation_with_context",
"## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\napache-2.0"
] |
[
98,
31,
174,
5,
46,
30,
5,
12,
106,
6
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-squad_v2_french_translated #language-French #license-apache-2.0 #DFP #french prompts #region-us \n# squad_v2_french_translated_fr_prompt_question_generation_with_context## Summary\n\nsquad_v2_french_translated_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 3,795,312 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset pragnakalp/squad_v2_french_translated and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 3,795,312 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}"
] |
92b3dce2d13db90db3bd72f881d46b11eec80d57
|
# Dataset Card for "refuse-to-answer-prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
notrichardren/refuse-to-answer-prompts
|
[
"region:us"
] |
2023-08-21T15:25:09+00:00
|
{"dataset_info": {"features": [{"name": "dataset", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "ind", "dtype": "int64"}, {"name": "qa_type", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 192628, "num_examples": 1353}], "download_size": 74990, "dataset_size": 192628}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-24T20:18:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "refuse-to-answer-prompts"
More Information needed
|
[
"# Dataset Card for \"refuse-to-answer-prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"refuse-to-answer-prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"refuse-to-answer-prompts\"\n\nMore Information needed"
] |
6bc07191435a35500fcf14839acf5ab74e18eefe
|
# Dataset Card for "bincorp-26m-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
PurCL/bincorp-26m-all
|
[
"region:us"
] |
2023-08-21T15:25:16+00:00
|
{"viewer": true, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "code", "dtype": "string"}, {"name": "data_dep", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39826202125.70429, "num_examples": 14019961}, {"name": "test", "num_bytes": 11713589027.6, "num_examples": 4123518}, {"name": "valid", "num_bytes": 7028153984.695704, "num_examples": 2474111}], "download_size": 19420221346, "dataset_size": 58567945137.99999}}
|
2023-08-22T19:07:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bincorp-26m-all"
More Information needed
|
[
"# Dataset Card for \"bincorp-26m-all\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bincorp-26m-all\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bincorp-26m-all\"\n\nMore Information needed"
] |
be19f3831fda763ea89ba3d917eb58fcccf77d0e
|
# piaf_fr_prompt_qa
## Summary
**piaf_fr_prompt_qa** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **387,408** rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied to build the `input` and `target` columns, yielding the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. The logic applied consists of proposing each prompt in three forms: the infinitive, the informal *tutoiement*, and the formal *vouvoiement*.
```
# SQUAD 1.0 format
'Question : "'+question+'"\nContexte : "'+context+'" Réponse :',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Pouvez-vous me la dire ?',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Peux-tu me la dire ?',
'Extraire la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrais la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrayez la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondre à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Réponds à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondez à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Peux-tu l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Pouvez-vous l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Qu'elle est-elle ?""",
# SQUAD 2.0 format
'"'+question+'"\n Répondre à la question ci-dessus en se basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Réponds à la question ci-dessus en te basant sur le contexte suivant : "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Répondez à la question ci-dessus en vous basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utiliser le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utilise le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Utilisez le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le texte suivant et extraire la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le texte suivant et extrais la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le texte suivant et extrayez la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, réponds correctement à la question suivante : \n\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondez répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondre correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, réponds correctement à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondez correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extraire du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrais du passage la réponse à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrayez du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondre à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, réponds à la question qui suit : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondez à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondre à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, réponds à la question suivante : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondez à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Se référer au passage ci-dessous et répondre à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Référe-toi au passage ci-dessous et réponds à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Référez-vous au passage ci-dessous et répondez à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
```
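The SQUAD 2.0 prompts above make the unanswerable case explicit by instructing the model to answer "sans réponse". A hedged sketch of the resulting row construction (the fallback logic is inferred from the prompt wording; the actual DFP build script is not shown in this card):

```python
from typing import Optional

def build_qa_row(question: str, context: str, answer: Optional[str] = None) -> dict:
    """One QA row in SQUAD 2.0 style: an unanswerable question maps to the
    literal target "sans réponse", as the prompts above instruct."""
    prompt = (
        '"' + question + '"\n Répondre à la question ci-dessus en se basant '
        'sur le contexte suivant : "' + context + '"\n Si vous ne trouvez pas '
        'la réponse, répondre "sans réponse".'
    )
    return {"input": prompt, "target": answer if answer else "sans réponse"}

row = build_qa_row("Qui a érigé la tour ?",
                   "La tour fut érigée par Gustave Eiffel.")
print(row["target"])   # sans réponse
```

Here too the split size is consistent with every underlying QA pair receiving all prompts: 387,408 = 42 × 9,224.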
# Splits
- `train` with 387,408 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_qa")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_qa
|
[
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:32:18+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["question-answering"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:16:54+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_qa
## Summary
piaf_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP).
It contains 387,408 rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. The logic applied consists of offering each prompt in three forms: the infinitive, tutoiement (informal "tu"), and vouvoiement (formal "vous").
# Splits
- 'train' with 387,408 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_qa",
"## Summary\n\npiaf_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_qa",
"## Summary\n\npiaf_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
90,
11,
133,
5,
46,
28,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_qa## Summary\n\npiaf_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split# How to use?## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}"
] |
a0cff0279c49c0754cc4aa0fc730b55441900486
|
# piaf_fr_prompt_context_generation_with_answer
## Summary
**piaf_fr_prompt_context_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **442,752** rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of offering each prompt in three forms: the infinitive, tutoiement (informal "tu"), and vouvoiement (formal "vous").
```
'Étant donné la réponse "'+ answer+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
```
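The 24 templates above follow a regular pattern: four verbs (écrire, rédiger, générer, créer), each in infinitive, "tu", and "vous" form, across two sentence frames. They can therefore be generated rather than typed out; a sketch, with a made-up `answer` value:

```python
answer = "en mai 2020"  # hypothetical answer string; real values come from the dataset rows

# Verb forms used by the templates: infinitive, "tu" form, "vous" form, for four verbs.
etant_donne = ["écrire", "écris", "écrivez", "rédiger", "rédige", "rédigez",
               "générer", "génère", "générez", "créer", "crée", "créez"]
comme_contexte = ["Ecrire", "Ecris", "Ecrivez", "Rédiger", "Rédige", "Rédigez",
                  "Générer", "Génère", "Générez", "Créer", "Crée", "Créez"]

# Frame 1: "Étant donné la réponse ..., <verbe> un texte explicatif."
prompts = ['Étant donné la réponse "' + answer + '", ' + v + ' un texte explicatif.\nTexte : '
           for v in etant_donne]
# Frame 2: "<Verbe> un texte comme contexte de la réponse ..."
prompts += [v + ' un texte comme contexte de la réponse "' + answer + '" \nTexte : '
            for v in comme_contexte]

print(len(prompts))  # 24, matching the count stated in the card
```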
# Splits
- `train` with 442,752 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_context_generation_with_answer")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_context_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:34:32+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:12:59+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_context_generation_with_answer
## Summary
piaf_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 442,752 rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of offering each prompt in three forms: the infinitive, tutoiement (informal "tu"), and vouvoiement (formal "vous").
# Splits
- 'train' with 442,752 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_context_generation_with_answer",
"## Summary\n\npiaf_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_context_generation_with_answer",
"## Summary\n\npiaf_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
89,
20,
147,
5,
46,
29,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_context_generation_with_answer## Summary\n\npiaf_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split# How to use?"
] |
f310ca7db8bd9ab9aff3431340f5302db93e5970
|
# piaf_fr_prompt_context_generation_with_answer_and_question
## Summary
**piaf_fr_prompt_context_generation_with_answer_and_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **442,752** rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of offering each prompt in three forms: the infinitive, tutoiement (informal "tu"), and vouvoiement (formal "vous").
```
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'" à la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" à la question "'+question+'" \nTexte : '
```
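As with the previous card, the 24 templates vary only in the verb; they can be reconstructed from hypothetical `answer` and `question` values (both made up here for illustration):

```python
answer = "en mai 2020"                          # hypothetical values; real rows
question = "Quand PIAF a-t-il été publié ?"     # supply these fields

head = 'Étant donné la réponse "' + answer + '" à la question "' + question + '", '
tail = (' un texte comme contexte de la réponse "' + answer
        + '" à la question "' + question + '" \nTexte : ')

verbes = ["écrire", "écris", "écrivez", "rédiger", "rédige", "rédigez",
          "générer", "génère", "générez", "créer", "crée", "créez"]
majuscules = ["Ecrire", "Ecris", "Ecrivez", "Rédiger", "Rédige", "Rédigez",
              "Générer", "Génère", "Générez", "Créer", "Crée", "Créez"]

# Frame 1: "Étant donné la réponse ... à la question ..., <verbe> un texte explicatif."
prompts = [head + v + ' un texte explicatif.\nTexte : ' for v in verbes]
# Frame 2: "<Verbe> un texte comme contexte de la réponse ... à la question ..."
prompts += [v + tail for v in majuscules]

print(len(prompts))  # 24
```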
# Splits
- `train` with 442,752 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_context_generation_with_answer_and_question")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_context_generation_with_answer_and_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:35:32+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:13:13+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_context_generation_with_answer_and_question
## Summary
piaf_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP).
It contains 442,752 rows that can be used for a context-generation (with answer and question) task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of offering each prompt in three forms: the infinitive, tutoiement (informal "tu"), and vouvoiement (formal "vous").
# Splits
- 'train' with 442,752 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\npiaf_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\npiaf_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
89,
25,
154,
5,
46,
29,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_context_generation_with_answer_and_question## Summary\n\npiaf_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with answer and question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split# How to use?"
] |
fbbb86b239fb1c4059d9a54f5bc3feb39d3b2d46
|
# piaf_fr_prompt_context_generation_with_question
## Summary
**piaf_fr_prompt_context_generation_with_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **442,752** rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing each prompt in the infinitive, in the tutoiement (informal) form and in the vouvoiement (formal) form.
```
'Étant donné la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte à la question "'+question+'" \nTexte : '
```
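As an illustrative sketch (not part of the original card), the template logic above can be reproduced in a few lines: each prompt template is filled with a row's question, and the row's context becomes the generation target, matching the xP3 input/target format. The column names and the use of `str.format` placeholders are assumptions, and only 3 of the 24 templates are shown.

```python
# Sketch (assumed, not from the card): expanding one PIAF-style QA row into
# (inputs, targets) examples by filling each prompt template with the question.
templates = [
    'Étant donné la question "{q}", écrire un texte explicatif.\nTexte : ',
    'Étant donné la question "{q}", rédigez un texte explicatif.\nTexte : ',
    'Générer un texte comme contexte à la question "{q}" \nTexte : ',
]

def expand_row(question: str, context: str) -> list:
    """Return one {inputs, targets} example per template for a single QA row."""
    return [{"inputs": t.format(q=question), "targets": context} for t in templates]

examples = expand_row(
    "Où se situe Marseille ?",
    "Marseille est une ville portuaire du sud de la France.",
)
for ex in examples:
    print(ex["inputs"])
```

Applied to every row of the source corpus, this per-row expansion is what multiplies the original QA pairs into the row counts reported in the card.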
# Splits
- `train` with 442,752 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_context_generation_with_question")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_context_generation_with_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:39:36+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:16:13+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_context_generation_with_question
## Summary
piaf_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP).
It contains 442,752 rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 442,752 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_context_generation_with_question",
"## Summary\n\npiaf_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_context_generation_with_question",
"## Summary\n\npiaf_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
89,
20,
149,
5,
46,
29,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_context_generation_with_question## Summary\n\npiaf_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split# How to use?"
] |
40e78ffdc8de863bbcd93420b8cb38e76ebece79
|
# piaf_fr_prompt_question_generation_with_answer
## Summary
**piaf_fr_prompt_question_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **387,408** rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists of proposing each prompt in the infinitive, in the tutoiement (informal) form and in the vouvoiement (formal) form.
```
'Quelle question donnerait la réponse suivante ? Réponse : "'+answer+'";\nQuestion :',
'Déterminer la question qui aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Détermine la question que tu aurais pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Déterminez la question que vous auriez pu poser pour obtenir la réponse suivante . \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Sachant la réponse suivante : "'+answer+'"\n Générer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Génère une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Générez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouves une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouvez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Crée trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créez trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrire une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecris une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrivez une bonne question : '
```
# Splits
- `train` with 387,408 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_question_generation_with_answer")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_question_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:40:20+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:16:23+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_question_generation_with_answer
## Summary
piaf_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 387,408 rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 387,408 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_question_generation_with_answer",
"## Summary\n\npiaf_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_question_generation_with_answer",
"## Summary\n\npiaf_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
89,
20,
146,
5,
46,
28,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_question_generation_with_answer## Summary\n\npiaf_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 387,408 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 387,408 samples\n- no 'valid' split\n- no 'test' split# How to use?"
] |
ae336411f6e26096151d638cca028ac50ce63558
|
# piaf_fr_prompt_question_generation_with_context
## Summary
**piaf_fr_prompt_question_generation_with_context** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **442,752** rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset [PIAF](https://huggingface.co/datasets/etalab-ia/piaf) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing each prompt in the infinitive, in the tutoiement (informal) form and in the vouvoiement (formal) form.
```
'"'+context+'"\n Générer une question à partir du texte ci-dessus : ',
'"'+context+'"\n Génère une question à partir du texte ci-dessus : ',
'"'+context+'"\n Générez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouver une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouve une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouvez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Créer une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Crée trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Créez trouver une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrire une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecris une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrivez une bonne question à partir du texte ci-dessus : ',
'Générer une bonne question pour le texte suivant : "'+context+'"',
'Génère une bonne question pour le texte suivant : "'+context+'"',
'Générez une bonne question pour le texte suivant : "'+context+'"',
'Trouver une bonne question pour le texte suivant : "'+context+'"',
'Trouve une bonne question pour le texte suivant : "'+context+'"',
'Trouvez trouver une bonne question pour le texte suivant : "'+context+'"',
'Créer une bonne question pour le texte suivant : "'+context+'"',
'Crée trouver une bonne question pour le texte suivant : "'+context+'"',
'Créez trouver une bonne question pour le texte suivant : "'+context+'"',
'Ecrire une bonne question pour le texte suivant : "'+context+'"',
'Ecris une bonne question pour le texte suivant : "'+context+'"',
'Ecrivez une bonne question pour le texte suivant : "'+context+'"'
```
# Splits
- `train` with 442,752 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/piaf_fr_prompt_question_generation_with_context")
```
# Citation
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
CATIE-AQ/piaf_fr_prompt_question_generation_with_context
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:etalab-ia/piaf",
"language:fr",
"license:mit",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:41:02+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["etalab-ia/piaf"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:17:02+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us
|
# piaf_fr_prompt_question_generation_with_context
## Summary
piaf_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP).
It contains 442,752 rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 442,752 samples
- no 'valid' split
- no 'test' split
# How to use?
## Original data
> @InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
url = {URL
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
MIT
|
[
"# piaf_fr_prompt_question_generation_with_context",
"## Summary\n\npiaf_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n",
"# piaf_fr_prompt_question_generation_with_context",
"## Summary\n\npiaf_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split",
"# How to use?",
"## Original data\n> @InProceedings{keraron-EtAl:2020:LREC,\n author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},\n title = {Project PIAF: Building a Native French Question-Answering Dataset},\n booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},\n month = {May},\n year = {2020},\n address = {Marseille, France},\n publisher = {European Language Resources Association},\n pages = {5483--5492},\n url = {URL\n}",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nMIT"
] |
[
89,
20,
147,
5,
46,
29,
5,
178,
106,
3
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-etalab-ia/piaf #language-French #license-mit #DFP #french prompts #region-us \n# piaf_fr_prompt_question_generation_with_context## Summary\n\npiaf_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 442,752 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset PIAF and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 442,752 samples\n- no 'valid' split\n- no 'test' split# How to use?"
] |
73024a72234312784575c7b30c54e7230e84b82b
|
# newsquadfr_fr_prompt_qa
## Summary
**newsquadfr_fr_prompt_qa** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **88,410** rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset [newsquadfr]( https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
```
# SQUAD 1.0 format
'Question : "'+question+'"\nContexte : "'+context+'" Réponse :',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Pouvez-vous me la dire ?',
'La réponse à la question "'+question+'" se trouve dans "'+context+'" Peux-tu me la dire ?',
'Extraire la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrais la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Extrayez la réponse à la question à partir du contexte suivant.\n Question : "'+question+'" Contexte : "'+context+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondre à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Réponds à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
'Étant donné le passage suivant : "'+context+'"\n Répondez à la question suivante sachant que la réponse est présente dans le texte.\n Question : "'+question+'"',
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Peux-tu l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Pouvez-vous l'indiquer ?""",
"""La réponse à la question : " """+question+""" " se trouve dans le texte : " """+context+""" "\n Qu'elle est-elle ?""",
# SQUAD 2.0 format
'"'+question+'"\n Répondre à la question ci-dessus en se basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Réponds à la question ci-dessus en te basant sur le contexte suivant : "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+question+'"\n Répondez à la question ci-dessus en vous basant sur le contexte suivant : "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utiliser le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Utilise le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Utilisez le texte suivant pour répondre à la question : '+question+ '\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le texte suivant et extraire la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le texte suivant et extrais la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le texte suivant et extrayez la réponse à la question : "'+question+'"\n\n "'+context+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, réponds correctement à la question suivante : \n\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n\nSur la base du texte ci-dessus, répondez répondre correctement à la question suivante : \n\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondre correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, réponds correctement à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Contexte : '+ context +'\n Compte tenu du texte ci-dessus, répondez correctement à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extraire du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrais du passage la réponse à la question suivante : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'"'+context+'"\n Extrayez du passage la réponse à la question suivante : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondre à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, réponds à la question qui suit : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Compte tenu du passage suivant, répondez à la question qui suit : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondre à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, réponds à la question suivante : "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Après avoir lu le paragraphe, répondez à la question suivante : "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Se référer au passage ci-dessous et répondre à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Référe-toi au passage ci-dessous et réponds à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Référez-vous au passage ci-dessous et répondez à la question suivante:\n Passage : "'+context+'"Question : "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lire le passage suivant et répondre à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
'Lis le passage suivant et réponds à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si tu ne trouves pas la réponse, répondre "sans réponse".',
'Lisez le passage suivant et répondez à la question qui suit : \n "'+context+'"\n "'+question+'"\n Si vous ne trouvez pas la réponse, répondre "sans réponse".',
```
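The templates above are plain Python string concatenations. As a minimal sketch of how one SQUAD 1.0-style template turns a QA row into a training pair (the column names `inputs`/`targets` follow the xP3 convention and are an assumption, not confirmed by this card):

```python
# Hedged sketch: apply one prompt template from the list above to a QA row.
# The "inputs"/"targets" column names are assumed (xP3 convention).
def build_example(question: str, context: str, answer: str) -> dict:
    prompt = 'Question : "' + question + '"\nContexte : "' + context + '" Réponse :'
    return {"inputs": prompt, "targets": answer}

example = build_example(
    "Quelle est la capitale de la France ?",
    "Paris est la capitale de la France.",
    "Paris",
)
print(example["inputs"])
print(example["targets"])
```

In the actual dataset, each source row is expanded once per template, which is how 42 prompts over the base QA pairs yield the 88,410 rows reported above.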
# Splits
- `train` with 69,300 samples
- `valid` with 19,110 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_qa")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_qa
|
[
"task_categories:question-answering",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:45:06+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["newsquadfr"], "task_categories": ["question-answering"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:11:58+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_qa
## Summary
newsquadfr_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP).
It contains 88,410 rows that can be used for a question-answering task.
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 69,300 samples
- 'valid' with 19,110 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_qa",
"## Summary\n\nnewsquadfr_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 88,410 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 69,300 samples\n- 'valid' with 19,110 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_qa",
"## Summary\n\nnewsquadfr_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 88,410 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 69,300 samples\n- 'valid' with 19,110 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
94,
12,
135,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_qa## Summary\n\nnewsquadfr_fr_prompt_qa is a subset of the Dataset of French Prompts (DFP). \nIt contains 88,410 rows that can be used for a question-answering task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n42 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 69,300 samples\n- 'valid' with 19,110 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
4b795fc801c84a4c8a7866c190fa5c6a7e40df3c
|
# newsquadfr_fr_prompt_context_generation_with_answer
## Summary
**newsquadfr_fr_prompt_context_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **101,040** rows that can be used for a context-generation (with answer) task. 
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
```
'Étant donné la réponse "'+ answer+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écris un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", génère un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", générez un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créer un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", crée un texte explicatif.\nTexte : ',
'Étant donné la réponse "'+ answer+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecris un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Ecrivez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédiger un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédige un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Rédigez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Génère un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Générez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créer un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Crée un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
'Créez un texte comme contexte de la réponse "'+ answer+'" \nTexte : ',
```
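For this task the input is built from the answer alone, and the original context serves as the generation target. A minimal sketch (the `inputs`/`targets` column names are assumed from the xP3 convention, not stated by this card):

```python
# Hedged sketch: one of the 24 answer->context templates above,
# with the original context as the generation target.
# "inputs"/"targets" column names are assumed (xP3 convention).
def build_context_generation_example(answer: str, context: str) -> dict:
    prompt = ('Étant donné la réponse "' + answer
              + '", écrire un texte explicatif.\nTexte : ')
    return {"inputs": prompt, "targets": context}

ex = build_context_generation_example(
    "Paris", "Paris est la capitale de la France.",
)
print(ex["inputs"])
```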
# Splits
- `train` with 79,200 samples
- `valid` with 21,800 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:49:36+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["newsquadfr"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:11:47+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_context_generation_with_answer
## Summary
newsquadfr_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 101,040 rows that can be used for a context-generation (with answer) task. 
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 79,200 samples
- 'valid' with 21,800 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_context_generation_with_answer",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_context_generation_with_answer",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
93,
21,
148,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_context_generation_with_answer## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
20878ec11fe57ff9aac0b379d4a43c2b82fd90dc
|
NAME : ALEX RODRIGUEZ - DIPLOMADO 2023
Objective : Apply deep learning techniques to solve an image classification problem
The train and val folders were created; inside, 2 folders were created: women's footwear (CALZADOMUJER) and men's footwear
(CALZADOHOMBRE)
|
diplomado2023/mini-croupier
|
[
"license:apache-2.0",
"region:us"
] |
2023-08-21T15:49:39+00:00
|
{"license": "apache-2.0"}
|
2023-08-21T16:58:24+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
NAME : ALEX RODRIGUEZ - DIPLOMADO 2023
Objective : Apply deep learning techniques to solve an image classification problem
The train and val folders were created; inside, 2 folders were created: women's footwear (CALZADOMUJER) and men's footwear
(CALZADOHOMBRE)
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
705e2894fd0578b3f33cca54bc80bc796ef7d8a9
|
# Dataset Card for "refuse-to-answer-statements"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
notrichardren/unlabelled-statements
|
[
"region:us"
] |
2023-08-21T15:49:56+00:00
|
{"dataset_info": {"features": [{"name": "Question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113287.0, "num_examples": 1353}], "download_size": 62189, "dataset_size": 113287.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-21T15:49:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "refuse-to-answer-statements"
More Information needed
|
[
"# Dataset Card for \"refuse-to-answer-statements\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"refuse-to-answer-statements\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"refuse-to-answer-statements\"\n\nMore Information needed"
] |
97eefbad2f029c04f1063b5815c7368ee46370af
|
# newsquadfr_fr_prompt_context_generation_with_answer_and_question
## Summary
**newsquadfr_fr_prompt_context_generation_with_answer_and_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **101,040** rows that can be used for a context-generation (with answer) task. 
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
21 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
```
'Déterminer la question qui aurait pu être posée pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Détermine la question que tu aurais pu poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Déterminez la question que vous auriez pu poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question peut être posée pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question peux-tu poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Quelle question pouvez-vous poser pour obtenir la réponse suivante dans le contexte donné. \n Contexte : "'+context+'";\n Réponse : "'+answer+'";\n Question :',
'Sachant la réponse suivante : "'+answer+'"\n Générer une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Génère une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Générez une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Trouver une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Trouves une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Trouvez une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Créer une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Crée trouver une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Créez trouver une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Ecrire une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Ecris une bonne question pour le texte suivant : "'+context+'"',
'Sachant la réponse suivante : "'+answer+'"\n Ecrivez une bonne question pour le texte suivant : "'+context+'"'
```
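To make the prompting scheme concrete, here is a minimal sketch of how one of the templates above can be applied to a (context, answer, question) triple to produce an xP3-style input/target pair. This is an illustration only, not the official DFP build script; the column names `inputs`/`targets` and the example triple are assumptions.

```python
# Illustrative sketch (not the official DFP build script): apply one of the
# templates above to a (context, answer, question) triple to obtain an
# xP3-style input/target pair. Column names are assumptions.
def build_row(context: str, answer: str, question: str) -> dict:
    prompt = (
        'Déterminer la question qui aurait pu être posée pour obtenir la '
        'réponse suivante dans le contexte donné. \n Contexte : "' + context
        + '";\n Réponse : "' + answer + '";\n Question :'
    )
    return {"inputs": prompt, "targets": question}

row = build_row(
    context="Le CATIE est un centre de ressources technologiques situé à Talence.",
    answer="à Talence",
    question="Où est situé le CATIE ?",
)
print(row["targets"])  # → Où est situé le CATIE ?
```

Each of the 21 templates is applied the same way, which is how the prompt list multiplies the original QA rows into the 101,040 rows of this subset.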
# Splits
- `train` with 79,200 samples
- `valid` with 21,800 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer_and_question")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_answer_and_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:50:30+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["newsquadfr"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:06:43+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_context_generation_with_answer_and_question
## Summary
newsquadfr_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP).
It contains 101,040 rows that can be used for a context-generation (with answer) task.
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
21 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
# Splits
- 'train' with 79,200 samples
- 'valid' with 21,800 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n21 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_context_generation_with_answer_and_question",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n21 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
93,
26,
153,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_context_generation_with_answer_and_question## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_answer_and_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with answer)task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n21 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
beed497c5dc20945d36f45b826ebb898cf66218d
|
# newsquadfr_fr_prompt_context_generation_with_question
## Summary
**newsquadfr_fr_prompt_context_generation_with_question** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **101,040** rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
```
'Étant donné la question "'+question+'", écrire un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écris un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", écrivez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédiger un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédige un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", rédigez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", génère un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", générez un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créer un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", crée un texte explicatif.\nTexte : ',
'Étant donné la question "'+question+'", créez un texte explicatif.\nTexte : ',
'Ecrire un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecris un texte comme contexte à la question "'+question+'" \nTexte : ',
'Ecrivez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédiger un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédige un texte comme contexte à la question "'+question+'" \nTexte : ',
'Rédigez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Génère un texte comme contexte à la question "'+question+'" \nTexte : ',
'Générez un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créer un texte comme contexte à la question "'+question+'" \nTexte : ',
'Crée un texte comme contexte à la question "'+question+'" \nTexte : ',
'Créez un texte comme contexte à la question "'+question+'" \nTexte : '
```
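The regularity of the list above (four verbs, each conjugated three ways, across two sentence frames) can be sketched programmatically. This is a hedged illustration of the construction logic, not byte-for-byte the card's strings (e.g. initial accents may differ):

```python
# Sketch of the prompt-construction logic: 4 verbs × 3 forms (infinitive /
# "tu" / "vous") × 2 sentence frames = 24 variants. Conjugations hard-coded.
VERBS = [
    ("écrire", "écris", "écrivez"),
    ("rédiger", "rédige", "rédigez"),
    ("générer", "génère", "générez"),
    ("créer", "crée", "créez"),
]

def make_prompts(question):
    prompts = [
        f'Étant donné la question "{question}", {verb} un texte explicatif.\nTexte : '
        for forms in VERBS for verb in forms
    ]
    prompts += [
        f'{verb.capitalize()} un texte comme contexte à la question "{question}" \nTexte : '
        for forms in VERBS for verb in forms
    ]
    return prompts

print(len(make_prompts("Où est situé le CATIE ?")))  # → 24
```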
# Splits
- `train` with 79,200 samples
- `valid` with 21,800 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_question")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_context_generation_with_question
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:51:16+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["newsquadfr"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:12:19+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_context_generation_with_question
## Summary
newsquadfr_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP).
It contains 101,040 rows that can be used for a context-generation (with question) task.
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
# Splits
- 'train' with 79,200 samples
- 'valid' with 21,800 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_context_generation_with_question",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_context_generation_with_question",
"## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
93,
21,
147,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_context_generation_with_question## Summary\n\nnewsquadfr_fr_prompt_context_generation_with_question is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a context-generation (with question) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
8a4f31c18b5dc972147fb415d1af57eb351bb0c0
|
# newsquadfr_fr_prompt_question_generation_with_answer
## Summary
**newsquadfr_fr_prompt_question_generation_with_answer** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **92,620** rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
```
'Quelle question donnerait la réponse suivante ? Réponse : "'+answer+'";\nQuestion :',
'Déterminer la question qui aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Détermine la question que tu aurais pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Déterminez la question que vous auriez pu poser pour obtenir la réponse suivante . \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurait pu être posée pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question aurais-tu pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Quelle question auriez-vous pu poser pour obtenir la réponse suivante. \n Réponse : "'+answer+'";\n Question :',
'Sachant la réponse suivante : "'+answer+'"\n Générer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Génère une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Générez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouves une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Trouvez une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créer une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Crée trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Créez trouver une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrire une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecris une bonne question : ',
'Sachant la réponse suivante : "'+answer+'"\n Ecrivez une bonne question : '
```
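One way a prompted split like this can be assembled is to pair each (answer, question) row with a randomly drawn template. The sketch below is hypothetical (only three of the 22 templates are reproduced, and the row shape is an assumption):

```python
# Hypothetical builder sketch: each (answer, question) pair is combined with
# a randomly drawn template to form an input/target row. Only three of the
# 22 templates are reproduced here.
import random

TEMPLATES = [
    'Quelle question donnerait la réponse suivante ? Réponse : "{answer}";\nQuestion :',
    'Sachant la réponse suivante : "{answer}"\n Générer une bonne question : ',
    'Sachant la réponse suivante : "{answer}"\n Ecrire une bonne question : ',
]

def prompted_rows(pairs, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        {"inputs": rng.choice(TEMPLATES).format(answer=answer), "targets": question}
        for answer, question in pairs
    ]

rows = prompted_rows([("à Talence", "Où est situé le CATIE ?")])
print(rows[0]["targets"])  # → Où est situé le CATIE ?
```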
# Splits
- `train` with 72,600 samples
- `valid` with 20,000 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_question_generation_with_answer")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_question_generation_with_answer
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:51:54+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["newsquadfr"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:06:28+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_question_generation_with_answer
## Summary
newsquadfr_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP).
It contains 92,620 rows that can be used for a question-generation (with answer) task.
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. The logic applied consists of proposing prompts in the indicative tense, in the tutoiement (informal "tu") form, and in the vouvoiement (formal "vous") form.
# Splits
- 'train' with 72,600 samples
- 'valid' with 20,000 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_question_generation_with_answer",
"## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 92,620 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 72,600 samples\n- 'valid' with 20,000 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_question_generation_with_answer",
"## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 92,620 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 72,600 samples\n- 'valid' with 20,000 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
93,
21,
147,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_question_generation_with_answer## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_answer is a subset of the Dataset of French Prompts (DFP). \nIt contains 92,620 rows that can be used for a question-generation (with answer) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n22 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 72,600 samples\n- 'valid' with 20,000 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
e602f0aeec84d1200f819662ceb162bc38d89f43
|
# Dataset of tama/多摩/多摩 (Kantai Collection)
This is the dataset of tama/多摩/多摩 (Kantai Collection), containing 393 images and their tags.
The core tags of this character are `short_hair, red_eyes, pink_hair, purple_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 393 | 298.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tama_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 393 | 211.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tama_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 819 | 420.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tama_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 393 | 280.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tama_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 819 | 525.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tama_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/tama_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
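Once loaded, items can be filtered by their tags. The helper below is a minimal sketch that assumes the `tags` metadata is a comma-separated string (as the cluster tables below suggest); adjust the parsing if your waifuc version stores tags differently (e.g. as a dict).

```python
def has_tags(tag_field, wanted):
    """Return True if every tag in `wanted` appears in a comma-separated tag field.

    Assumption: `tag_field` is a string like '1girl, serafuku, solo'.
    """
    tags = {t.strip() for t in tag_field.split(',')}
    return set(wanted).issubset(tags)

# Hypothetical usage with the LocalSource loop above:
# for item in source:
#     if has_tags(item.meta['tags'], ['serafuku', 'solo']):
#         item.image.save(item.meta['filename'])
```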
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, navel, serafuku, shorts, solo, midriff, open_mouth, paw_pose, sailor_collar, neckerchief, blush, fang, short_sleeves, machinery, simple_background, white_background |
| 1 | 25 |  |  |  |  |  | 1girl, cat_ears, cat_tail, kemonomimi_mode, serafuku, solo, shorts, neckerchief, blush, looking_at_viewer, paw_pose, short_sleeves, sailor_collar, open_mouth, white_background, simple_background, fang |
| 2 | 35 |  |  |  |  |  | serafuku, 1girl, solo, long_sleeves, black_pantyhose, green_sailor_collar, looking_at_viewer, pleated_skirt, blush, green_skirt, black_cardigan, hair_between_eyes, red_neckerchief, open_mouth, simple_background, white_background, paw_pose |
| 3 | 6 |  |  |  |  |  | 1girl, black_gloves, black_skirt, black_thighhighs, hair_ornament, sailor_collar, serafuku, short_sleeves, simple_background, solo, belt, pleated_skirt, thigh_boots, white_background, hair_between_eyes, shirt, buckle, looking_at_viewer, machinery, medium_breasts, rigging, smile |
| 4 | 5 |  |  |  |  |  | 1girl, black_gloves, serafuku, shirt, short_sleeves, simple_background, solo, upper_body, white_background, black_sailor_collar, smile, hair_between_eyes, hair_ornament, dated, looking_at_viewer, medium_breasts, paw_pose |
| 5 | 8 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, cat_cutout, cat_lingerie, cleavage_cutout, navel, simple_background, underwear_only, cat_ear_panties, frilled_bra, white_background, black_bra, black_panties, cat_ears, cat_tail, open_mouth, paw_pose, side-tie_panties, collarbone, hair_between_eyes, barefoot, choker, cropped_legs, hair_ornament, jingle_bell, large_breasts, medium_breasts, neck_bell, sitting, twitter_username |
| 6 | 5 |  |  |  |  |  | 1girl, collarbone, hair_between_eyes, solo, blush, cleavage, large_breasts, looking_at_viewer, navel, alternate_costume, cowboy_shot, jacket, open_mouth, outdoors, side-tie_bikini_bottom, simple_background |
| 7 | 16 |  |  |  |  |  | 1girl, yukata, hairclip, solo, obi, blush, alternate_costume, blue_kimono, food, wide_sleeves, bagged_fish, goldfish, long_sleeves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | navel | serafuku | shorts | solo | midriff | open_mouth | paw_pose | sailor_collar | neckerchief | blush | fang | short_sleeves | machinery | simple_background | white_background | cat_ears | cat_tail | kemonomimi_mode | long_sleeves | black_pantyhose | green_sailor_collar | pleated_skirt | green_skirt | black_cardigan | hair_between_eyes | red_neckerchief | black_gloves | black_skirt | black_thighhighs | hair_ornament | belt | thigh_boots | shirt | buckle | medium_breasts | rigging | smile | upper_body | black_sailor_collar | dated | cat_cutout | cat_lingerie | cleavage_cutout | underwear_only | cat_ear_panties | frilled_bra | black_bra | black_panties | side-tie_panties | collarbone | barefoot | choker | cropped_legs | jingle_bell | large_breasts | neck_bell | sitting | twitter_username | cleavage | alternate_costume | cowboy_shot | jacket | outdoors | side-tie_bikini_bottom | yukata | hairclip | obi | blue_kimono | food | wide_sleeves | bagged_fish | goldfish |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------|:-----------|:---------|:-------|:----------|:-------------|:-----------|:----------------|:--------------|:--------|:-------|:----------------|:------------|:--------------------|:-------------------|:-----------|:-----------|:------------------|:---------------|:------------------|:----------------------|:----------------|:--------------|:-----------------|:--------------------|:------------------|:---------------|:--------------|:-------------------|:----------------|:-------|:--------------|:--------|:---------|:-----------------|:----------|:--------|:-------------|:----------------------|:--------|:-------------|:---------------|:------------------|:-----------------|:------------------|:--------------|:------------|:----------------|:-------------------|:-------------|:-----------|:---------|:---------------|:--------------|:----------------|:------------|:----------|:-------------------|:-----------|:--------------------|:--------------|:---------|:-----------|:-------------------------|:---------|:-----------|:------|:--------------|:-------|:---------------|:--------------|:-----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 25 |  |  |  |  |  | X | X | | X | X | X | | X | X | X | X | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 35 |  |  |  |  |  | X | X | | X | | X | | X | X | | | X | | | | X | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | X | | X | | | | X | | | | X | X | X | X | | | | | | | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | X | | X | | | X | | | | | X | | X | X | | | | | | | | | | X | | X | | | X | | | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | X | | | X | | X | X | | | X | | | | X | X | X | X | | | | | | | | X | | | | | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | X | | | X | | X | | | | X | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | X | X | X | X | X | X | | | | | | | | |
| 7 | 16 |  |  |  |  |  | X | | | | | X | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/tama_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T15:52:09+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T10:13:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tama/多摩/多摩 (Kantai Collection)
=========================================
This is the dataset of tama/多摩/多摩 (Kantai Collection), containing 393 images and their tags.
The core tags of this character are 'short\_hair, red\_eyes, pink\_hair, purple\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
25d31795c18e626c4889ae45827bd4eb3b4fbd90
|
# newsquadfr_fr_prompt_question_generation_with_context
## Summary
**newsquadfr_fr_prompt_question_generation_with_context** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **101,040** rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset [newsquadfr](https://huggingface.co/datasets/lincoln/newsquadfr) and was augmented by questions in SQUAD 2.0 format in the [FrenchQA]( https://huggingface.co/datasets/CATIE-AQ/frenchQA) dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
```
'"'+context+'"\n Générer une question à partir du texte ci-dessus : ',
'"'+context+'"\n Génère une question à partir du texte ci-dessus : ',
'"'+context+'"\n Générez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouver une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouve une question à partir du texte ci-dessus : ',
'"'+context+'"\n Trouvez une question à partir du texte ci-dessus : ',
'"'+context+'"\n Créer une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Crée une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Créez une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrire une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecris une bonne question à partir du texte ci-dessus : ',
'"'+context+'"\n Ecrivez une bonne question à partir du texte ci-dessus : ',
'Générer une bonne question pour le texte suivant : "'+context+'"',
'Génère une bonne question pour le texte suivant : "'+context+'"',
'Générez une bonne question pour le texte suivant : "'+context+'"',
'Trouver une bonne question pour le texte suivant : "'+context+'"',
'Trouve une bonne question pour le texte suivant : "'+context+'"',
'Trouvez une bonne question pour le texte suivant : "'+context+'"',
'Créer une bonne question pour le texte suivant : "'+context+'"',
'Crée une bonne question pour le texte suivant : "'+context+'"',
'Créez une bonne question pour le texte suivant : "'+context+'"',
'Ecrire une bonne question pour le texte suivant : "'+context+'"',
'Ecris une bonne question pour le texte suivant : "'+context+'"',
'Ecrivez une bonne question pour le texte suivant : "'+context+'"'
```
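The templates above are applied to each context to build the `input` column, with the gold question as `target`. The snippet below is an illustrative sketch (not the authors' code) showing how two of the prompts could be expanded into rows; the dict keys `input`/`target` follow the xP3 format described above, and the sample context and question are invented for demonstration.

```python
def build_rows(context, question, templates):
    """Apply each prompt template to a context, pairing it with the gold question."""
    return [{"input": t(context), "target": question} for t in templates]

# Two of the 24 templates, written as callables in the card's concatenation style.
templates = [
    lambda context: '"' + context + '"\n Générer une question à partir du texte ci-dessus : ',
    lambda context: 'Générer une bonne question pour le texte suivant : "' + context + '"',
]

rows = build_rows(
    "Le Tour de France est une course cycliste.",   # hypothetical context
    "Qu'est-ce que le Tour de France ?",            # hypothetical gold question
    templates,
)
```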
# Splits
- `train` with 79,200 samples
- `valid` with 21,800 samples
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/newsquadfr_fr_prompt_question_generation_with_context")
```
# Citation
## Original data
> Hugging Face repository: https://huggingface.co/datasets/lincoln/newsquadfr
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
CATIE-AQ/newsquadfr_fr_prompt_question_generation_with_context
|
[
"task_categories:text-generation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:newsquadfr",
"language:fr",
"license:cc-by-nc-sa-4.0",
"DFP",
"french prompts",
"region:us"
] |
2023-08-21T15:52:34+00:00
|
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["newsquadfr"], "task_categories": ["text-generation"], "tags": ["DFP", "french prompts"]}
|
2023-10-11T11:12:37+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us
|
# newsquadfr_fr_prompt_question_generation_with_context
## Summary
newsquadfr_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP).
It contains 101,040 rows that can be used for a question-generation (with context) task.
The original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.
## Prompts used
### List
24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
# Splits
- 'train' with 79,200 samples
- 'valid' with 21,800 samples
- no 'test' split
# How to use?
## Original data
> Hugging Face repository: URL
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { URL },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
CC BY-NC-SA 4.0
|
[
"# newsquadfr_fr_prompt_question_generation_with_context",
"## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
"TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n",
"# newsquadfr_fr_prompt_question_generation_with_context",
"## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.",
"## Prompts used",
"### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.",
"# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split",
"# How to use?",
"## Original data\n> Hugging Face repository: URL",
"## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}",
"## License\nCC BY-NC-SA 4.0"
] |
[
93,
21,
147,
5,
46,
30,
5,
12,
106,
9
] |
[
"passage: TAGS\n#task_categories-text-generation #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-newsquadfr #language-French #license-cc-by-nc-sa-4.0 #DFP #french prompts #region-us \n# newsquadfr_fr_prompt_question_generation_with_context## Summary\n\nnewsquadfr_fr_prompt_question_generation_with_context is a subset of the Dataset of French Prompts (DFP). \nIt contains 101,040 rows that can be used for a question-generation (with context) task. \nThe original data (without prompts) comes from the dataset newsquadfr and was augmented by questions in SQUAD 2.0 format in the FrenchQA dataset.\nA list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the xP3 dataset by Muennighoff et al.## Prompts used### List\n24 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.# Splits\n- 'train' with 79,200 samples\n- 'valid' with 21,800 samples\n- no 'test' split# How to use?## Original data\n> Hugging Face repository: URL## This Dataset\n> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023, \n\tauthor = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, \n\ttitle = { DFP (Revision 1d24c09) }, \n\tyear = 2023, \n\turl = { URL }, \n\tdoi = { 10.57967/hf/1200 }, \n\tpublisher = { Hugging Face } \n}## License\nCC BY-NC-SA 4.0"
] |
bf80cc141705d6ad135604e277410d38580aab27
|
# Dataset of yura/由良/由良 (Kantai Collection)
This is the dataset of yura/由良/由良 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, pink_hair, very_long_hair, ponytail, ribbon, hair_ribbon, breasts, yellow_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 536.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yura_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 330.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yura_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1165 | 692.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yura_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 485.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yura_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1165 | 939.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yura_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yura_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, hair_flaps, serafuku, simple_background, upper_body, white_background, black_ribbon, green_eyes, green_sailor_collar, grey_sailor_collar, looking_at_viewer, open_mouth, short_sleeves, smile, solo, blush, sidelocks |
| 1 | 5 |  |  |  |  |  | 1girl, black_jacket, grey_sailor_collar, hair_flaps, neck_ribbon, red_ribbon, serafuku, short_sleeves, solo, upper_body, looking_at_viewer, simple_background, white_background, smile |
| 2 | 23 |  |  |  |  |  | 1girl, black_jacket, grey_sailor_collar, grey_skirt, hair_flaps, pleated_skirt, serafuku, short_sleeves, solo, black_gloves, neck_ribbon, partially_fingerless_gloves, red_ribbon, simple_background, looking_at_viewer, white_background, cowboy_shot, smile |
| 3 | 6 |  |  |  |  |  | 1girl, hair_ornament, looking_at_viewer, pleated_skirt, serafuku, smile, solo, green_eyes, side_ponytail, blush, turret |
| 4 | 5 |  |  |  |  |  | 1girl, green_eyes, hair_ornament, looking_at_viewer, pleated_skirt, serafuku, smile, solo, knee_boots, full_body |
| 5 | 19 |  |  |  |  |  | 1girl, alternate_costume, hair_flaps, looking_at_viewer, solo, simple_background, smile, white_sweater, black_jacket, twitter_username, white_background, one-hour_drawing_challenge, blush, jacket_on_shoulders, turtleneck, black_ribbon, long_sleeves, pleated_skirt, upper_body, black_pantyhose, coat, large_breasts |
| 6 | 20 |  |  |  |  |  | 1girl, solo, yukata, hair_flaps, looking_at_viewer, obi, alternate_costume, floral_print, smile, blush, simple_background, upper_body, white_background, white_kimono, wide_sleeves, open_mouth |
| 7 | 12 |  |  |  |  |  | 1girl, black_shirt, solo, hair_flaps, black_ribbon, short_sleeves, looking_at_viewer, official_alternate_costume, upper_body, collarbone, medium_breasts, simple_background, smile, swimsuit |
| 8 | 5 |  |  |  |  |  | 1girl, hair_flaps, simple_background, solo, white_background, white_bikini, black_shirt, blush, cleavage, cowboy_shot, looking_at_viewer, navel, shirt_lift, large_breasts, lifted_by_self, open_mouth, skirt, smile, undressing, black_ribbon, blue_sarong, medium_breasts, official_alternate_costume, short_sleeves, twitter_username |
| 9 | 8 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, medium_breasts, solo, white_bikini, hair_flaps, navel, sitting, sarong, cowboy_shot |
| 10 | 5 |  |  |  |  |  | 1girl, hair_flaps, one-hour_drawing_challenge, simple_background, solo, white_background, dated, medium_breasts, twitter_username, white_bikini, cleavage, cowboy_shot, large_breasts, looking_at_viewer, black_ribbon, jacket, navel, official_alternate_costume, sitting, upper_body |
| 11 | 7 |  |  |  |  |  | 1girl, solo, hair_flaps, navel, panties, simple_background, underwear_only, bra, looking_at_viewer, medium_breasts, armpits, cleavage, cowboy_shot, white_background |
| 12 | 9 |  |  |  |  |  | 1girl, detached_collar, fake_animal_ears, hair_flaps, playboy_bunny, rabbit_ears, solo, strapless_leotard, wrist_cuffs, alternate_costume, black_leotard, looking_at_viewer, medium_breasts, bowtie, cowboy_shot, black_pantyhose, cleavage, simple_background, fishnet_pantyhose, rabbit_tail, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_flaps | serafuku | simple_background | upper_body | white_background | black_ribbon | green_eyes | green_sailor_collar | grey_sailor_collar | looking_at_viewer | open_mouth | short_sleeves | smile | solo | blush | sidelocks | black_jacket | neck_ribbon | red_ribbon | grey_skirt | pleated_skirt | black_gloves | partially_fingerless_gloves | cowboy_shot | hair_ornament | side_ponytail | turret | knee_boots | full_body | alternate_costume | white_sweater | twitter_username | one-hour_drawing_challenge | jacket_on_shoulders | turtleneck | long_sleeves | black_pantyhose | coat | large_breasts | yukata | obi | floral_print | white_kimono | wide_sleeves | black_shirt | official_alternate_costume | collarbone | medium_breasts | swimsuit | white_bikini | cleavage | navel | shirt_lift | lifted_by_self | skirt | undressing | blue_sarong | sitting | sarong | dated | jacket | panties | underwear_only | bra | armpits | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | black_leotard | bowtie | fishnet_pantyhose | rabbit_tail |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-------------|:-----------|:--------------------|:-------------|:-------------------|:---------------|:-------------|:----------------------|:---------------------|:--------------------|:-------------|:----------------|:--------|:-------|:--------|:------------|:---------------|:--------------|:-------------|:-------------|:----------------|:---------------|:------------------------------|:--------------|:----------------|:----------------|:---------|:-------------|:------------|:--------------------|:----------------|:-------------------|:-----------------------------|:----------------------|:-------------|:---------------|:------------------|:-------|:----------------|:---------|:------|:---------------|:---------------|:---------------|:--------------|:-----------------------------|:-------------|:-----------------|:-----------|:---------------|:-----------|:--------|:-------------|:-----------------|:--------|:-------------|:--------------|:----------|:---------|:--------|:---------|:----------|:-----------------|:------|:----------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:----------------|:---------|:--------------------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 23 |  |  |  |  |  | X | X | X | X | | X | | | | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | | | | | X | | | X | | | X | X | X | | | | | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | | | | | X | | | X | | | X | X | | | | | | | X | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 19 |  |  |  |  |  | X | X | | X | X | X | X | | | | X | | | X | X | X | | X | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 20 |  |  |  |  |  | X | X | | X | X | X | | | | | X | X | | X | X | X | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 12 |  |  |  |  |  | X | X | | X | X | | X | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | | X | | X | X | | | | X | X | X | X | X | X | | | | | | | | | X | | | | | | | | X | | | | | | | X | | | | | | X | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | X | | | | | | | | | X | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | | | | | | X | X | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | X | | X | X | X | X | | | | X | | | | X | | | | | | | | | | X | | | | | | | | X | X | | | | | | X | | | | | | | X | | X | | X | X | X | | | | | | X | | X | X | | | | | | | | | | | | | | |
| 11 | 7 |  |  |  |  |  | X | X | | X | | X | | | | | X | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | | | | | | | | X | X | X | X | | | | | | | | | | |
| 12 | 9 |  |  |  |  |  | X | X | | X | | X | | | | | X | | | | X | | | | | | | | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/yura_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T15:57:45+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T07:32:03+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yura/由良/由良 (Kantai Collection)
=========================================
This is the dataset of yura/由良/由良 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'long\_hair, pink\_hair, very\_long\_hair, ponytail, ribbon, hair\_ribbon, breasts, yellow\_eyes, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
f8bacc775c991ac4b822e08343b00e4d00786a5d
|
# Dataset of i_26/伊26/伊26 (Kantai Collection)
This is the dataset of i_26/伊26/伊26 (Kantai Collection), containing 46 images and their tags.
The core tags of this character are `hairband, light_brown_hair, long_hair, two_side_up, breasts, two-tone_hairband, brown_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 46 | 70.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_26_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 46 | 35.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_26_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 122 | 86.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_26_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 46 | 62.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_26_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 122 | 137.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_26_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/i_26_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 46 |  |  |  |  |  | 1girl, solo, one-piece_swimsuit, looking_at_viewer, new_school_swimsuit, smile, short_sleeves, open_mouth, sailor_collar, swimsuit_under_clothes, blush, name_tag, collarbone, open_clothes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | one-piece_swimsuit | looking_at_viewer | new_school_swimsuit | smile | short_sleeves | open_mouth | sailor_collar | swimsuit_under_clothes | blush | name_tag | collarbone | open_clothes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------------|:--------------------|:----------------------|:--------|:----------------|:-------------|:----------------|:-------------------------|:--------|:-----------|:-------------|:---------------|
| 0 | 46 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/i_26_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T16:02:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T21:17:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of i\_26/伊26/伊26 (Kantai Collection)
============================================
This is the dataset of i\_26/伊26/伊26 (Kantai Collection), containing 46 images and their tags.
The core tags of this character are 'hairband, light\_brown\_hair, long\_hair, two\_side\_up, breasts, two-tone\_hairband, brown\_eyes, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
cebd7df8621f3624e3e630542425f46e214464c7
|
# LibriSQA Dataset
- [LibriSQA Dataset](#librisqa-dataset)
- [Dataset Structure](#dataset-structure)
- [Keyword Explanation](#keyword-explanation)
## Dataset Structure
- `LibriSQA-PartI\LibriSQA-PartI-train.json`: metafile of train set of LibriSQA Part I
- `LibriSQA-PartI\LibriSQA-PartI-test.json`: metafile of test set of LibriSQA Part I
- `LibriSQA-PartII\LibriSQA-PartII-train.json`: metafile of train set of LibriSQA Part II
- `LibriSQA-PartII\LibriSQA-PartII-test.json`: metafile of test set of LibriSQA Part II
## Keyword Explanation
Explanation of each key in Part I:
- speech_path: path to the speech
- question: question corresponding to the speech
- answer: reference answer regarding the speech
- text: authentic text corresponding to the speech
- duration: the duration of the speech
Explanation of each key in Part II:
- speech_path: path to the speech
- question_and_option: question along with four options corresponding to the speech
- answer: correct answer label for the option (e.g. A)
- answer_and_analysis: reference answer and analysis regarding the speech
- text: authentic text corresponding to the speech
- duration: the duration of the speech
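The keys above can be read with nothing more than the standard `json` module. The sketch below is illustrative only: it builds one hypothetical Part I record in memory (the `speech_path`, question, and answer values are invented, and the assumption that each metafile is a JSON array of per-sample objects comes from the split configuration, not from the card itself), then loads it the same way you would load e.g. `LibriSQA-PartI/LibriSQA-PartI-train.json`.

```python
import json

# Hypothetical Part I record with the documented keys; values are made up.
sample = [{
    "speech_path": "train-clean-100/19/198/19-198-0001.flac",  # invented path
    "question": "What is the speaker describing?",
    "answer": "A journey along the river.",
    "text": "We set out along the river at dawn...",
    "duration": 12.7,
}]
blob = json.dumps(sample)

# Loading mirrors reading a metafile from disk (assumed: a JSON array).
records = json.loads(blob)
for rec in records:
    print(rec["speech_path"], rec["duration"])
    print("Q:", rec["question"])
    print("A:", rec["answer"])
```

For Part II records, swap `question` for `question_and_option` and add `answer_and_analysis`, per the key list above.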
|
ZihanZhao/LibriSQA
|
[
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] |
2023-08-21T16:11:20+00:00
|
{"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "partI.train", "path": "LibriSQA-PartI/LibriSQA-PartI-train.json"}, {"split": "partI.test", "path": "LibriSQA-PartI/LibriSQA-PartI-test.json"}, {"split": "partII.train", "path": "LibriSQA-PartII/LibriSQA-PartII-train.json"}, {"split": "partII.test", "path": "LibriSQA-PartII/LibriSQA-PartII-test.json"}]}]}
|
2023-08-22T09:55:03+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us
|
# LibriSQA Dataset
- LibriSQA Dataset
- Dataset Structure
- Keyword Explanation
## Dataset Structure
- 'LibriSQA-PartI\URL': metafile of train set of LibriSQA Part I
- 'LibriSQA-PartI\URL': metafile of test set of LibriSQA Part I
- 'LibriSQA-PartII\URL': metafile of train set of LibriSQA Part II
- 'LibriSQA-PartII\URL': metafile of test set of LibriSQA Part II
## Keyword Explanation
Explanation to each key in part I:
- speech_path: path to the speech
- question: question corresponding to the speech
- answer: reference answer regarding the speech
- text: authentic text corresponding to the speech
- duration: the duration of the speech
Explanation to each key in part II:
- speech_path: path to the speech
- question_and_option: question along with four options corresponding to the speech
- answer: correct answer label for the option (e.g. A)
- answer_and_analysis: reference answer and analysis regarding the speech
- text: authentic text corresponding to the speech
- duration: the duration of the speech
|
[
"# LibriSQA Dataset\n\n- LibriSQA Dataset\n - Dataset Structure\n - Keyword Explanation",
"## Dataset Structure\n\n- 'LibriSQA-PartI\\URL': metafile of train set of LibriSQA Part I\n- 'LibriSQA-PartI\\URL': metafile of test set of LibriSQA Part I\n- 'LibriSQA-PartII\\URL': metafile of train set of LibriSQA Part II\n- 'LibriSQA-PartII\\URL': metafile of test set of LibriSQA Part II",
"## Keyword Explanation\n\nExplanation to each key in part I:\n- speech_path: path to the speech\n- question: question corresponding to the speech\n- answer: reference answer regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech\n\nExplanation to each key in part II:\n- speech_path: path to the speech\n- question_and_option: question along with four options corresponding to the speech\n- answer: correct answer label for the option (e.g. A)\n- answer_and_analysis: reference answer and analysis regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us \n",
"# LibriSQA Dataset\n\n- LibriSQA Dataset\n - Dataset Structure\n - Keyword Explanation",
"## Dataset Structure\n\n- 'LibriSQA-PartI\\URL': metafile of train set of LibriSQA Part I\n- 'LibriSQA-PartI\\URL': metafile of test set of LibriSQA Part I\n- 'LibriSQA-PartII\\URL': metafile of train set of LibriSQA Part II\n- 'LibriSQA-PartII\\URL': metafile of test set of LibriSQA Part II",
"## Keyword Explanation\n\nExplanation to each key in part I:\n- speech_path: path to the speech\n- question: question corresponding to the speech\n- answer: reference answer regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech\n\nExplanation to each key in part II:\n- speech_path: path to the speech\n- question_and_option: question along with four options corresponding to the speech\n- answer: correct answer label for the option (e.g. A)\n- answer_and_analysis: reference answer and analysis regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech"
] |
[
34,
23,
102,
148
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us \n# LibriSQA Dataset\n\n- LibriSQA Dataset\n - Dataset Structure\n - Keyword Explanation## Dataset Structure\n\n- 'LibriSQA-PartI\\URL': metafile of train set of LibriSQA Part I\n- 'LibriSQA-PartI\\URL': metafile of test set of LibriSQA Part I\n- 'LibriSQA-PartII\\URL': metafile of train set of LibriSQA Part II\n- 'LibriSQA-PartII\\URL': metafile of test set of LibriSQA Part II## Keyword Explanation\n\nExplanation to each key in part I:\n- speech_path: path to the speech\n- question: question corresponding to the speech\n- answer: reference answer regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech\n\nExplanation to each key in part II:\n- speech_path: path to the speech\n- question_and_option: question along with four options corresponding to the speech\n- answer: correct answer label for the option (e.g. A)\n- answer_and_analysis: reference answer and analysis regarding the speech\n- text: authentic text corresponding to the speech\n- duration: the duration of the speech"
] |
f02f51ed2c752a2abe47da670453d100571da854
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_Revert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/AA_ApplicationDistilRoBERTa_Revert
|
[
"region:us"
] |
2023-08-21T16:42:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, 
{"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, {"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": 
"106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": 
"160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": "190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": 
"214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": 
"268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": "286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": 
"322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": 
"376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": "382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": 
"430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": "478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": 
"484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": 
"538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": "574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": 
"592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": 
"646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": "670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": 
"700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": 
"754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": "766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80318780.21618997, "num_examples": 26057}, {"name": "test", "num_bytes": 26774087.073587257, "num_examples": 8686}], "download_size": 147219399, "dataset_size": 107092867.28977722}}
|
2023-08-21T16:46:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_Revert"
More Information needed
|
[
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_Revert\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_Revert\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AA_ApplicationDistilRoBERTa_Revert\"\n\nMore Information needed"
] |
996c67c4c2d94a5a84f364fb83a41bf019e4348e
|
This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/nsthorat-lilac/nikhil_no_persistent](https://huggingface.co/spaces/nsthorat-lilac/nikhil_no_persistent).
Original dataset: [https://huggingface.co/datasets/glue](https://huggingface.co/datasets/glue)
Lilac dataset config:
```yaml
name: glue
namespace: local
settings:
ui:
media_paths: [premise]
source: {config_name: ax, dataset_name: glue, source_name: huggingface}
```
|
nsthorat-lilac/nikhil_no_persistent-local-glue
|
[
"region:us"
] |
2023-08-21T16:48:48+00:00
|
{}
|
2023-08-21T17:49:53+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset is generated by Lilac for a HuggingFace Space: URL
Original dataset: URL
Lilac dataset config:
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
5d88539dd3ec0b748587d3c6da8c41ce430c0d4e
|
# Dataset of katsuragi/葛城 (Kantai Collection)
This is the dataset of katsuragi/葛城 (Kantai Collection), containing 423 images and their tags.
The core tags of this character are `black_hair, long_hair, ribbon, ponytail, hair_ribbon, blue_eyes, white_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 423 | 400.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katsuragi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 423 | 278.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katsuragi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 940 | 559.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katsuragi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 423 | 376.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katsuragi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 940 | 706.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katsuragi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/katsuragi_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
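For the IMG+TXT packages, each extracted image is paired with a same-named `.txt` file holding comma-separated tags. A minimal sketch for pairing them after extraction (the helper name and the assumption that tags are comma-separated are illustrative, not part of waifuc):

```python
import os

def load_img_txt_pairs(dataset_dir):
    # Pair each image with its same-named .txt tag file (assumed IMG+TXT layout).
    pairs = []
    for fname in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(fname)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                tags = [t.strip() for t in f.read().split(',') if t.strip()]
            pairs.append((os.path.join(dataset_dir, fname), tags))
    return pairs
```

Images without a matching tag file are skipped, so the function degrades gracefully on partially extracted archives.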
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bow_(weapon), fingerless_gloves, japanese_clothes, looking_at_viewer, midriff, smile, solo, black_thighhighs, navel, arrow_(projectile), armor, pleated_skirt, elbow_gloves, simple_background, uneven_gloves, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, fingerless_gloves, looking_at_viewer, midriff, solo, navel, japanese_clothes, smile, uneven_gloves, black_thighhighs, elbow_gloves, bow_(weapon), pleated_skirt |
| 2 | 9 |  |  |  |  |  | 1girl, japanese_clothes, midriff, open_mouth, solo, looking_at_viewer, navel, :d, skirt |
| 3 | 9 |  |  |  |  |  | 1girl, japanese_clothes, solo, upper_body, looking_at_viewer, smile, simple_background, midriff, open_mouth |
| 4 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, navel, small_breasts, solo, blush, collarbone, simple_background, white_background, groin, nude |
| 5 | 5 |  |  |  |  |  | 1girl, obi, solo, alternate_costume, furisode, looking_at_viewer, open_mouth, wide_sleeves, green_kimono, simple_background, white_background, floral_print, hair_between_eyes, long_sleeves, smile |
| 6 | 7 |  |  |  |  |  | detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, bowtie, strapless_leotard, wrist_cuffs, looking_at_viewer, simple_background, small_breasts, white_background, 1girl, cowboy_shot, solo, 2girls, black_pantyhose |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bow_(weapon) | fingerless_gloves | japanese_clothes | looking_at_viewer | midriff | smile | solo | black_thighhighs | navel | arrow_(projectile) | armor | pleated_skirt | elbow_gloves | simple_background | uneven_gloves | white_background | open_mouth | :d | skirt | upper_body | small_breasts | blush | collarbone | groin | nude | obi | alternate_costume | furisode | wide_sleeves | green_kimono | floral_print | hair_between_eyes | long_sleeves | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | bowtie | strapless_leotard | wrist_cuffs | cowboy_shot | 2girls | black_pantyhose |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-------------------|:--------------------|:----------|:--------|:-------|:-------------------|:--------|:---------------------|:--------|:----------------|:---------------|:--------------------|:----------------|:-------------------|:-------------|:-----|:--------|:-------------|:----------------|:--------|:-------------|:--------|:-------|:------|:--------------------|:-----------|:---------------|:---------------|:---------------|:--------------------|:---------------|:------------------|:-------------------|:----------------|:--------------|:---------|:--------------------|:--------------|:--------------|:---------|:------------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | | | X | X | X | | X | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | | X | X | X | X | X | | | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | | X | | | X | | X | | | | | X | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | X | | X | X | | | | | | | X | | X | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | | X | | | X | | | | | | | X | | X | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/katsuragi_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T17:02:03+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T20:05:31+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of katsuragi/葛城 (Kantai Collection)
===========================================
This is the dataset of katsuragi/葛城 (Kantai Collection), containing 423 images and their tags.
The core tags of this character are 'black\_hair, long\_hair, ribbon, ponytail, hair\_ribbon, blue\_eyes, white\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4a7e0c1603ba6a11feeee76ae1ebbdbe8ba5ddb7
|
# Dataset Card for "xp3_ar_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Zaid/xp3_ar_cleaned
|
[
"region:us"
] |
2023-08-21T17:12:20+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1791795999.591099, "num_examples": 809742}], "download_size": 1025699058, "dataset_size": 1791795999.591099}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-21T17:13:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xp3_ar_cleaned"
More Information needed
|
[
"# Dataset Card for \"xp3_ar_cleaned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xp3_ar_cleaned\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xp3_ar_cleaned\"\n\nMore Information needed"
] |
fb0bf4b0f4bba2abdf06747697c1904d92ccabbd
|
# Dataset of noshiro/能代 (Kantai Collection)
This is the dataset of noshiro/能代 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `brown_hair, braid, long_hair, twin_braids, green_eyes, breasts, bangs, large_breasts, swept_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 544.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/noshiro_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 330.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/noshiro_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1201 | 707.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/noshiro_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 494.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/noshiro_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1201 | 971.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/noshiro_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/noshiro_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
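Once loaded, items can be filtered by tag before further processing. A minimal sketch, assuming `item.meta['tags']` is a mapping from tag name to confidence score (the helper below is illustrative, not part of the waifuc API):

```python
def filter_items_by_tag(items, tag, min_score=0.0):
    # Keep items whose tag mapping contains `tag` with a score >= min_score.
    return [it for it in items
            if tag in it.meta.get('tags', {})
            and it.meta['tags'][tag] >= min_score]
```

Raising `min_score` trades recall for precision when the tagger's scores are noisy.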
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 |  |  |  |  |  | 1girl, black_bikini, solo, white_shirt, cleavage, tied_shirt, bikini_under_clothes, looking_at_viewer, one-hour_drawing_challenge, cowboy_shot, upper_body, simple_background, official_alternate_costume, twitter_username, white_background, wrist_scrunchie, red_skirt, dated, midriff, navel, red_scrunchie |
| 1 | 11 |  |  |  |  |  | 1girl, black_bikini, cleavage, day, white_shirt, blue_sky, cloud, looking_at_viewer, navel, outdoors, solo, tied_shirt, collarbone, blush, cowboy_shot, hair_between_eyes, red_shorts, beach, bikini_under_clothes, collared_shirt, wrist_scrunchie, ocean, smile, open_mouth, red_scrunchie |
| 2 | 6 |  |  |  |  |  | 1girl, black_bikini, cleavage, collarbone, looking_at_viewer, smile, solo, tied_shirt, white_shirt, beachball, blush, red_scrunchie, wrist_scrunchie, gradient_background, navel, upper_body, open_mouth, twitter_username |
| 3 | 31 |  |  |  |  |  | 1girl, serafuku, necktie, pleated_skirt, red_skirt, solo, white_gloves, black_sailor_collar, midriff, sleeveless_shirt, anchor_symbol, looking_at_viewer, cleavage, simple_background, navel, single_thighhigh, garter_straps, white_background, cowboy_shot, uneven_legwear |
| 4 | 19 |  |  |  |  |  | 1girl, serafuku, solo, looking_at_viewer, white_gloves, cleavage, garter_straps, single_thighhigh, pleated_skirt, blush, midriff, navel, open_mouth, necktie |
| 5 | 11 |  |  |  |  |  | 1girl, black_sailor_collar, black_skirt, dress_shirt, long_sleeves, pleated_skirt, sailor_shirt, serafuku, solo, black_shirt, garter_straps, looking_at_viewer, cowboy_shot, simple_background, white_background, belt, black_thighhighs |
| 6 | 6 |  |  |  |  |  | 1girl, black_sailor_collar, black_shirt, black_skirt, dress_shirt, hair_between_eyes, long_sleeves, pleated_skirt, solo, black_belt, looking_at_viewer, sailor_shirt, serafuku, simple_background, cowboy_shot, smile, white_background, blush, garter_straps |
| 7 | 5 |  |  |  |  |  | 1girl, black_sailor_collar, long_sleeves, sailor_shirt, serafuku, solo, upper_body, black_shirt, dress_shirt, looking_at_viewer, blue_sailor_collar, simple_background, white_background |
| 8 | 5 |  |  |  |  |  | 1girl, alternate_costume, black_sweater, long_sleeves, looking_at_viewer, ribbed_sweater, solo, black_pantyhose, blush, pleated_skirt, simple_background, brown_skirt, cowboy_shot, hair_between_eyes, turtleneck_sweater, blue_sweater, open_mouth, smile, white_background |
| 9 | 19 |  |  |  |  |  | 1girl, 1boy, blush, hetero, solo_focus, white_gloves, penis, nipples, open_mouth, paizuri, looking_at_viewer, cum_on_breasts, school_uniform, bar_censor, pubic_hair |
| 10 | 5 |  |  |  |  |  | 1boy, 1girl, cowgirl_position, girl_on_top, hetero, navel, nipples, sex, solo_focus, vaginal, cum_in_pussy, open_mouth, blush, completely_nude, female_pubic_hair, looking_at_viewer, bouncing_breasts, censored, penis, sweat, white_gloves |
| 11 | 24 |  |  |  |  |  | playboy_bunny, rabbit_ears, 1girl, solo, detached_collar, strapless_leotard, fake_animal_ears, wrist_cuffs, looking_at_viewer, cleavage, rabbit_tail, black_pantyhose, cowboy_shot, simple_background, white_background, alternate_costume, necktie, bowtie, red_leotard, black_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_bikini | solo | white_shirt | cleavage | tied_shirt | bikini_under_clothes | looking_at_viewer | one-hour_drawing_challenge | cowboy_shot | upper_body | simple_background | official_alternate_costume | twitter_username | white_background | wrist_scrunchie | red_skirt | dated | midriff | navel | red_scrunchie | day | blue_sky | cloud | outdoors | collarbone | blush | hair_between_eyes | red_shorts | beach | collared_shirt | ocean | smile | open_mouth | beachball | gradient_background | serafuku | necktie | pleated_skirt | white_gloves | black_sailor_collar | sleeveless_shirt | anchor_symbol | single_thighhigh | garter_straps | uneven_legwear | black_skirt | dress_shirt | long_sleeves | sailor_shirt | black_shirt | belt | black_thighhighs | black_belt | blue_sailor_collar | alternate_costume | black_sweater | ribbed_sweater | black_pantyhose | brown_skirt | turtleneck_sweater | blue_sweater | 1boy | hetero | solo_focus | penis | nipples | paizuri | cum_on_breasts | school_uniform | bar_censor | pubic_hair | cowgirl_position | girl_on_top | sex | vaginal | cum_in_pussy | completely_nude | female_pubic_hair | bouncing_breasts | censored | sweat | playboy_bunny | rabbit_ears | detached_collar | strapless_leotard | fake_animal_ears | wrist_cuffs | rabbit_tail | bowtie | red_leotard | black_leotard |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:-------|:--------------|:-----------|:-------------|:-----------------------|:--------------------|:-----------------------------|:--------------|:-------------|:--------------------|:-----------------------------|:-------------------|:-------------------|:------------------|:------------|:--------|:----------|:--------|:----------------|:------|:-----------|:--------|:-----------|:-------------|:--------|:--------------------|:-------------|:--------|:-----------------|:--------|:--------|:-------------|:------------|:----------------------|:-----------|:----------|:----------------|:---------------|:----------------------|:-------------------|:----------------|:-------------------|:----------------|:-----------------|:--------------|:--------------|:---------------|:---------------|:--------------|:-------|:-------------------|:-------------|:---------------------|:--------------------|:----------------|:-----------------|:------------------|:--------------|:---------------------|:---------------|:-------|:---------|:-------------|:--------|:----------|:----------|:-----------------|:-----------------|:-------------|:-------------|:-------------------|:--------------|:------|:----------|:---------------|:------------------|:--------------------|:-------------------|:-----------|:--------|:----------------|:--------------|:------------------|:--------------------|:-------------------|:--------------|:--------------|:---------|:--------------|:----------------|
| 0 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | X | | | X | | | X | | X | | | | X | X | | | | | X | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 31 |  |  |  |  |  | X | | X | | X | | | X | | X | | X | | | X | | X | | X | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 19 |  |  |  |  |  | X | | X | | X | | | X | | | | | | | | | | | X | X | | | | | | | X | | | | | | | X | | | X | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | X | | | | | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | | X | | X | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | X | | | | | X | | X | | X | | | X | | | | | | | | | | | | X | X | | | | | X | | | | X | | X | | X | | | | X | | X | X | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | X | | | | | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | | | | | X | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | X | | | | | X | | X | | X | | | X | | | | | | | | | | | | X | X | | | | | X | X | | | | | X | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 19 |  |  |  |  |  | X | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | | | | | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 11 | 24 |  |  |  |  |  | X | | X | | X | | | X | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/noshiro_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T17:30:41+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T13:05:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of noshiro/能代/能代 (Kantai Collection)
============================================
This is the dataset of noshiro/能代/能代 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, braid, long\_hair, twin\_braids, green\_eyes, breasts, bangs, large\_breasts, swept\_bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
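The code referenced above is elided in this condensed card. Based on the full cards elsewhere in this collection, the loader follows the pattern sketched below; the helper name `load_raw_dataset` is hypothetical, while the `repo_id` (`CyberHarem/noshiro_kantaicollection`) and the `dataset-raw.zip` filename follow the convention used by the sibling datasets:

```python
import os
import zipfile


def load_raw_dataset(repo_id='CyberHarem/noshiro_kantaicollection',
                     dataset_dir='dataset_dir'):
    """Download and extract a raw CyberHarem archive, then open it with waifuc."""
    # imported lazily so the helper can be defined without these packages installed
    from huggingface_hub import hf_hub_download
    from waifuc.source import LocalSource

    # download the raw archive file from the dataset repository
    zip_file = hf_hub_download(repo_id=repo_id, repo_type='dataset',
                               filename='dataset-raw.zip')

    # extract files into the target directory
    os.makedirs(dataset_dir, exist_ok=True)
    with zipfile.ZipFile(zip_file, 'r') as zf:
        zf.extractall(dataset_dir)

    # load the extracted images (plus tags) with waifuc
    return LocalSource(dataset_dir)
```

Iterating over the returned source yields items exposing `item.image` and `item.meta['tags']`, as the full cards show.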
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6bd151b5fa55b3d565e876425a1cde4b0f08340a
|
MEMBERS:
1. Alex Rodriguez - Diplomado 2023
2. Gabriel Muñoz - Diplomado 2023
Objective: apply deep learning techniques to solve an image classification problem.
A CALZADOS folder was created containing two subfolders, train and val; inside each, two folders were created: women's footwear (CALZADOMUJER) and men's footwear (CALZADOHOMBRE).
|
diplomado2023/calzados
|
[
"license:apache-2.0",
"region:us"
] |
2023-08-21T17:33:15+00:00
|
{"license": "apache-2.0"}
|
2023-08-22T20:24:23+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
MEMBERS:
1. Alex Rodriguez - Diplomado 2023
2. Gabriel Muñoz - Diplomado 2023
Objective: apply deep learning techniques to solve an image classification problem.
A CALZADOS folder was created containing two subfolders, train and val; inside each, two folders were created: women's footwear (CALZADOMUJER) and men's footwear (CALZADOHOMBRE).
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
b0ba20823511bc1f137a15c2c9a39765fbc983e4
|
# Dataset of asashimo/朝霜/朝霜 (Kantai Collection)
This is the dataset of asashimo/朝霜/朝霜 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, ahoge, grey_hair, hair_over_one_eye, ponytail, grey_eyes, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 501.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asashimo_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 312.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asashimo_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1084 | 642.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asashimo_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 444.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asashimo_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1084 | 855.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/asashimo_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/asashimo_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
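The Tags cells in the cluster table below are plain comma-separated strings; a minimal stdlib sketch for turning one cell into a Python list (the helper name `parse_tags` is an illustration, not part of any card):

```python
def parse_tags(cell: str) -> list[str]:
    """Split a comma-separated Tags cell into a clean list of tag strings."""
    return [tag.strip() for tag in cell.split(',') if tag.strip()]


# start of the first cluster row in the table below
cluster_0 = parse_tags("1girl, solo, sharp_teeth, looking_at_viewer, happi, sarashi")
print(cluster_0)
# → ['1girl', 'solo', 'sharp_teeth', 'looking_at_viewer', 'happi', 'sarashi']
```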
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, solo, sharp_teeth, looking_at_viewer, happi, sarashi, simple_background, smile, white_shorts, open_mouth, white_background, blush, breasts, green_eyes, paper_fan |
| 1 | 5 |  |  |  |  |  | 1girl, bowtie, dress, grey_pantyhose, halterneck, looking_at_viewer, school_uniform, simple_background, solo, white_shirt, cowboy_shot, white_background, long_sleeves, smile, white_hair |
| 2 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, school_uniform, solo, white_shirt, bowtie, halterneck, simple_background, grin, sharp_teeth, white_background, one-hour_drawing_challenge, twitter_username, long_sleeves, purple_dress, grey_pantyhose |
| 3 | 8 |  |  |  |  |  | 1girl, blazer, halterneck, purple_dress, school_uniform, sharp_teeth, sleeves_rolled_up, solo, looking_at_viewer, mismatched_legwear, white_background, cowboy_shot, grin, simple_background, multicolored_hair, grey_thighhighs, twitter_username, aqua_bowtie, one-hour_drawing_challenge |
| 4 | 6 |  |  |  |  |  | bowtie, open_mouth, pantyhose, school_uniform, solo_focus, white_shirt, 2girls, sharp_teeth, simple_background, white_background, lace-up_boots, long_sleeves, sleeveless_dress, halterneck, multicolored_hair, smile |
| 5 | 5 |  |  |  |  |  | 1girl, pleated_skirt, purple_skirt, purple_vest, school_uniform, short_sleeves, solo, white_shirt, aqua_bowtie, grey_thighhighs, fingerless_gloves, holding, asymmetrical_legwear, cowboy_shot, sharp_teeth, smile |
| 6 | 11 |  |  |  |  |  | 1girl, solo, yukata, looking_at_viewer, food, open_mouth, sharp_teeth, smile, alternate_costume, obi, simple_background, white_background |
| 7 | 11 |  |  |  |  |  | playboy_bunny, rabbit_ears, detached_collar, fake_animal_ears, strapless_leotard, 1girl, purple_leotard, solo, wrist_cuffs, bowtie, grey_pantyhose, sharp_teeth, looking_at_viewer, simple_background, fishnet_pantyhose, grin, rabbit_tail, thighband_pantyhose, white_background, aqua_bow, purple_footwear, small_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | sharp_teeth | looking_at_viewer | happi | sarashi | simple_background | smile | white_shorts | open_mouth | white_background | blush | breasts | green_eyes | paper_fan | bowtie | dress | grey_pantyhose | halterneck | school_uniform | white_shirt | cowboy_shot | long_sleeves | white_hair | grin | one-hour_drawing_challenge | twitter_username | purple_dress | blazer | sleeves_rolled_up | mismatched_legwear | multicolored_hair | grey_thighhighs | aqua_bowtie | pantyhose | solo_focus | 2girls | lace-up_boots | sleeveless_dress | pleated_skirt | purple_skirt | purple_vest | short_sleeves | fingerless_gloves | holding | asymmetrical_legwear | yukata | food | alternate_costume | obi | playboy_bunny | rabbit_ears | detached_collar | fake_animal_ears | strapless_leotard | purple_leotard | wrist_cuffs | fishnet_pantyhose | rabbit_tail | thighband_pantyhose | aqua_bow | purple_footwear | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------|:--------------------|:--------|:----------|:--------------------|:--------|:---------------|:-------------|:-------------------|:--------|:----------|:-------------|:------------|:---------|:--------|:-----------------|:-------------|:-----------------|:--------------|:--------------|:---------------|:-------------|:-------|:-----------------------------|:-------------------|:---------------|:---------|:--------------------|:---------------------|:--------------------|:------------------|:--------------|:------------|:-------------|:---------|:----------------|:-------------------|:----------------|:---------------|:--------------|:----------------|:--------------------|:----------|:-----------------------|:---------|:-------|:--------------------|:------|:----------------|:--------------|:------------------|:-------------------|:--------------------|:-----------------|:--------------|:--------------------|:--------------|:----------------------|:-----------|:------------------|:----------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | | X | | | X | X | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | X | X | X | | | X | | | | X | | | | | X | | X | X | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | X | X | | | X | | | | X | | | | | | | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | | | X | | | | X | X | | X | X | | | | | X | | | X | X | X | | X | | | | | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | | | | | X | | | | | | | | | | | | X | X | X | | | | | | | | | | | X | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | X | X | X | | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | |
| 7 | 11 |  |  |  |  |  | X | X | X | X | | | X | | | | X | | | | | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/asashimo_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T17:59:15+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T14:27:31+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of asashimo/朝霜/朝霜 (Kantai Collection)
=============================================
This is the dataset of asashimo/朝霜/朝霜 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'long\_hair, ahoge, grey\_hair, hair\_over\_one\_eye, ponytail, grey\_eyes, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d0bc3009fc13bf712ea2cb0aee50b6a94973f2c8
|
# Dataset of etorofu (Kantai Collection)
This is the dataset of etorofu (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `braid, red_hair, twin_braids, purple_eyes, thick_eyebrows, bob_cut, hat, white_headwear, short_hair, gradient_hair, sailor_hat, multicolored_hair, ribbon, side_braid, blonde_hair, blue_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 409.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/etorofu_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 276.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/etorofu_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1094 | 587.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/etorofu_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 379.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/etorofu_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1094 | 766.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/etorofu_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/etorofu_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, playboy_bunny, rabbit_ears, detached_collar, fake_animal_ears, solo, strapless_leotard, white_gloves, looking_at_viewer, simple_background, white_background, wrist_cuffs, rabbit_tail, black_pantyhose, cowboy_shot, adapted_costume, blue_leotard, bowtie, covered_navel, small_breasts |
| 1 | 23 |  |  |  |  |  | 1girl, blue_neckerchief, blue_sailor_collar, blue_skirt, pleated_skirt, serafuku, solo, bike_shorts, long_sleeves, looking_at_viewer, shorts_under_skirt, white_gloves, open_mouth, cowboy_shot, white_background, simple_background |
| 2 | 14 |  |  |  |  |  | 1girl, bike_shorts, black_socks, blue_neckerchief, blue_sailor_collar, blue_skirt, long_sleeves, pleated_skirt, serafuku, shorts_under_skirt, solo, white_background, white_gloves, simple_background, full_body, open_mouth, looking_at_viewer |
| 3 | 12 |  |  |  |  |  | 1girl, blue_sailor_collar, serafuku, solo, upper_body, looking_at_viewer, blue_neckerchief, white_gloves, open_mouth, simple_background, white_background, long_sleeves, smile |
| 4 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, white_bikini, blush, solo, flat_chest, navel, cowboy_shot, micro_bikini, side-tie_bikini_bottom, simple_background, white_background, collarbone, white_gloves |
| 5 | 8 |  |  |  |  |  | 1girl, simple_background, solo, white_background, overalls, short_sleeves, alternate_costume, white_shirt, blush, dress, open_mouth, holding, orange_hair, shopping_bag, upper_body |
| 6 | 5 |  |  |  |  |  | 1girl, black_skirt, solo, white_shirt, bag, full_body, looking_at_viewer, official_alternate_costume, rubber_boots, simple_background, yellow_footwear, pink_umbrella, polka_dot, socks, striped_shirt, puffy_short_sleeves, white_background |
| 7 | 6 |  |  |  |  |  | 1girl, alternate_costume, bag, smile, jacket, long_sleeves, open_mouth, solo, suspender_skirt, plaid_skirt, sweater, blue_skirt, full_body, looking_at_viewer, mary_janes, pleated_skirt, simple_background, socks, white_background, white_shirt |
| 8 | 12 |  |  |  |  |  | 1girl, wide_sleeves, yukata, long_sleeves, solo, obi, smile, alternate_costume, checkered_kimono, open_mouth, blue_kimono, cotton_candy, holding, white_background, food, looking_at_viewer, simple_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | playboy_bunny | rabbit_ears | detached_collar | fake_animal_ears | solo | strapless_leotard | white_gloves | looking_at_viewer | simple_background | white_background | wrist_cuffs | rabbit_tail | black_pantyhose | cowboy_shot | adapted_costume | blue_leotard | bowtie | covered_navel | small_breasts | blue_neckerchief | blue_sailor_collar | blue_skirt | pleated_skirt | serafuku | bike_shorts | long_sleeves | shorts_under_skirt | open_mouth | black_socks | full_body | upper_body | smile | white_bikini | blush | flat_chest | navel | micro_bikini | side-tie_bikini_bottom | collarbone | overalls | short_sleeves | alternate_costume | white_shirt | dress | holding | orange_hair | shopping_bag | black_skirt | bag | official_alternate_costume | rubber_boots | yellow_footwear | pink_umbrella | polka_dot | socks | striped_shirt | puffy_short_sleeves | jacket | suspender_skirt | plaid_skirt | sweater | mary_janes | wide_sleeves | yukata | obi | checkered_kimono | blue_kimono | cotton_candy | food |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:--------------|:------------------|:-------------------|:-------|:--------------------|:---------------|:--------------------|:--------------------|:-------------------|:--------------|:--------------|:------------------|:--------------|:------------------|:---------------|:---------|:----------------|:----------------|:-------------------|:---------------------|:-------------|:----------------|:-----------|:--------------|:---------------|:---------------------|:-------------|:--------------|:------------|:-------------|:--------|:---------------|:--------|:-------------|:--------|:---------------|:-------------------------|:-------------|:-----------|:----------------|:--------------------|:--------------|:--------|:----------|:--------------|:---------------|:--------------|:------|:-----------------------------|:---------------|:------------------|:----------------|:------------|:--------|:----------------|:----------------------|:---------|:------------------|:--------------|:----------|:-------------|:---------------|:---------|:------|:-------------------|:--------------|:---------------|:-------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 23 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | | | X | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | | | | | | | | | X | X | | | X | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | | X | | | X | | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | | X | | | X | X | X | | | | | | | | | | | | X | X | | | X | | X | | X | | X | | | | | | | | | | X | X | | | | | | X | | | | | | X | | | X | X | X | X | X | | | | | | | |
| 8 | 12 |  |  |  |  |  | X | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | X | | X | | | | X | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/etorofu_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T19:04:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T08:12:02+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of etorofu (Kantai Collection)
======================================
This is the dataset of etorofu (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'braid, red\_hair, twin\_braids, purple\_eyes, thick\_eyebrows, bob\_cut, hat, white\_headwear, short\_hair, gradient\_hair, sailor\_hat, multicolored\_hair, ribbon, side\_braid, blonde\_hair, blue\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
840cdcf9ce70acca170690e944162933eb48c2ba
|
# Dataset of nachi/那智/那智 (Kantai Collection)
This is the dataset of nachi/那智/那智 (Kantai Collection), containing 415 images and their tags.
The core tags of this character are `long_hair, side_ponytail, black_hair, brown_eyes, very_long_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 415 | 349.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nachi_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 415 | 236.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nachi_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 898 | 465.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nachi_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 415 | 322.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nachi_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 898 | 593.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nachi_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nachi_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_skirt, cowboy_shot, looking_at_viewer, pencil_skirt, solo, white_gloves, white_pantyhose, simple_background, white_background, hand_on_hip, military_uniform, long_sleeves |
| 1 | 10 |  |  |  |  |  | 1girl, elbow_gloves, solo, skirt, white_gloves, brown_hair, white_pantyhose, looking_at_viewer, turret |
| 2 | 23 |  |  |  |  |  | military_uniform, 1girl, hair_between_eyes, solo, upper_body, white_gloves, simple_background, looking_at_viewer, long_sleeves, white_background, jacket, blush |
| 3 | 6 |  |  |  |  |  | 1girl, solo, upper_body, white_shirt, collared_shirt, hair_between_eyes, blush, dress_shirt, long_sleeves, looking_at_viewer, simple_background, smile, white_background |
| 4 | 12 |  |  |  |  |  | 1girl, playboy_bunny, solo, fake_animal_ears, rabbit_ears, detached_collar, cleavage, blush, large_breasts, looking_at_viewer, rabbit_tail, simple_background, wrist_cuffs, black_pantyhose, hair_between_eyes, strapless_leotard, cowboy_shot, necktie |
| 5 | 6 |  |  |  |  |  | 1girl, navel, solo, black_bikini, blush, looking_at_viewer, cleavage, large_breasts, smile, dated, gloves, simple_background |
| 6 | 5 |  |  |  |  |  | 1girl, alternate_costume, solo, yukata, looking_at_viewer, obi, twitter_username, fireworks, floral_print, one-hour_drawing_challenge, purple_kimono, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | cowboy_shot | looking_at_viewer | pencil_skirt | solo | white_gloves | white_pantyhose | simple_background | white_background | hand_on_hip | military_uniform | long_sleeves | elbow_gloves | skirt | brown_hair | turret | hair_between_eyes | upper_body | jacket | blush | white_shirt | collared_shirt | dress_shirt | smile | playboy_bunny | fake_animal_ears | rabbit_ears | detached_collar | cleavage | large_breasts | rabbit_tail | wrist_cuffs | black_pantyhose | strapless_leotard | necktie | navel | black_bikini | dated | gloves | alternate_costume | yukata | obi | twitter_username | fireworks | floral_print | one-hour_drawing_challenge | purple_kimono |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:--------------|:--------------------|:---------------|:-------|:---------------|:------------------|:--------------------|:-------------------|:--------------|:-------------------|:---------------|:---------------|:--------|:-------------|:---------|:--------------------|:-------------|:---------|:--------|:--------------|:-----------------|:--------------|:--------|:----------------|:-------------------|:--------------|:------------------|:-----------|:----------------|:--------------|:--------------|:------------------|:--------------------|:----------|:--------|:---------------|:--------|:---------|:--------------------|:---------|:------|:-------------------|:------------|:---------------|:-----------------------------|:----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | | X | | X | X | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 23 |  |  |  |  |  | X | | | X | | X | X | | X | X | | X | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | X | | | X | X | | | X | | | | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 12 |  |  |  |  |  | X | | X | X | | X | | | X | | | | | | | | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | X | | X | | | X | | | | | | | | | | | | X | | | | X | | | | | X | X | | | | | | X | X | X | X | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/nachi_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T19:05:27+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T09:25:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nachi/那智/那智 (Kantai Collection)
==========================================
This is the dataset of nachi/那智/那智 (Kantai Collection), containing 415 images and their tags.
The core tags of this character are 'long\_hair, side\_ponytail, black\_hair, brown\_eyes, very\_long\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
74e8369e0ea5c000bbf87f8e39ef22ee86562923
|
This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/lilac](https://huggingface.co/spaces/lilacai/lilac).
Lilac dataset config:
```yaml
namespace: lilac
name: the_movies_dataset
source:
filepaths:
- https://storage.googleapis.com/lilac-data/datasets/the_movies_dataset/the_movies_dataset.csv
source_name: csv
embeddings:
- path: overview
embedding: gte-small
signals:
- path: overview
signal:
signal_name: near_dup
- path: overview
signal:
signal_name: pii
- path: overview
signal:
signal_name: lang_detection
- path: overview
signal:
signal_name: text_statistics
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path: overview
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path: overview
signal:
signal_name: cluster_dbscan
- path: overview
signal:
embedding: gte-small
signal_name: cluster_hdbscan
settings:
ui:
media_paths:
- overview
markdown_paths: []
tags:
- other
```
|
lilacai/lilac-the_movies_dataset
|
[
"region:us"
] |
2023-08-21T19:26:10+00:00
|
{}
|
2023-12-07T13:57:40+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset is generated by Lilac for a HuggingFace Space: URL
Lilac dataset config:
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
d4ec2452349f36873820d697d6898ce039018a87
|
# Dataset Card for "Application_110K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/Application_110K
|
[
"region:us"
] |
2023-08-21T19:33:06+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "log", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31417397, "num_examples": 100000}, {"name": "validation", "num_bytes": 3119424, "num_examples": 10000}], "download_size": 6859931, "dataset_size": 34536821}}
|
2023-08-21T19:33:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Application_110K"
More Information needed
|
[
"# Dataset Card for \"Application_110K\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Application_110K\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Application_110K\"\n\nMore Information needed"
] |
fe8bfda22d0228e88b449e8cdb64b6a08e34edd8
|
# Estimating Solar Irradiance with Image Regression
- **Homepage:** [Sage Continuum](https://sagecontinuum.org/)
- **Author:** Alex Shen, Northwestern University
- **Mentors:** Bhupendra Raut, Seongha Park
- **Repository:** [GitHub Repository](https://github.com/waggle-sensor/summer2023/tree/main/Shen)
# Goal and Importance
Our goal was to create a model to estimate solar irradiance in the sky based on ground images taken from waggle nodes. This could help in the following ways:
- Solar energy generation: It could help in predicting energy generation more accurately resulting in improved efficiency and grid management
- Weather forecasting: Could assist meteorologists in predicting weather patterns using solar irradiance levels, and in analyzing current weather conditions
- Climate change: Would help with modeling climate change, could contribute to understanding and assist in mitigating global warming
- Smart Homes: Would be able to help smart homes manage energy more efficiently (control certain devices based on irradiance levels)
# Data Preprocessing
In the data preprocessing stage we created a CSV file that mapped each image to its matching solar irradiance value. The images were taken from the Sage Waggle Node's top camera and the solar irradiance values were taken from the Argonne National Laboratory tower readings. We made sure to exclude night-time photos since there is no sun, and we exclusively used summer-time photos because we wanted a seasonal model that could make estimates more consistently. We also eventually downsized the original 2000x2000 images to 500x500, since training was taking too long with the larger images.

*Example training image taken from waggle node W039*, 2000x2000 pixels
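The pairing step above can be sketched roughly as follows. This is a minimal illustration only: the function name, record layout, and daylight cutoff hours are assumptions for the sketch, not the project's actual code.

```python
import csv
from datetime import datetime

def build_index(image_records, irradiance_by_time, out_csv,
                first_hour=5, last_hour=20):
    """Pair each daytime image with the tower irradiance reading at its
    timestamp and write the (image, irradiance) pairs to a CSV file.

    image_records: iterable of (filename, ISO-8601 timestamp) tuples
    irradiance_by_time: dict mapping timestamp -> irradiance in W/m^2
    first_hour/last_hour: assumed daylight window used to drop night photos
    """
    rows = []
    for filename, ts in image_records:
        hour = datetime.fromisoformat(ts).hour
        # skip night-time photos: no sun, so nothing to estimate
        if not (first_hour <= hour < last_hour):
            continue
        if ts in irradiance_by_time:
            rows.append({"image": filename,
                         "irradiance": irradiance_by_time[ts]})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["image", "irradiance"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```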
# Training and Model
In our training, before the image was transformed to a tensor, it was resized down to 224x224 to stay consistent with the pre-trained models. The image was also randomly flipped with a 50% chance and rotated randomly between 0-359 degrees so the model would generalize better. For our model we compared all of the pretrained ResNet models and the VGG-16 model, replacing the last fc layer so that the model would give us a continuous value as an estimate instead of a range. We found that the ResNet-50 model performed the best, with the lowest mean absolute error of 82. All in all, I think that the error was small enough to justify creating the plugin. In the plugin, the waggle node simply snaps an image of the sky using its top camera, notes the solar irradiance that the model predicts, and publishes it to the Beehive Repository.
# Graphs

<br>
_Graph showing the # of times that each margin of error appeared in our testing images. For example, the model predicting 10 when the irradiance is 20 would result in an error of 10, raising the first bar of the bar graph one occurrence higher_
<br>

_This graph plots the predicted irradiance of a test image against its actual irradiance value. The dots center mostly around the y=x line, meaning the model predicts accurately on average. Also, since there are points both above and below the line, the model is not biased towards either overestimating or underestimating, which also helps it predict well on average_
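The two evaluation summaries above (mean absolute error and the margin-of-error histogram) can be sketched with a small helper like the one below; the function name and bucket width are illustrative assumptions.

```python
def error_report(predicted, actual, bin_width=10):
    """Summarise regression quality: mean absolute error plus a histogram
    of error margins, as in the bar graph described above."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mae = sum(errors) / len(errors)
    # count how often each margin-of-error bucket occurs,
    # keyed by the bucket's lower edge (0, 10, 20, ...)
    histogram = {}
    for e in errors:
        bucket = int(e // bin_width) * bin_width
        histogram[bucket] = histogram.get(bucket, 0) + 1
    return mae, histogram
```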
# Future Directions
- Increase training data to decrease MAE
- Work on seeing through thin cloud layers, since thin clouds covering the image cause the model to severely underestimate the irradiance value
- Work on identifying correct irradiance values during sunsets and sunrises. The model occasionally overestimates irradiance when the sun is at the image's perimeter due to greater light exposure in the image
- Implement a feature to forecast solar irradiance levels based on the patterns of data gathered
|
sagecontinuum/solarirradiancedataset
|
[
"license:mit",
"climate",
"region:us"
] |
2023-08-21T19:41:01+00:00
|
{"license": "mit", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "irradiance", "dtype": "float32"}], "splits": [{"name": "full", "num_bytes": 13466250, "num_examples": 1000}], "download_size": 14234112, "dataset_size": 13466250}, "tags": ["climate"]}
|
2023-09-11T19:56:09+00:00
|
[] |
[] |
TAGS
#license-mit #climate #region-us
|
# Estimating Solar Irradiance with Image Regression
- Homepage: Sage Continuum
- Author: Alex Shen, Northwestern University
- Mentors: Bhupendra Raut, Seongha Park
- Repository: GitHub Repository
# Goal and Importance
Our goal was to create a model to estimate solar irradiance in the sky based on ground images taken from waggle nodes. This could help in the following ways:
- Solar energy generation: It could help in predicting energy generation more accurately resulting in improved efficiency and grid management
- Weather forecasting: Could assist meteorologists in predicting weather patterns using solar irradiance levels, and in analyzing current weather conditions
- Climate change: Would help with modeling climate change, could contribute to understanding and assist in mitigating global warming
- Smart Homes: Would be able to help smart homes manage energy more efficiently (control certain devices based on irradiance levels)
# Data Preprocessing
In the data preprocessing stage we created a CSV file that mapped each image to its matching solar irradiance value. The images were taken from the Sage Waggle Node's top camera and the solar irradiance values were taken from the Argonne National Laboratory tower readings. We made sure to exclude night-time photos since there is no sun, and we exclusively used summer-time photos because we wanted a seasonal model that could make estimates more consistently. We also eventually downsized the original 2000x2000 images to 500x500, since training was taking too long with the larger images.
!alt text
*Example training image taken from waggle node W039*, 2000x2000 pixels
# Training and Model
In our training, before the image was transformed to a tensor, it was resized down to 224x224 to stay consistent with the pre-trained models. The image was also randomly flipped with a 50% chance and rotated randomly between 0-359 degrees so the model would generalize better. For our model we compared all of the pretrained ResNet models and the VGG-16 model, replacing the last fc layer so that the model would give us a continuous value as an estimate instead of a range. We found that the ResNet-50 model performed the best, with the lowest mean absolute error of 82. All in all, I think that the error was small enough to justify creating the plugin. In the plugin, the waggle node simply snaps an image of the sky using its top camera, notes the solar irradiance that the model predicts, and publishes it to the Beehive Repository.
# Graphs
!alt text
<br>
_Graph showing the # of times that each margin of error appeared in our testing images. For example, the model predicting 10 when the irradiance is 20 would result in an error of 10, raising the first bar of the bar graph one occurrence higher_
<br>
!alt text
_This graph plots the predicted irradiance of a test image against its actual irradiance value. The dots center mostly around the y=x line, meaning the model predicts accurately on average. Also, since there are points both above and below the line, the model is not biased towards either overestimating or underestimating, which also helps it predict well on average_
# Future Directions
- Increase training data to decrease MAE
- Work on seeing through thin cloud layers, since thin clouds covering the image cause the model to severely underestimate the irradiance value
- Work on identifying correct irradiance values during sunsets and sunrises. The model occasionally overestimates irradiance when the sun is at the image's perimeter due to greater light exposure in the image
- Implement a feature to forecast solar irradiance levels based on the patterns of data gathered
|
[
"# Estimating Solar Irradiance with Image Regression\n- Homepage: Sage Continuum\n- Author: Alex Shen, Northwestern University\n- Mentors: Bhupendra Raut, Seongha Park\n- Repository: GitHub Repository",
"# Goal and Importance\nOur goal was to create a model to estimate solar irradiance in the sky based on ground images taken from waggle nodes. This could help in the following ways:\n- Solar energy generation: It could help in predicting energy generation more accurately resulting in improved efficiency and grid management\n- Weather forecasting- Could assist meteorologists in predicting weather patterns using solar irradiance levels, and in analyzing current weather conditions\n- Climate change: Would help with modeling climate change, could contribute to understanding and assist in mitigating global warming\n- Smart Homes: Would be able to help smart homes manage energy more efficiently (control certain devices based on irradiance levels)",
"# Data Preprocessing\nIn the data preprocessing stage we created a csv file that stored all the images to their matching solar irradiance values. The images were taken from the Sage Waggle Node's top camera and the solar irradiance values were taken from the Argonne National Laboratory tower readings. We made sure to exclude night time photos since there is no sun and we exclusively used summer-time photos as we wanted to stick to a seasonal model that would be able to make estimates more consistently. Furthermore we also eventually downsized the images original 2000x2000 images to 500x500 images since the training was taking a bit too long when the images were larger.\n\n!alt text\n*Example training image taken from waggle node W039*, 2000x2000 pixels",
"# Training and Model\nIn our training, before the image was transformed to a tensor, the image was resized down to 224x224 to stay consistent with the pre-trained models. The image was also randomly flipped with a 50% chance and rotated randomly between 0-359 degrees so the model would be able to generalize better. For our model we compared all of the pretrained ResNet models and the VGG-16 model. However we replaced the last fc layer so that the model would give us a continuous value as an estimate instead of a range. We found that the ResNet 50 model performed the best with the lowest mean absolute error of 82. All in all, I think that the error was small enough to justify creating the plugin. In the plugin the waggle node simply snaps an image of the sky using its top camera, and notes the solar irradiance that the model predicts and publishes it to the Beehive Repository.",
"# Graphs\n!alt text\n\n<br>\n\n_Graph showing the # of times that each margin of error appeared in our tesing images. For example, the model predicting 10 when the irradiance is 20 would result in an error of 10, raising the first bar of the bar graph 1 occurence higher_\n\n<br>\n\n!alt text\n\n_This graph plots the predicted irradiance of a test image against its actual irradiance value. The dots are centering mostly around the y=x line meaning the model is predicting accurately on average. Also since there are points both above and below the line the model is not biased towards either overestimating or underestimating also causing it to predict well on average_",
"# Future Directions\n- Increase training data to decrease MAE\n- Work around identifying through the thin cloud layers since it causes mistakes in the model by severely underestimating the irradiance value due to thin clouds covering the image\n- Work on identifying correct irradiance values during sunsets and sunrises. The model occasionally overestimates irradiance when the sun is at its perimeter due to greater light exposure in the image\n- Implement a feature to forecast solar irradiance levels based on the patterns of data gathered"
] |
[
"TAGS\n#license-mit #climate #region-us \n",
"# Estimating Solar Irradiance with Image Regression\n- Homepage: Sage Continuum\n- Author: Alex Shen, Northwestern University\n- Mentors: Bhupendra Raut, Seongha Park\n- Repository: GitHub Repository",
"# Goal and Importance\nOur goal was to create a model to estimate solar irradiance in the sky based on ground images taken from waggle nodes. This could help in the following ways:\n- Solar energy generation: It could help in predicting energy generation more accurately resulting in improved efficiency and grid management\n- Weather forecasting- Could assist meteorologists in predicting weather patterns using solar irradiance levels, and in analyzing current weather conditions\n- Climate change: Would help with modeling climate change, could contribute to understanding and assist in mitigating global warming\n- Smart Homes: Would be able to help smart homes manage energy more efficiently (control certain devices based on irradiance levels)",
"# Data Preprocessing\nIn the data preprocessing stage we created a csv file that stored all the images to their matching solar irradiance values. The images were taken from the Sage Waggle Node's top camera and the solar irradiance values were taken from the Argonne National Laboratory tower readings. We made sure to exclude night time photos since there is no sun and we exclusively used summer-time photos as we wanted to stick to a seasonal model that would be able to make estimates more consistently. Furthermore we also eventually downsized the images original 2000x2000 images to 500x500 images since the training was taking a bit too long when the images were larger.\n\n!alt text\n*Example training image taken from waggle node W039*, 2000x2000 pixels",
"# Training and Model\nIn our training, before the image was transformed to a tensor, the image was resized down to 224x224 to stay consistent with the pre-trained models. The image was also randomly flipped with a 50% chance and rotated randomly between 0-359 degrees so the model would be able to generalize better. For our model we compared all of the pretrained ResNet models and the VGG-16 model. However we replaced the last fc layer so that the model would give us a continuous value as an estimate instead of a range. We found that the ResNet 50 model performed the best with the lowest mean absolute error of 82. All in all, I think that the error was small enough to justify creating the plugin. In the plugin the waggle node simply snaps an image of the sky using its top camera, and notes the solar irradiance that the model predicts and publishes it to the Beehive Repository.",
"# Graphs\n!alt text\n\n<br>\n\n_Graph showing the # of times that each margin of error appeared in our tesing images. For example, the model predicting 10 when the irradiance is 20 would result in an error of 10, raising the first bar of the bar graph 1 occurence higher_\n\n<br>\n\n!alt text\n\n_This graph plots the predicted irradiance of a test image against its actual irradiance value. The dots are centering mostly around the y=x line meaning the model is predicting accurately on average. Also since there are points both above and below the line the model is not biased towards either overestimating or underestimating also causing it to predict well on average_",
"# Future Directions\n- Increase training data to decrease MAE\n- Work around identifying through the thin cloud layers since it causes mistakes in the model by severely underestimating the irradiance value due to thin clouds covering the image\n- Work on identifying correct irradiance values during sunsets and sunrises. The model occasionally overestimates irradiance when the sun is at its perimeter due to greater light exposure in the image\n- Implement a feature to forecast solar irradiance levels based on the patterns of data gathered"
] |
[
15,
56,
153,
179,
211,
157,
122
] |
[
"passage: TAGS\n#license-mit #climate #region-us \n# Estimating Solar Irradiance with Image Regression\n- Homepage: Sage Continuum\n- Author: Alex Shen, Northwestern University\n- Mentors: Bhupendra Raut, Seongha Park\n- Repository: GitHub Repository# Goal and Importance\nOur goal was to create a model to estimate solar irradiance in the sky based on ground images taken from waggle nodes. This could help in the following ways:\n- Solar energy generation: It could help in predicting energy generation more accurately resulting in improved efficiency and grid management\n- Weather forecasting- Could assist meteorologists in predicting weather patterns using solar irradiance levels, and in analyzing current weather conditions\n- Climate change: Would help with modeling climate change, could contribute to understanding and assist in mitigating global warming\n- Smart Homes: Would be able to help smart homes manage energy more efficiently (control certain devices based on irradiance levels)# Data Preprocessing\nIn the data preprocessing stage we created a csv file that stored all the images to their matching solar irradiance values. The images were taken from the Sage Waggle Node's top camera and the solar irradiance values were taken from the Argonne National Laboratory tower readings. We made sure to exclude night time photos since there is no sun and we exclusively used summer-time photos as we wanted to stick to a seasonal model that would be able to make estimates more consistently. Furthermore we also eventually downsized the images original 2000x2000 images to 500x500 images since the training was taking a bit too long when the images were larger.\n\n!alt text\n*Example training image taken from waggle node W039*, 2000x2000 pixels"
] |
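The preprocessing step described in the passage above — pairing each sky image with the matching tower irradiance reading and excluding nighttime frames — can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the dictionary layout, CSV column names, and the nighttime cutoff hours are all assumptions.

```python
import csv
from datetime import datetime

def build_irradiance_csv(image_times, tower_readings, out_path,
                         night_start=20, night_end=6):
    """Pair each image (keyed by capture time) with the tower irradiance
    reading taken at the same timestamp, skipping nighttime frames.

    image_times: dict mapping image filename -> datetime of capture
    tower_readings: dict mapping datetime -> irradiance (W/m^2)
    The nighttime window and column names are illustrative assumptions.
    """
    rows = []
    for filename, ts in sorted(image_times.items()):
        # Exclude nighttime photos: no sun, so no useful irradiance signal.
        if ts.hour >= night_start or ts.hour < night_end:
            continue
        if ts in tower_readings:
            rows.append((filename, tower_readings[ts]))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "irradiance_wm2"])
        writer.writerows(rows)
    return rows
```

In practice the timestamps from the camera and the tower would need to be aligned to a common sampling interval before an exact-match lookup like this works.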
dae0b056c40a66bfa9f62cfb0587fd6af7be4272
|
This dataset is generated by [Lilac](http://lilacml.com) for a HuggingFace Space: [huggingface.co/spaces/lilacai/lilac](https://huggingface.co/spaces/lilacai/lilac).
Original dataset: [https://huggingface.co/datasets/opus100](https://huggingface.co/datasets/opus100)
Lilac dataset config:
```yaml
namespace: lilac
name: opus100-en-es-validation
source:
dataset_name: opus100
config_name: en-es
split: validation
source_name: huggingface
embeddings:
- path:
- translation
- en
embedding: gte-small
- path:
- translation
- es
embedding: gte-small
signals:
- path:
- translation
- en
signal:
signal_name: near_dup
- path:
- translation
- en
signal:
signal_name: pii
- path:
- translation
- en
signal:
signal_name: lang_detection
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- translation
- en
signal:
signal_name: text_statistics
- path:
- translation
- es
signal:
signal_name: near_dup
- path:
- translation
- es
signal:
signal_name: pii
- path:
- translation
- es
signal:
signal_name: lang_detection
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- translation
- es
signal:
signal_name: text_statistics
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- translation
- es
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: legal-termination
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: negative-sentiment
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: non-english
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: positive-sentiment
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: profanity
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: question
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: source-code
signal_name: concept_score
- path:
- translation
- en
signal:
embedding: gte-small
namespace: lilac
concept_name: toxicity
signal_name: concept_score
- path:
- translation
- es
signal:
signal_name: cluster_dbscan
- path:
- translation
- en
signal:
signal_name: cluster_dbscan
- path:
- translation
- es
signal:
embedding: gte-small
signal_name: cluster_hdbscan
- path:
- translation
- en
signal:
embedding: gte-small
signal_name: cluster_hdbscan
settings:
ui:
media_paths:
- - translation
- es
- - translation
- en
markdown_paths: []
tags:
- machine-learning
```
|
lilacai/lilac-opus100-en-es-validation
|
[
"region:us"
] |
2023-08-21T19:48:27+00:00
|
{}
|
2023-12-07T13:57:30+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset is generated by Lilac for a HuggingFace Space: URL
Original dataset: URL
Lilac dataset config:
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
f2ed6f49608abe3cd9b4e294a1913e4d3a35a936
|
# Dataset of kinugasa (Kantai Collection)
This is the dataset of kinugasa (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `green_eyes, grey_hair, antenna_hair, breasts, long_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 696.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinugasa_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 398.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinugasa_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1215 | 833.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinugasa_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 621.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kinugasa_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1215 | 1.16 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kinugasa_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kinugasa_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
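If you only need the image/tag pairs from the `800` or `1200` packages (type IMG+TXT), waifuc is not strictly required. A minimal sketch is shown below; it assumes the conventional IMG+TXT layout in which each image sits next to a tag file with the same stem (e.g. `foo.png` alongside `foo.txt`) containing comma-separated tags — verify this against the extracted archive before relying on it.

```python
import os

def load_tag_pairs(dataset_dir, exts=(".png", ".jpg", ".webp")):
    """Yield (image_path, tags) for every image with a sibling .txt file.

    Assumes the IMG+TXT convention: each image has a tag file sharing its
    stem, holding comma-separated tags (an assumption, not a guarantee).
    """
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in exts:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        if os.path.exists(txt_path):
            with open(txt_path, encoding="utf-8") as f:
                tags = [t.strip() for t in f.read().split(",") if t.strip()]
            yield os.path.join(dataset_dir, name), tags
```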
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, beach, blue_sky, cleavage, day, looking_at_viewer, navel, ocean, outdoors, solo, yellow_bikini, cowboy_shot, horizon, smile, cloud, standing, medium_hair, sand |
| 1 | 5 |  |  |  |  |  | 1girl, blue_sky, cloud, day, horizon, navel, ocean, open_mouth, outdoors, smile, solo, standing, water, barefoot, beach, looking_at_viewer, yellow_bikini, cleavage, hair_tie, medium_hair, running, feet_out_of_frame |
| 2 | 5 |  |  |  |  |  | 1girl, blue_sky, cloud, cowboy_shot, day, frilled_bikini, looking_at_viewer, official_alternate_costume, outdoors, short_hair, short_twintails, side-tie_bikini_bottom, solo, white_shirt, beachball, blue_bikini, floral_print, large_breasts, ocean, smile, standing, tied_shirt |
| 3 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, navel, solo, alternate_costume, cleavage, full_body, side-tie_bikini_bottom, standing, yellow_bikini, barefoot, gold_bikini, large_breasts, open_mouth |
| 4 | 5 |  |  |  |  |  | 1girl, beachball, cleavage, looking_at_viewer, official_alternate_costume, sarong, solo, yellow_bikini, floral_print, navel, single_braid, collarbone, hair_over_shoulder, hair_tie, open_mouth, full_body, large_breasts, medium_hair, sandals, side-tie_bikini_bottom, sitting |
| 5 | 8 |  |  |  |  |  | 1girl, bikini, collarbone, looking_at_viewer, bangs, cleavage, simple_background, solo, smile, alternate_costume, yellow_background, barefoot, full_body, standing, upper_body |
| 6 | 5 |  |  |  |  |  | 1girl, alternate_costume, full_body, solo, yellow_shirt, hair_tie, long_sleeves, looking_at_viewer, red_footwear, sneakers, standing, white_skirt, one_side_up, open_mouth, smile, bangs, pink_background, shorts, simple_background, white_background |
| 7 | 8 |  |  |  |  |  | 1girl, alternate_costume, black_footwear, black_pantyhose, full_body, simple_background, sweater, white_background, solo, standing, long_sleeves, smile, looking_at_viewer, white_coat, black_skirt, high_heels, dress, fur-trimmed_coat, holding, open_mouth, scrunchie, shoes |
| 8 | 9 |  |  |  |  |  | 1girl, black_shirt, simple_background, official_alternate_costume, polka_dot_shirt, white_background, green_skirt, jacket, coat, smile, full_body, medium_hair, solo_focus, standing |
| 9 | 9 |  |  |  |  |  | 1girl, serafuku, short_sleeves, upper_body, blue_sailor_collar, looking_at_viewer, one_side_up, solo, white_background, simple_background, yellow_necktie, smile, gloves, open_mouth, blush, neckerchief |
| 10 | 21 |  |  |  |  |  | pleated_skirt, serafuku, yellow_necktie, 1girl, hair_tie, looking_at_viewer, solo, purple_skirt, simple_background, smile, one_side_up, purple_sailor_collar, black_thighhighs, white_background, black_gloves, blue_skirt |
| 11 | 26 |  |  |  |  |  | 1girl, alternate_costume, detached_collar, rabbit_ears, looking_at_viewer, playboy_bunny, simple_background, fake_animal_ears, solo, wrist_cuffs, bowtie, cleavage, strapless_leotard, white_background, black_pantyhose, open_mouth, cowboy_shot |
| 12 | 5 |  |  |  |  |  | 1girl, cleavage, navel, solo, simple_background, underwear_only, yellow_bra, collarbone, looking_at_viewer, lying, white_background, yellow_panties, arms_up, bangs, blush, medium_hair, smile |
| 13 | 5 |  |  |  |  |  | blush, large_breasts, nipples, nude, solo_focus, 1boy, 1girl, hair_tie, hetero, mosaic_censoring, sweat, navel, pussy, smile, breast_grab, collarbone, grabbing, lying, one_side_up, open_mouth, penis, tears, trembling |
| 14 | 9 |  |  |  |  |  | 1girl, obi, solo, alternate_costume, floral_print, looking_at_viewer, smile, wide_sleeves, open_mouth, print_kimono, hair_ornament, long_sleeves, ahoge, alternate_hairstyle, flower, new_year |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | beach | blue_sky | cleavage | day | looking_at_viewer | navel | ocean | outdoors | solo | yellow_bikini | cowboy_shot | horizon | smile | cloud | standing | medium_hair | sand | open_mouth | water | barefoot | hair_tie | running | feet_out_of_frame | frilled_bikini | official_alternate_costume | short_hair | short_twintails | side-tie_bikini_bottom | white_shirt | beachball | blue_bikini | floral_print | large_breasts | tied_shirt | alternate_costume | full_body | gold_bikini | sarong | single_braid | collarbone | hair_over_shoulder | sandals | sitting | bikini | bangs | simple_background | yellow_background | upper_body | yellow_shirt | long_sleeves | red_footwear | sneakers | white_skirt | one_side_up | pink_background | shorts | white_background | black_footwear | black_pantyhose | sweater | white_coat | black_skirt | high_heels | dress | fur-trimmed_coat | holding | scrunchie | shoes | black_shirt | polka_dot_shirt | green_skirt | jacket | coat | solo_focus | serafuku | short_sleeves | blue_sailor_collar | yellow_necktie | gloves | blush | neckerchief | pleated_skirt | purple_skirt | purple_sailor_collar | black_thighhighs | black_gloves | blue_skirt | detached_collar | rabbit_ears | playboy_bunny | fake_animal_ears | wrist_cuffs | bowtie | strapless_leotard | underwear_only | yellow_bra | lying | yellow_panties | arms_up | nipples | nude | 1boy | hetero | mosaic_censoring | sweat | pussy | breast_grab | grabbing | penis | tears | trembling | obi | wide_sleeves | print_kimono | hair_ornament | ahoge | alternate_hairstyle | flower | new_year |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:-----------|:-----------|:------|:--------------------|:--------|:--------|:-----------|:-------|:----------------|:--------------|:----------|:--------|:--------|:-----------|:--------------|:-------|:-------------|:--------|:-----------|:-----------|:----------|:--------------------|:-----------------|:-----------------------------|:-------------|:------------------|:-------------------------|:--------------|:------------|:--------------|:---------------|:----------------|:-------------|:--------------------|:------------|:--------------|:---------|:---------------|:-------------|:---------------------|:----------|:----------|:---------|:--------|:--------------------|:--------------------|:-------------|:---------------|:---------------|:---------------|:-----------|:--------------|:--------------|:------------------|:---------|:-------------------|:-----------------|:------------------|:----------|:-------------|:--------------|:-------------|:--------|:-------------------|:----------|:------------|:--------|:--------------|:------------------|:--------------|:---------|:-------|:-------------|:-----------|:----------------|:---------------------|:-----------------|:---------|:--------|:--------------|:----------------|:---------------|:-----------------------|:-------------------|:---------------|:-------------|:------------------|:--------------|:----------------|:-------------------|:--------------|:---------|:--------------------|:-----------------|:-------------|:--------|:-----------------|:----------|:----------|:-------|:-------|:---------|:-------------------|:--------|:--------|:--------------|:-----------|:--------|:--------|:------------|:------|:---------------|:---------------|:----------------|:--------|:----------------------|:---------|:-----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | | X | X | | X | X | X | | X | | X | X | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | X | | X | X | | | X | X | | | | | X | | | X | | X | | | | | | | | X | | | | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | X | | X | X | | | X | X | | | | | | X | | X | | | X | | | | X | | | X | | X | | X | X | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | X | | X | | | | X | | | | X | | X | | | | | X | | | | | | | | | | | | | | | X | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | X | | | X | | | X | | | | | | | | | | | | | | X | X | | | | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | X | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 9 |  |  |  |  |  | X | | | | | | | | | | | | | X | | X | X | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 21 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | X | | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 26 |  |  |  |  |  | X | | | X | | X | | | | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 5 |  |  |  |  |  | X | | | X | | X | X | | | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 13 | 5 |  |  |  |  |  | X | | | | | | X | | | | | | | X | | | | | X | | | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 14 | 9 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | | | | X | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/kinugasa_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T19:56:11+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T08:19:18+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kinugasa (Kantai Collection)
=======================================
This is the dataset of kinugasa (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'green\_eyes, grey\_hair, antenna\_hair, breasts, long\_hair, medium\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
3150cd1049e57d4f10e1a2385bd78c87d5e36080
|
# Dataset of battleship_symbiotic_hime/戦艦棲姫 (Kantai Collection)
This is the dataset of battleship_symbiotic_hime/戦艦棲姫 (Kantai Collection), containing 290 images and their tags.
The core tags of this character are `black_hair, horns, long_hair, red_eyes, oni_horns, breasts, pale_skin, very_long_hair, large_breasts, hair_between_eyes, glowing_eyes, colored_skin, white_skin`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 290 | 270.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battleship_symbiotic_hime_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 290 | 193.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battleship_symbiotic_hime_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 562 | 343.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battleship_symbiotic_hime_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 290 | 254.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battleship_symbiotic_hime_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 562 | 425.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/battleship_symbiotic_hime_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/battleship_symbiotic_hime_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, abyssal_ship, black_dress, glowing, looking_at_viewer, short_dress, solo, cleavage |
| 1 | 5 |  |  |  |  |  | 1girl, abyssal_ship, black_dress, cleavage, glowing, looking_at_viewer, short_dress, solo, medium_breasts, spaghetti_strap |
| 2 | 7 |  |  |  |  |  | 1girl, abyssal_ship, black_bikini, side-tie_bikini_bottom, cleavage, smile, solo, glowing, looking_at_viewer, navel, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | abyssal_ship | black_dress | glowing | looking_at_viewer | short_dress | solo | cleavage | medium_breasts | spaghetti_strap | black_bikini | side-tie_bikini_bottom | smile | navel | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:----------|:--------------------|:--------------|:-------|:-----------|:-----------------|:------------------|:---------------|:-------------------------|:--------|:--------|:-------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | X | X | | X | X | | | X | X | X | X | X |
|
CyberHarem/battleship_symbiotic_hime_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T19:57:30+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:01:48+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of battleship\_symbiotic\_hime/戦艦棲姫 (Kantai Collection)
===============================================================
This is the dataset of battleship\_symbiotic\_hime/戦艦棲姫 (Kantai Collection), containing 290 images and their tags.
The core tags of this character are 'black\_hair, horns, long\_hair, red\_eyes, oni\_horns, breasts, pale\_skin, very\_long\_hair, large\_breasts, hair\_between\_eyes, glowing\_eyes, colored\_skin, white\_skin', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
3312fd13e84f09dfbf307b108e01e379cb497f2f
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_110K_5_L2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/AA_ApplicationDistilRoBERTa_110K_5_L2
|
[
"region:us"
] |
2023-08-21T20:26:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, 
{"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, {"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": 
"106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": 
"160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": "190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": 
"214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": 
"268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": "286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": 
"322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": 
"376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": "382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": 
"430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": "478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": 
"484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": 
"538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": "574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": 
"592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": 
"646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": "670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": 
"700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": 
"754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": "766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80318780.21618997, "num_examples": 26057}, {"name": "test", "num_bytes": 26774087.073587257, "num_examples": 8686}], "download_size": 147218699, "dataset_size": 107092867.28977722}}
|
2023-08-21T20:30:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_110K_5_L2"
More Information needed
|
[
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_L2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_L2\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_L2\"\n\nMore Information needed"
] |
2f5e46ae6a69cf0dce4b12f78241c408936ca0e4
|
This is a backup for the pile val dataset downloaded from here: `https://the-eye.eu/public/AI/pile/val.jsonl.zst`
Please respect the original license of the dataset.
|
mit-han-lab/pile-val-backup
|
[
"region:us"
] |
2023-08-21T20:33:21+00:00
|
{}
|
2023-08-21T20:37:19+00:00
|
[] |
[] |
TAGS
#region-us
|
This is a backup for the pile val dataset downloaded from here: 'URL
Please respect the original license of the dataset.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
ba49c3fdf695826aa6b5532738864b1d0a1ab59a
|
# Dataset of hatsushimo/初霜 (Kantai Collection)
This is the dataset of hatsushimo/初霜 (Kantai Collection), containing 453 images and their tags.
The core tags of this character are `black_hair, long_hair, low-tied_long_hair, headband, red_eyes, hair_between_eyes, blue_headband, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 453 | 347.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 453 | 248.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 992 | 493.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 453 | 327.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 992 | 617.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hatsushimo_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/hatsushimo_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 30 |  |  |  |  |  | 1girl, blazer, school_uniform, solo, pleated_skirt, single_thighhigh, simple_background, looking_at_viewer, black_thighhighs, white_background, black_skirt, open_mouth, shirt, turret |
| 1 | 31 |  |  |  |  |  | pleated_skirt, single_thighhigh, 1girl, blazer, solo, black_skirt, school_uniform, single_kneehigh, looking_at_viewer, open_mouth, uneven_legwear, blush, simple_background, white_background, smile, white_shirt, black_jacket, full_body |
| 2 | 7 |  |  |  |  |  | 1girl, black_jacket, blazer, blush, collared_shirt, looking_at_viewer, school_uniform, white_shirt, black_skirt, long_sleeves, open_mouth, red_necktie, pleated_skirt, solo, :d, gradient_background |
| 3 | 19 |  |  |  |  |  | 1girl, blazer, solo, upper_body, looking_at_viewer, school_uniform, simple_background, white_shirt, collared_shirt, smile, black_jacket, white_background |
| 4 | 8 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, collarbone, simple_background, twitter_username, small_breasts, white_background, bikini, cowboy_shot, navel, jacket, one-piece_swimsuit, school_swimsuit, smile |
| 5 | 11 |  |  |  |  |  | fake_animal_ears, playboy_bunny, rabbit_ears, 1girl, blush, detached_collar, solo, looking_at_viewer, wrist_cuffs, black_leotard, open_mouth, bowtie, rabbit_tail, small_breasts, covered_navel, smile, strapless_leotard, bare_shoulders, black_pantyhose, fake_tail, simple_background, twitter_username, white_background, alternate_costume, brown_pantyhose, cowboy_shot, cropped_legs, high_heels, white_gloves |
| 6 | 13 |  |  |  |  |  | gym_shirt, gym_uniform, 1girl, blue_buruma, solo, white_shirt, short_sleeves, simple_background, white_background, looking_at_viewer, blush, character_name, cowboy_shot, sidelocks, very_long_hair, dated, kneehighs, open_mouth, flexible, shoes, stretching, white_footwear |
| 7 | 5 |  |  |  |  |  | 1girl, black_dress, frilled_apron, looking_at_viewer, maid_apron, solo, enmaided, maid_headdress, simple_background, white_apron, white_background, long_sleeves, black_footwear, blush, cat_ears, full_body, puffy_short_sleeves, standing, thighhighs, wrist_cuffs |
| 8 | 7 |  |  |  |  |  | 1girl, solo, looking_at_viewer, simple_background, blush, white_background, alternate_costume, holding, smile, wide_sleeves, floral_print, hakama_skirt, long_sleeves, miko, open_mouth, red_hakama, standing, white_kimono, yukata |
| 9 | 5 |  |  |  |  |  | 1girl, black_panties, crop_top, elbow_gloves, hairband, miniskirt, pleated_skirt, shimakaze_(kancolle)_(cosplay), solo, white_gloves, blue_sailor_collar, blue_skirt, blush, cowboy_shot, highleg_panties, looking_at_viewer, microskirt, serafuku, simple_background, striped_thighhighs, black_neckerchief, navel, sleeveless, twitter_username, white_background, bare_shoulders, embarrassed, open_mouth, smile, thong |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blazer | school_uniform | solo | pleated_skirt | single_thighhigh | simple_background | looking_at_viewer | black_thighhighs | white_background | black_skirt | open_mouth | shirt | turret | single_kneehigh | uneven_legwear | blush | smile | white_shirt | black_jacket | full_body | collared_shirt | long_sleeves | red_necktie | :d | gradient_background | upper_body | collarbone | twitter_username | small_breasts | bikini | cowboy_shot | navel | jacket | one-piece_swimsuit | school_swimsuit | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | wrist_cuffs | black_leotard | bowtie | rabbit_tail | covered_navel | strapless_leotard | bare_shoulders | black_pantyhose | fake_tail | alternate_costume | brown_pantyhose | cropped_legs | high_heels | white_gloves | gym_shirt | gym_uniform | blue_buruma | short_sleeves | character_name | sidelocks | very_long_hair | dated | kneehighs | flexible | shoes | stretching | white_footwear | black_dress | frilled_apron | maid_apron | enmaided | maid_headdress | white_apron | black_footwear | cat_ears | puffy_short_sleeves | standing | thighhighs | holding | wide_sleeves | floral_print | hakama_skirt | miko | red_hakama | white_kimono | yukata | black_panties | crop_top | elbow_gloves | hairband | miniskirt | shimakaze_(kancolle)_(cosplay) | blue_sailor_collar | blue_skirt | highleg_panties | microskirt | serafuku | striped_thighhighs | black_neckerchief | sleeveless | embarrassed | thong |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-----------------|:-------|:----------------|:-------------------|:--------------------|:--------------------|:-------------------|:-------------------|:--------------|:-------------|:--------|:---------|:------------------|:-----------------|:--------|:--------|:--------------|:---------------|:------------|:-----------------|:---------------|:--------------|:-----|:----------------------|:-------------|:-------------|:-------------------|:----------------|:---------|:--------------|:--------|:---------|:---------------------|:------------------|:-------------------|:----------------|:--------------|:------------------|:--------------|:----------------|:---------|:--------------|:----------------|:--------------------|:-----------------|:------------------|:------------|:--------------------|:------------------|:---------------|:-------------|:---------------|:------------|:--------------|:--------------|:----------------|:-----------------|:------------|:-----------------|:--------|:------------|:-----------|:--------|:-------------|:-----------------|:--------------|:----------------|:-------------|:-----------|:-----------------|:--------------|:-----------------|:-----------|:----------------------|:-----------|:-------------|:----------|:---------------|:---------------|:---------------|:-------|:-------------|:---------------|:---------|:----------------|:-----------|:---------------|:-----------|:------------|:---------------------------------|:---------------------|:-------------|:------------------|:-------------|:-----------|:---------------------|:--------------------|:-------------|:--------------|:--------|
| 0 | 30 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 31 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | | | X | | | X | X | | | | | X | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 19 |  |  |  |  |  | X | X | X | X | | | X | X | | X | | | | | | | | X | X | X | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | | X | | | X | X | | X | | | | | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | | X | | | X | X | | X | | X | | | | | X | X | | | | | | | | | | | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 13 |  |  |  |  |  | X | | | X | | | X | X | | X | | X | | | | | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | | | X | X | | X | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | X | | | X | | | X | X | | X | | X | | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | | X | X | | X | X | | X | | X | | | | | X | X | | | | | | | | | | | X | | | X | X | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
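The Table Version above marks tag presence per cluster with `X`. A minimal sketch of how such a matrix can be rebuilt from per-cluster tag lists (toy data; column order follows first appearance, as in the table header):

```python
def tag_presence_matrix(clusters):
    """Build an X/blank presence matrix from {cluster_id: [tags]}."""
    columns = []
    for tags in clusters.values():
        for tag in tags:
            if tag not in columns:
                columns.append(tag)  # first-seen order, like the table header
    rows = {
        cid: {tag: ('X' if tag in tags else '') for tag in columns}
        for cid, tags in clusters.items()
    }
    return columns, rows

columns, rows = tag_presence_matrix({
    0: ['1girl', 'solo', 'school_uniform'],
    1: ['1girl', 'solo', 'blazer'],
})
# columns == ['1girl', 'solo', 'school_uniform', 'blazer']
```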
|
CyberHarem/hatsushimo_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T20:54:12+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T08:20:14+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hatsushimo/初霜 (Kantai Collection)
===============================================
This is the dataset of hatsushimo/初霜 (Kantai Collection), containing 453 images and their tags.
The core tags of this character are 'black\_hair, long\_hair, low-tied\_long\_hair, headband, red\_eyes, hair\_between\_eyes, blue\_headband, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ad8f5b5e02bcfc4eadfeeccc4799f951d52e331d
|
# Dataset Card for "TinyStories_MarsStories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jason-lee08/TinyStories_MarsStories
|
[
"region:us"
] |
2023-08-21T20:55:09+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "null"}], "splits": [{"name": "train"}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-21T20:57:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "TinyStories_MarsStories"
More Information needed
|
[
"# Dataset Card for \"TinyStories_MarsStories\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"TinyStories_MarsStories\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"TinyStories_MarsStories\"\n\nMore Information needed"
] |
ea65d075f30310a6cd3429a81a9eaec8bc43fd28
|
# Dataset Card for "render-heb-oscar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yair-elboher/render-heb-oscar
|
[
"region:us"
] |
2023-08-21T21:15:11+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "num_patches", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 86429.0, "num_examples": 9}, {"name": "validation", "num_bytes": 48578.0, "num_examples": 4}], "download_size": 161145, "dataset_size": 135007.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2023-08-29T07:29:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "render-heb-oscar"
More Information needed
|
[
"# Dataset Card for \"render-heb-oscar\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"render-heb-oscar\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"render-heb-oscar\"\n\nMore Information needed"
] |
98faaebaa1ec1839122d505b496705c5b9370d3a
|
# Dataset Card for "text-toy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yair-elboher/text-toy
|
[
"region:us"
] |
2023-08-21T21:20:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 752912, "num_examples": 200}, {"name": "validation", "num_bytes": 453235, "num_examples": 100}], "download_size": 234944, "dataset_size": 1206147}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
|
2024-01-24T09:24:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "text-toy"
More Information needed
|
[
"# Dataset Card for \"text-toy\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"text-toy\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"text-toy\"\n\nMore Information needed"
] |
8c21775c2765f4c569416afac5e94856af4b0221
|
This dataset is from the paper: "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA" by Yichen Jiang and Mohit Bansal.
The dataset was created using the code provided in the [original GitHub repo](https://github.com/jiangycTarheel-zz/Adversarial-MultiHopQA).
This is the ACL citation for the paper:
```
@inproceedings{jiang-bansal-2019-avoiding,
title = "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop {QA}",
author = "Jiang, Yichen and
Bansal, Mohit",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1262",
doi = "10.18653/v1/P19-1262",
pages = "2726--2736",
abstract = "Multi-hop question answering requires a model to connect multiple pieces of evidence scattered in a long context to answer the question. In this paper, we show that in the multi-hop HotpotQA (Yang et al., 2018) dataset, the examples often contain reasoning shortcuts through which models can directly locate the answer by word-matching the question with a sentence in the context. We demonstrate this issue by constructing adversarial documents that create contradicting answers to the shortcut but do not affect the validity of the original answer. The performance of strong baseline models drops significantly on our adversarial test, indicating that they are indeed exploiting the shortcuts rather than performing multi-hop reasoning. After adversarial training, the baseline{'}s performance improves but is still limited on the adversarial test. Hence, we use a control unit that dynamically attends to the question at different reasoning hops to guide the model{'}s multi-hop reasoning. We show that our 2-hop model trained on the regular data is more robust to the adversaries than the baseline. After adversarial training, it not only achieves significant improvements over its counterpart trained on regular data, but also outperforms the adversarially-trained baseline significantly. Finally, we sanity-check that these improvements are not obtained by exploiting potential new shortcuts in the adversarial data, but indeed due to robust multi-hop reasoning skills of the models.",
}
```
|
sagnikrayc/adversarial_hotpotqa
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:afl-3.0",
"region:us"
] |
2023-08-21T21:45:49+00:00
|
{"language": ["en"], "license": "afl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "Adversarial-MultiHopQA"}
|
2023-08-21T21:47:53+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-afl-3.0 #region-us
|
This dataset is from the paper: "Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA" by Yichen Jiang and Mohit Bansal.
The dataset was created using the code provided in the original GitHub repo.
This is the ACL citation for the paper:
|
[] |
[
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-afl-3.0 #region-us \n"
] |
[
42
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-afl-3.0 #region-us \n"
] |
147f7a28a631a086c30494387820d1ca013674f5
|
Daniel Alegria
|
danito10/mini-croupier
|
[
"license:apache-2.0",
"region:us"
] |
2023-08-21T21:46:09+00:00
|
{"license": "apache-2.0"}
|
2023-08-25T15:44:20+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
Daniel Alegria
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
126772fcae9eaa4180eae6847ed72a5165abf5ce
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_110K_5_F"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/AA_ApplicationDistilRoBERTa_110K_5_F
|
[
"region:us"
] |
2023-08-21T21:47:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, 
{"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, {"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": 
"106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": 
"160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": "190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": 
"214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": 
"268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": "286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": 
"322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": 
"376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": "382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": 
"430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": "478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": 
"484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": 
"538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": "574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": 
"592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": 
"646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": "670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": 
"700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": 
"754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": "766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80318780.21618997, "num_examples": 26057}, {"name": "test", "num_bytes": 26774087.073587257, "num_examples": 8686}], "download_size": 147219352, "dataset_size": 107092867.28977722}}
|
2023-08-21T21:50:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "AA_ApplicationDistilRoBERTa_110K_5_F"
More Information needed
|
[
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_F\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_F\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"AA_ApplicationDistilRoBERTa_110K_5_F\"\n\nMore Information needed"
] |
c4bd725fce00df8a083801fae0c3ff035531bc52
|
# Dataset of katori/香取/香取 (Kantai Collection)
This is the dataset of katori/香取/香取 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `glasses, green_eyes, folded_ponytail, breasts, large_breasts, brown_hair, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 423.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 304.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1097 | 598.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 397.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1097 | 738.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katori_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
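For the IMG+TXT packages, each image ships with a same-named `.txt` file holding its comma-separated tags. A minimal sketch of pairing them after extraction (the flat directory layout is an assumption; adjust the paths if the archive nests folders):

```python
import os

def load_img_txt_pairs(dataset_dir):
    """Pair each image in an extracted IMG+TXT package with its tag list."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if not os.path.exists(txt_path):
            continue  # skip images without a matching tag file
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```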
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/katori_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
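The comma-separated tag strings in the cluster tables can be aggregated into overall tag frequencies with a few lines of stdlib Python (the strings here are illustrative samples, not the full dataset):

```python
from collections import Counter

def tag_frequencies(tag_strings):
    """Count how often each tag appears across comma-separated tag strings."""
    counts = Counter()
    for s in tag_strings:
        counts.update(t.strip() for t in s.split(',') if t.strip())
    return counts

clusters = [
    "1girl, epaulettes, military_uniform, solo",
    "1girl, military_uniform, smile, solo",
]
freq = tag_frequencies(clusters)
# '1girl' and 'solo' appear in both sample strings
```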
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, epaulettes, military_uniform, necktie, solo, white_gloves, collared_shirt, double-breasted, jacket, black_pantyhose, miniskirt, smile, looking_at_viewer, parted_bangs, light_brown_hair, pencil_skirt, simple_background, white_background, grey_skirt, riding_crop, long_sleeves |
| 1 | 15 |  |  |  |  |  | 1girl, collared_shirt, double-breasted, epaulettes, looking_at_viewer, military_uniform, solo, upper_body, parted_bangs, simple_background, smile, white_gloves, white_background, jacket, long_sleeves, light_brown_hair, black_necktie, grey_shirt |
| 2 | 6 |  |  |  |  |  | 1girl, epaulettes, military_uniform, miniskirt, necktie, pantyhose, riding_crop, solo, white_gloves, smile |
| 3 | 5 |  |  |  |  |  | 1girl, epaulettes, military_uniform, necktie, panties_under_pantyhose, solo, white_gloves, black_pantyhose, sitting, smile, looking_at_viewer, miniskirt, feet |
| 4 | 5 |  |  |  |  |  | 1boy, 1girl, epaulettes, hetero, military_uniform, solo_focus, blush, necktie, penis, white_gloves, smile, bar_censor, heart, huge_breasts, looking_at_viewer, nipples, paizuri |
| 5 | 8 |  |  |  |  |  | 1girl, light_brown_hair, looking_at_viewer, solo, blush, cleavage, parted_bangs, rimless_eyewear, simple_background, long_hair, side-tie_bikini_bottom, white_background, cowboy_shot, navel, white_bikini, front-tie_top |
| 6 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, bikini, blush, navel, cleavage, pointer, twitter_username |
| 7 | 7 |  |  |  |  |  | 1girl, competition_swimsuit, cowboy_shot, solo, parted_bangs, collarbone, highleg_swimsuit, looking_at_viewer, simple_background, blue_one-piece_swimsuit, dated, jacket, twitter_username, white_background, white_one-piece_swimsuit |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | epaulettes | military_uniform | necktie | solo | white_gloves | collared_shirt | double-breasted | jacket | black_pantyhose | miniskirt | smile | looking_at_viewer | parted_bangs | light_brown_hair | pencil_skirt | simple_background | white_background | grey_skirt | riding_crop | long_sleeves | upper_body | black_necktie | grey_shirt | pantyhose | panties_under_pantyhose | sitting | feet | 1boy | hetero | solo_focus | blush | penis | bar_censor | heart | huge_breasts | nipples | paizuri | cleavage | rimless_eyewear | long_hair | side-tie_bikini_bottom | cowboy_shot | navel | white_bikini | front-tie_top | bikini | pointer | twitter_username | competition_swimsuit | collarbone | highleg_swimsuit | blue_one-piece_swimsuit | dated | white_one-piece_swimsuit |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:-------------------|:----------|:-------|:---------------|:-----------------|:------------------|:---------|:------------------|:------------|:--------|:--------------------|:---------------|:-------------------|:---------------|:--------------------|:-------------------|:-------------|:--------------|:---------------|:-------------|:----------------|:-------------|:------------|:--------------------------|:----------|:-------|:-------|:---------|:-------------|:--------|:--------|:-------------|:--------|:---------------|:----------|:----------|:-----------|:------------------|:------------|:-------------------------|:--------------|:--------|:---------------|:----------------|:---------|:----------|:-------------------|:-----------------------|:-------------|:-------------------|:--------------------------|:--------|:---------------------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | X | | X | X | X | X | X | | | X | X | X | X | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | X | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | X | | | | | | | | X | X | X | | X | X | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | X | | | X | X | X | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | | X | | | | X | | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/katori_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T21:48:30+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:37:42+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of katori/香取/香取 (Kantai Collection)
===========================================
This is the dataset of katori/香取/香取 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'glasses, green\_eyes, folded\_ponytail, breasts, large\_breasts, brown\_hair, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
9d684109708688ba124fa71d948df994ca2d2337
|
# Dataset of mogami/最上/最上 (Kantai Collection)
This is the dataset of mogami/最上/最上 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `short_hair, black_hair, bangs, green_eyes, swept_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 362.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 253.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 991 | 471.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 339.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 991 | 595.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mogami_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mogami_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
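Once items are loaded, a simple filter over `item.meta['tags']` can select images by tag. The exact shape of the tags field depends on waifuc's tagger output (it may be a mapping of tag to score or a plain list of names), so this hypothetical helper accepts both:

```python
def has_tags(tags, required):
    """Return True if every tag in `required` is present in `tags`.

    `tags` may be a dict (tag -> score) or an iterable of tag names.
    """
    names = set(tags.keys() if isinstance(tags, dict) else tags)
    return all(t in names for t in required)

# hypothetical usage inside the loading loop above:
# if has_tags(item.meta['tags'], ['1girl', 'solo']):
#     process(item)
```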
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, brown_sailor_collar, serafuku, simple_background, solo, upper_body, white_background, black_neckerchief, looking_at_viewer, brown_shirt, smile, one-hour_drawing_challenge, brown_neckerchief, open_mouth, red_sailor_collar, twitter_username |
| 1 | 10 |  |  |  |  |  | 1girl, brown_sailor_collar, brown_shorts, cowboy_shot, long_sleeves, serafuku, simple_background, solo, white_background, looking_at_viewer, smile, brown_shirt, black_neckerchief, one-hour_drawing_challenge, orange_neckerchief, green_hair, twitter_username |
| 2 | 9 |  |  |  |  |  | 1girl, black_socks, brown_sailor_collar, brown_shorts, long_sleeves, serafuku, solo, full_body, looking_at_viewer, black_neckerchief, boots, brown_shirt, kneehighs, standing, smile, white_background |
| 3 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, simple_background, white_jacket, cowboy_shot, hooded_jacket, white_background, hoodie, smile, white_bikini, navel, open_jacket, small_breasts, twitter_username, blush, dated, green_hair, medium_breasts, multicolored_bikini, official_alternate_costume, one-hour_drawing_challenge, open_mouth, tanlines |
| 4 | 6 |  |  |  |  |  | 1girl, blue_sky, cowboy_shot, day, looking_at_viewer, ocean, outdoors, solo, white_bikini, cloud, mismatched_bikini, standing, beach, horizon, small_breasts, medium_breasts, multicolored_bikini, smile |
| 5 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, serafuku, sweat, long_sleeves, medium_breasts, open_clothes, sex, girl_on_top, open_mouth, penis, solo_focus, bar_censor, cowgirl_position, kneehighs, spread_legs, vaginal |
| 6 | 7 |  |  |  |  |  | 1girl, solo, looking_at_viewer, simple_background, small_breasts, white_background, blush, cowboy_shot, navel, female_pubic_hair, nipples, brown_shorts, panties, smile, standing, topless |
| 7 | 8 |  |  |  |  |  | detached_collar, fake_animal_ears, rabbit_ears, 1girl, playboy_bunny, solo, wrist_cuffs, green_hair, looking_at_viewer, simple_background, strapless_leotard, black_bowtie, black_pantyhose, small_breasts, black_leotard, blush, white_background, alternate_costume, black_eyes, grey_background, high_heels, rabbit_tail |
| 8 | 9 |  |  |  |  |  | 1girl, official_alternate_costume, solo, bag, cowboy_shot, white_shirt, denim, looking_at_viewer, skirt, smile, t-shirt, open_mouth, short_sleeves, shorts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_sailor_collar | serafuku | simple_background | solo | upper_body | white_background | black_neckerchief | looking_at_viewer | brown_shirt | smile | one-hour_drawing_challenge | brown_neckerchief | open_mouth | red_sailor_collar | twitter_username | brown_shorts | cowboy_shot | long_sleeves | orange_neckerchief | green_hair | black_socks | full_body | boots | kneehighs | standing | white_jacket | hooded_jacket | hoodie | white_bikini | navel | open_jacket | small_breasts | blush | dated | medium_breasts | multicolored_bikini | official_alternate_costume | tanlines | blue_sky | day | ocean | outdoors | cloud | mismatched_bikini | beach | horizon | 1boy | hetero | nipples | sweat | open_clothes | sex | girl_on_top | penis | solo_focus | bar_censor | cowgirl_position | spread_legs | vaginal | female_pubic_hair | panties | topless | detached_collar | fake_animal_ears | rabbit_ears | playboy_bunny | wrist_cuffs | strapless_leotard | black_bowtie | black_pantyhose | black_leotard | alternate_costume | black_eyes | grey_background | high_heels | rabbit_tail | bag | white_shirt | denim | skirt | t-shirt | short_sleeves | shorts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:-----------|:--------------------|:-------|:-------------|:-------------------|:--------------------|:--------------------|:--------------|:--------|:-----------------------------|:--------------------|:-------------|:--------------------|:-------------------|:---------------|:--------------|:---------------|:---------------------|:-------------|:--------------|:------------|:--------|:------------|:-----------|:---------------|:----------------|:---------|:---------------|:--------|:--------------|:----------------|:--------|:--------|:-----------------|:----------------------|:-----------------------------|:-----------|:-----------|:------|:--------|:-----------|:--------|:--------------------|:--------|:----------|:-------|:---------|:----------|:--------|:---------------|:------|:--------------|:--------|:-------------|:-------------|:-------------------|:--------------|:----------|:--------------------|:----------|:----------|:------------------|:-------------------|:--------------|:----------------|:--------------|:--------------------|:---------------|:------------------|:----------------|:--------------------|:-------------|:------------------|:-------------|:--------------|:------|:--------------|:--------|:--------|:----------|:----------------|:---------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | | X | | X | X | X | X | X | | | | | | X | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | | X | X | | X | | X | | X | X | | X | | X | | X | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | | X | | | | X | | X | | | | | | | X | | | | | | | | X | | | | X | | | X | | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | | | | | | | | | | | X | | | | | X | | | | | | X | | | | | | | | | X | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | X | X | | X | | X | | X | | | | | | X | X | | | | | | | | X | | | | | X | | X | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | X | X | | X | | X | | | | | | | | | | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 8 | 9 |  |  |  |  |  | X | | | | X | | | | X | | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/mogami_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T21:50:08+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T08:58:16+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mogami/最上/最上 (Kantai Collection)
===========================================
This is the dataset of mogami/最上/最上 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'short\_hair, black\_hair, bangs, green\_eyes, swept\_bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4490fe7981cff02616b884c54c65a61b6dff3fa2
|
# Dataset Card for "469b15a7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/469b15a7
|
[
"region:us"
] |
2023-08-21T21:57:09+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1337, "dataset_size": 184}}
|
2023-08-21T21:57:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "469b15a7"
More Information needed
|
[
"# Dataset Card for \"469b15a7\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"469b15a7\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"469b15a7\"\n\nMore Information needed"
] |
f57f491775f6fd09fb8cf1ea986c09a3454f2f88
|
# Dataset of mamiya/間宮 (Kantai Collection)
This is the dataset of mamiya/間宮 (Kantai Collection), containing 475 images and their tags.
The core tags of this character are `brown_hair, long_hair, ribbon, hair_ornament, hairclip, hair_ribbon, breasts, ahoge, large_breasts, red_eyes, purple_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 475 | 453.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mamiya_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 475 | 297.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mamiya_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1091 | 611.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mamiya_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 475 | 414.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mamiya_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1091 | 794.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mamiya_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
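The IMG+TXT packages above ship each image next to a same-named `.txt` caption file. Below is a minimal sketch of iterating such a layout after extracting one of the archives (e.g. `dataset-800.zip`); the exact caption format — a single line of comma-separated tags — is an assumption about the export, not something documented here:

```python
import os
import tempfile

def iter_tagged_images(dataset_dir):
    """Yield (image_path, tag_list) pairs from an extracted IMG+TXT package.

    Assumes each image sits beside a same-named .txt file holding one
    line of comma-separated tags (hypothetical layout).
    """
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue  # skip the .txt files and anything else
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if not os.path.exists(txt_path):
            continue  # image without a caption file
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        yield os.path.join(dataset_dir, name), tags

# Demo on a mock directory standing in for the extracted archive.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, '1.png'), 'wb').close()
with open(os.path.join(demo_dir, '1.txt'), 'w', encoding='utf-8') as f:
    f.write('1girl, solo, serafuku')
pairs = list(iter_tagged_images(demo_dir))
print(pairs[0][1])  # ['1girl', 'solo', 'serafuku']
```

For training pipelines that expect caption files, the pairs can be consumed directly; for anything needing per-image metadata, prefer the `raw` package with waifuc as shown below.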
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mamiya_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | serafuku, 1girl, alternate_costume, solo, red_neckerchief, long_sleeves, looking_at_viewer, pleated_skirt, simple_background, open_mouth, smile, blue_sailor_collar, white_background, blue_skirt, black_skirt, shoes, white_shirt |
| 1 | 12 |  |  |  |  |  | 1girl, simple_background, smile, solo, kappougi, looking_at_viewer, white_background, pink_shirt, upper_body, one-hour_drawing_challenge, open_mouth, twitter_username |
| 2 | 6 |  |  |  |  |  | 1girl, kappougi, smile, hair_bow, ponytail, solo, brown_eyes, open_mouth, looking_at_viewer |
| 3 | 5 |  |  |  |  |  | 1girl, kappougi, looking_at_viewer, solo, hair_bow, ice_cream, open_mouth, twitter_username, :d, blush, tray |
| 4 | 7 |  |  |  |  |  | 1girl, black_bra, cleavage, looking_at_viewer, simple_background, smile, solo, white_background, blush, twitter_username, collarbone, ponytail, upper_body, long_sleeves, open_shirt, pink_shirt, closed_mouth, one-hour_drawing_challenge, red_ribbon |
| 5 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, blush, nipples, nude, smile, white_background, collarbone, navel, heart, huge_breasts, upper_body |
| 6 | 30 |  |  |  |  |  | 1girl, solo, black_bikini, looking_at_viewer, frilled_bikini, smile, cleavage, blush, navel, simple_background, white_background, collarbone, twitter_username, cowboy_shot, red_ribbon |
| 7 | 11 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, solo_focus, vaginal, navel, open_mouth, girl_on_top, penis, sweat, bar_censor, bow, cowgirl_position, happy_sex, huge_breasts, pov, smile, completely_nude, cum_in_pussy, heart, female_pubic_hair, mosaic_censoring, spread_legs |
| 8 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, playboy_bunny, rabbit_ears, solo, wrist_cuffs, cleavage, detached_collar, strapless_leotard, fake_animal_ears, rabbit_tail, cowboy_shot, tray, simple_background, black_pantyhose, red_bowtie, white_background, brown_pantyhose, food |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | serafuku | 1girl | alternate_costume | solo | red_neckerchief | long_sleeves | looking_at_viewer | pleated_skirt | simple_background | open_mouth | smile | blue_sailor_collar | white_background | blue_skirt | black_skirt | shoes | white_shirt | kappougi | pink_shirt | upper_body | one-hour_drawing_challenge | twitter_username | hair_bow | ponytail | brown_eyes | ice_cream | :d | blush | tray | black_bra | cleavage | collarbone | open_shirt | closed_mouth | red_ribbon | nipples | nude | navel | heart | huge_breasts | black_bikini | frilled_bikini | cowboy_shot | 1boy | hetero | solo_focus | vaginal | girl_on_top | penis | sweat | bar_censor | bow | cowgirl_position | happy_sex | pov | completely_nude | cum_in_pussy | female_pubic_hair | mosaic_censoring | spread_legs | playboy_bunny | rabbit_ears | wrist_cuffs | detached_collar | strapless_leotard | fake_animal_ears | rabbit_tail | black_pantyhose | red_bowtie | brown_pantyhose | food |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------|:--------|:--------------------|:-------|:------------------|:---------------|:--------------------|:----------------|:--------------------|:-------------|:--------|:---------------------|:-------------------|:-------------|:--------------|:--------|:--------------|:-----------|:-------------|:-------------|:-----------------------------|:-------------------|:-----------|:-----------|:-------------|:------------|:-----|:--------|:-------|:------------|:-----------|:-------------|:-------------|:---------------|:-------------|:----------|:-------|:--------|:--------|:---------------|:---------------|:-----------------|:--------------|:-------|:---------|:-------------|:----------|:--------------|:--------|:--------|:-------------|:------|:-------------------|:------------|:------|:------------------|:---------------|:--------------------|:-------------------|:--------------|:----------------|:--------------|:--------------|:------------------|:--------------------|:-------------------|:--------------|:------------------|:-------------|:------------------|:-------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | | X | | X | | | X | | X | X | X | | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | | X | | X | | | X | | | X | X | | | | | | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | | X | | X | | | X | | | X | | | | | | | | X | | | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | | X | | X | | X | X | | X | | X | | X | | | | | | X | X | X | X | | X | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | | X | | X | | | X | | X | | X | | X | | | | | | | X | | | | | | | | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 30 |  |  |  |  |  | | X | | X | | | X | | X | | X | | X | | | | | | | | | X | | | | | | X | | | X | X | | | X | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 11 |  |  |  |  |  | | X | | | | | | | | X | X | | | | | | | | | | | | | | | | | X | | | | | | | | X | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 8 | 14 |  |  |  |  |  | | X | | X | | | X | | X | | | | X | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/mamiya_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T22:45:01+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:19:35+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mamiya/間宮 (Kantai Collection)
========================================
This is the dataset of mamiya/間宮 (Kantai Collection), containing 475 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, ribbon, hair\_ornament, hairclip, hair\_ribbon, breasts, ahoge, large\_breasts, red\_eyes, purple\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4ebbd95e880d1520f0a4c90ca2a5ec1f251f4ff6
|
# Dataset Card for "mathy-phase1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
approach0/mathy-phase1
|
[
"region:us"
] |
2023-08-21T23:26:18+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22020519, "num_examples": 18636}], "download_size": 0, "dataset_size": 22020519}}
|
2023-08-23T04:00:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mathy-phase1"
More Information needed
|
[
"# Dataset Card for \"mathy-phase1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mathy-phase1\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mathy-phase1\"\n\nMore Information needed"
] |
06e7ff458c0d30cbda90e6c2ad94f3fb616bab85
|
# Dataset of minazuki/水無月 (Kantai Collection)
This is the dataset of minazuki/水無月 (Kantai Collection), containing 319 images and their tags.
The core tags of this character are `blue_hair, blue_eyes, asymmetrical_hair, fang, ahoge`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 319 | 245.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minazuki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 319 | 171.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minazuki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 685 | 350.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minazuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 319 | 231.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minazuki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 685 | 449.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minazuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/minazuki_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, blouse, blue_necktie, blue_shirt, crescent_pin, frilled_shorts, serafuku, short_hair_with_long_locks, solo, blue_shorts, kneehighs, long_sleeves, open_mouth, white_background, blue_flower, looking_at_viewer, morning_glory, simple_background, blush, navel, :d, full_body, midriff |
| 1 | 8 |  |  |  |  |  | 1girl, blouse, blue_necktie, crescent_pin, looking_at_viewer, open_mouth, serafuku, short_hair_with_long_locks, smile, solo, blue_shirt, blue_shorts, frilled_shorts, long_sleeves, simple_background, white_background, blush, cowboy_shot, navel |
| 2 | 10 |  |  |  |  |  | 1girl, blue_shirt, looking_at_viewer, open_mouth, serafuku, short_hair_with_long_locks, smile, blue_necktie, crescent_pin, solo, upper_body, blouse, simple_background, white_background, sailor_collar, blush, skin_fang, long_sleeves |
| 3 | 6 |  |  |  |  |  | 1girl, blouse, blue_necktie, crescent_pin, frilled_shorts, serafuku, short_hair_with_long_locks, smile, solo, blue_shirt, blue_shorts, looking_at_viewer, navel, open_mouth, twitter_username |
| 4 | 6 |  |  |  |  |  | blouse, blue_necktie, blue_shirt, crescent_pin, frilled_shorts, serafuku, solo_focus, 2girls, blonde_hair, long_sleeves, blue_shorts, long_hair, open_mouth, short_hair_with_long_locks, :d, blush |
| 5 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, collarbone, hair_between_eyes, simple_background, white_background, blue_bikini, navel, open_mouth, short_hair_with_long_locks, small_breasts, smile, cowboy_shot, one_eye_closed, twitter_username, armpits, crescent, flat_chest, side-tie_bikini_bottom |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blouse | blue_necktie | blue_shirt | crescent_pin | frilled_shorts | serafuku | short_hair_with_long_locks | solo | blue_shorts | kneehighs | long_sleeves | open_mouth | white_background | blue_flower | looking_at_viewer | morning_glory | simple_background | blush | navel | :d | full_body | midriff | smile | cowboy_shot | upper_body | sailor_collar | skin_fang | twitter_username | solo_focus | 2girls | blonde_hair | long_hair | collarbone | hair_between_eyes | blue_bikini | small_breasts | one_eye_closed | armpits | crescent | flat_chest | side-tie_bikini_bottom |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------------|:-------------|:---------------|:-----------------|:-----------|:-----------------------------|:-------|:--------------|:------------|:---------------|:-------------|:-------------------|:--------------|:--------------------|:----------------|:--------------------|:--------|:--------|:-----|:------------|:----------|:--------|:--------------|:-------------|:----------------|:------------|:-------------------|:-------------|:---------|:--------------|:------------|:-------------|:--------------------|:--------------|:----------------|:-----------------|:----------|:-----------|:-------------|:-------------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | | X | | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | | X | X | X | | X | | X | X | | | | | X | | X | X | X | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | X | | | X | | | | X | | | | X | | | | | X | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | | X | X | X | X | X | X | X | | X | | X | X | | | | | | X | | X | | | | | | | | | X | X | X | X | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | | | | X | X | | | | X | X | | X | | X | X | X | | | | X | X | | | | X | | | | | X | X | X | X | X | X | X | X | X |
|
CyberHarem/minazuki_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T23:30:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T19:29:22+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of minazuki/水無月 (Kantai Collection)
===========================================
This is the dataset of minazuki/水無月 (Kantai Collection), containing 319 images and their tags.
The core tags of this character are 'blue\_hair, blue\_eyes, asymmetrical\_hair, fang, ahoge', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
590c17dfc7d3f294bdae9c6aa9e6a7f0c753d487
|
# ChatHaruhi
# Reviving Anime Character in Reality via Large Language Model
github repo: https://github.com/LC1332/Chat-Haruhi-Suzumiya
**Chat-Haruhi-Suzumiya** is a language model that imitates the tone, personality, and storylines of characters like Haruhi Suzumiya.
<details>
<summary> The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, Haozhen Sun, et al. </summary>
This is an open source project and the members were recruited from open source communities like DataWhale.
Lulu Li( [Cheng Li@SenseTime](https://github.com/LC1332) )initiated the whole project and designed and implemented most of the features.
Ziang Leng( [Ziang Leng@SenseTime](https://blairleng.github.io) )designed and implemented the training, data generation and backend architecture for ChatHaruhi 1.0.
Chenxi Yan( [Chenxi Yan@Chengdu University of Information Technology](https://github.com/todochenxi) )implemented and maintained the backend for ChatHaruhi 1.0.
Junyi Shen( [Junyi Shen@Zhejiang University](https://github.com/J1shen) )implemented the training code and participated in generating the training dataset.
Hao Wang( [Hao Wang](https://github.com/wanghao07456) )collected script data for a TV series and participated in data augmentation.
Weishi Mi( [Weishi MI@Tsinghua University](https://github.com/hhhwmws0117) )participated in data augmentation.
Aria Fei( [Aria Fei@BJUT](https://ariafyy.github.io/) )implemented the ASR feature for the script tool and participated in the Openness-Aware Personality paper project.
Xiaoyang Feng( [Xiaoyang Feng@Nanjing Agricultural University](https://github.com/fengyunzaidushi) )integrated the script recognition tool and participated in the Openness-Aware Personality paper project.
Yue Leng( [Song Yan](https://github.com/zealot52099) )collected data from The Big Bang Theory and implemented script format conversion.
scixing(HaoSheng Wang)( [HaoSheng Wang](https://github.com/ssccinng) ) implemented voiceprint recognition in the script tool and tts-vits speech synthesis.
Linkang Zhan( [JunityZhan@Case Western Reserve University](https://github.com/JunityZhan) ) collected Genshin Impact's system prompts and story data.
Yaokai Jia( [Yaokai Jia](https://github.com/KaiJiaBrother) )implemented the Vue frontend and practiced GPU extraction of Bert in a psychology project.
Pingyu Wu( [Pingyu Wu@Juncai Shuyun](https://github.com/wpydcr) )helped deploy the first version of the training code.
Haozhen Sun( [Haozhen Sun@Tianjin University] )plotted the character figures for ChatHaruhi.
</details>
## transfer into input-target format
If you want to convert this data into an input-target format, check the link here:
https://huggingface.co/datasets/silk-road/ChatHaruhi-Expand-118K
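As a minimal local sketch of what such a conversion looks like, the snippet below splits a multi-turn role-play record into (input, target) pairs, using the conversation so far as the input and the next character line as the target. The record layout (`character`, `dialogue`, `role`, `text`) is an illustrative assumption, not the actual schema of the linked dataset; consult ChatHaruhi-Expand-118K for the real field names.

```python
def to_input_target(record):
    """Split a multi-turn dialogue into (input, target) pairs.

    Each pair uses the dialogue history so far as the input and the
    next line spoken by the target character as the target.
    NOTE: the record layout here is a hypothetical example, not the
    official ChatHaruhi schema.
    """
    pairs = []
    history = []
    for turn in record["dialogue"]:
        line = f"{turn['role']}: {turn['text']}"
        if turn["role"] == record["character"]:
            # Everything said before this line becomes the model input.
            pairs.append(("\n".join(history), line))
        history.append(line)
    return pairs


# Tiny worked example with an invented two-turn exchange.
record = {
    "character": "Haruhi",
    "dialogue": [
        {"role": "Kyon", "text": "Where are we going?"},
        {"role": "Haruhi", "text": "To find aliens, obviously!"},
    ],
}
print(to_input_target(record))
```

The same pattern extends to longer dialogues: a character with N lines yields N training pairs, each conditioned on progressively more history.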
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{li2023chatharuhi,
title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
year={2023},
eprint={2308.09597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
silk-road/ChatHaruhi-54K-Role-Playing-Dialogue
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-4.0",
"arxiv:2308.09597",
"region:us"
] |
2023-08-21T23:40:09+00:00
|
{"language": ["en", "zh"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation"], "pretty_name": "conversa"}
|
2023-12-16T11:34:47+00:00
|
[
"2308.09597"
] |
[
"en",
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-4.0 #arxiv-2308.09597 #region-us
|
# ChatHaruhi
# Reviving Anime Character in Reality via Large Language Model
![Code License]()
![Data License]()
github repo: URL
Chat-Haruhi-Suzumiya is a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,
<details>
<summary> The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, Haozhen Sun, etc. </summary>
This is an open source project and the members were recruited from open source communities like DataWhale.
Lulu Li( Cheng Li@SenseTime )initiated the whole project and designed and implemented most of the features.
Ziang Leng( Ziang Leng@SenseTime )designed and implemented the training, data generation and backend architecture for ChatHaruhi 1.0.
Chenxi Yan( Chenxi Yan@Chengdu University of Information Technology )implemented and maintained the backend for ChatHaruhi 1.0.
Junyi Shen( Junyi Shen@Zhejiang University )implemented the training code and participated in generating the training dataset.
Hao Wang( Hao Wang )collected script data for a TV series and participated in data augmentation.
Weishi Mi( Weishi MI@Tsinghua University )participated in data augmentation.
Aria Fei( Aria Fei@BJUT )implemented the ASR feature for the script tool and participated in the Openness-Aware Personality paper project.
Xiaoyang Feng( Xiaoyang Feng@Nanjing Agricultural University )integrated the script recognition tool and participated in the Openness-Aware Personality paper project.
Yue Leng( Song Yan )collected data from The Big Bang Theory and implemented script format conversion.
scixing(HaoSheng Wang)( HaoSheng Wang ) implemented voiceprint recognition in the script tool and tts-vits speech synthesis.
Linkang Zhan( JunityZhan@Case Western Reserve University ) collected Genshin Impact's system prompts and story data.
Yaokai Jia( Yaokai Jia )implemented the Vue frontend and practiced GPU extraction of Bert in a psychology project.
Pingyu Wu( Pingyu Wu@Juncai Shuyun )helped deploy the first version of the training code.
Haozhen Sun( [Haozhen Sun@Tianjin University] )plotted the character figures for ChatHaruhi.
</details>
## transfer into input-target format
If you want to convert this data into an input-target format, check the link here:
URL
Please cite the repo if you use the data or code in this repo.
|
[
"# ChatHaruhi",
"# Reviving Anime Character in Reality via Large Language Model\n\n![Code License]()\n![Data License]()\n\ngithub repo: URL\n\n\n\nChat-Haruhi-Suzumiyais a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,\n\n\n<details>\n <summary> The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, and Haozhen Sun,etc. </summary>\n\nThis is an open source project and the members were recruited from open source communities like DataWhale.\n\nLulu Li( Cheng Li@SenseTime )initiated the whole project and designed and implemented most of the features.\n \nZiang Leng( Ziang Leng@SenseTime )designed and implemented the training, data generation and backend architecture for ChatHaruhi 1.0.\n\nChenxi Yan( Chenxi Yan@Chengdu University of Information Technology )implemented and maintained the backend for ChatHaruhi 1.0.\n\nJunyi Shen( Junyi Shen@Zhejiang University )implemented the training code and participated in generating the training dataset.\n\nHao Wang( Hao Wang )collected script data for a TV series and participated in data augmentation.\n\nWeishi Mi( Weishi MI@Tsinghua University )participated in data augmentation.\n \nAria Fei( Aria Fei@BJUT )implemented the ASR feature for the script tool and participated in the Openness-Aware Personality paper project.\n\nXiaoyang Feng( Xiaoyang Feng@Nanjing Agricultural University )integrated the script recognition tool and participated in the Openness-Aware Personality paper project.\n\nYue Leng ( Song Yan )Collected data from The Big Bang Theory. 
Implemented script format conversion.\n\nscixing(HaoSheng Wang)( HaoSheng Wang ) implemented voiceprint recognition in the script tool and tts-vits speech synthesis.\n\nLinkang Zhan( JunityZhan@Case Western Reserve University ) collected Genshin Impact's system prompts and story data.\n\nYaokai Jia( Yaokai Jia )implemented the Vue frontend and practiced GPU extraction of Bert in a psychology project.\n\nPingyu Wu( Pingyu Wu@Juncai Shuyun )helped deploy the first version of the training code. \n\nHaozhen Sun( [Haozhen Sun@Tianjin University] )plot the character figures for ChatHaruhi. \n\n\n\n</details>",
"## transfer into input-target format\n\nIf you want to convert this data into an input-output format\n\ncheck the link here\n\nURL\n\n\nPlease cite the repo if you use the data or code in this repo."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n",
"# ChatHaruhi",
"# Reviving Anime Character in Reality via Large Language Model\n\n![Code License]()\n![Data License]()\n\ngithub repo: URL\n\n\n\nChat-Haruhi-Suzumiyais a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,\n\n\n<details>\n <summary> The project was developed by Cheng Li, Ziang Leng, Chenxi Yan, Xiaoyang Feng, HaoSheng Wang, Junyi Shen, Hao Wang, Weishi Mi, Aria Fei, Song Yan, Linkang Zhan, Yaokai Jia, Pingyu Wu, and Haozhen Sun,etc. </summary>\n\nThis is an open source project and the members were recruited from open source communities like DataWhale.\n\nLulu Li( Cheng Li@SenseTime )initiated the whole project and designed and implemented most of the features.\n \nZiang Leng( Ziang Leng@SenseTime )designed and implemented the training, data generation and backend architecture for ChatHaruhi 1.0.\n\nChenxi Yan( Chenxi Yan@Chengdu University of Information Technology )implemented and maintained the backend for ChatHaruhi 1.0.\n\nJunyi Shen( Junyi Shen@Zhejiang University )implemented the training code and participated in generating the training dataset.\n\nHao Wang( Hao Wang )collected script data for a TV series and participated in data augmentation.\n\nWeishi Mi( Weishi MI@Tsinghua University )participated in data augmentation.\n \nAria Fei( Aria Fei@BJUT )implemented the ASR feature for the script tool and participated in the Openness-Aware Personality paper project.\n\nXiaoyang Feng( Xiaoyang Feng@Nanjing Agricultural University )integrated the script recognition tool and participated in the Openness-Aware Personality paper project.\n\nYue Leng ( Song Yan )Collected data from The Big Bang Theory. 
Implemented script format conversion.\n\nscixing(HaoSheng Wang)( HaoSheng Wang ) implemented voiceprint recognition in the script tool and tts-vits speech synthesis.\n\nLinkang Zhan( JunityZhan@Case Western Reserve University ) collected Genshin Impact's system prompts and story data.\n\nYaokai Jia( Yaokai Jia )implemented the Vue frontend and practiced GPU extraction of Bert in a psychology project.\n\nPingyu Wu( Pingyu Wu@Juncai Shuyun )helped deploy the first version of the training code. \n\nHaozhen Sun( [Haozhen Sun@Tianjin University] )plot the character figures for ChatHaruhi. \n\n\n\n</details>",
"## transfer into input-target format\n\nIf you want to convert this data into an input-output format\n\ncheck the link here\n\nURL\n\n\nPlease cite the repo if you use the data or code in this repo."
] |
[
68,
5,
597,
42
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-English #language-Chinese #license-cc-by-4.0 #arxiv-2308.09597 #region-us \n# ChatHaruhi"
] |
8e8033432f97f8929bd78fa4fb2ad0fc53dd069f
|
# Dataset of destroyer_hime/駆逐棲姫 (Kantai Collection)
This is the dataset of destroyer_hime/駆逐棲姫 (Kantai Collection), containing 34 images and their tags.
The core tags of this character are `long_hair, side_ponytail, white_hair, white_skin, colored_skin, hat, purple_eyes, pale_skin`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 34 | 40.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/destroyer_hime_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 34 | 29.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/destroyer_hime_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 77 | 54.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/destroyer_hime_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 34 | 38.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/destroyer_hime_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 77 | 67.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/destroyer_hime_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/destroyer_hime_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 34 |  |  |  |  |  | abyssal_ship, 1girl, serafuku, solo, skirt, sleeveless, bare_shoulders, choker, midriff, navel, black_gloves, looking_at_viewer, amputee, neckerchief |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | abyssal_ship | 1girl | serafuku | solo | skirt | sleeveless | bare_shoulders | choker | midriff | navel | black_gloves | looking_at_viewer | amputee | neckerchief |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------|:--------|:-----------|:-------|:--------|:-------------|:-----------------|:---------|:----------|:--------|:---------------|:--------------------|:----------|:--------------|
| 0 | 34 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/destroyer_hime_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-21T23:43:01+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T21:45:14+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of destroyer\_hime/駆逐棲姫 (Kantai Collection)
===================================================
This is the dataset of destroyer\_hime/駆逐棲姫 (Kantai Collection), containing 34 images and their tags.
The core tags of this character are 'long\_hair, side\_ponytail, white\_hair, white\_skin, colored\_skin, hat, purple\_eyes, pale\_skin', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6c4e7a1debed85cb7689aab252a9d2c008000d5c
|
# Dataset of tone/利根/利根 (Kantai Collection)
This is the dataset of tone/利根/利根 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, twintails, ribbon, brown_hair, hair_ribbon, white_ribbon, brown_eyes, hair_between_eyes, breasts, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 492.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tone_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 314.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tone_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1143 | 658.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tone_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 447.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tone_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1143 | 882.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tone_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tone_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, pelvic_curtain, single_elbow_glove, single_thighhigh, smile, solo, uneven_legwear, black_gloves, side_slit, looking_at_viewer, single_glove, boots, no_panties, open_mouth, hand_on_hip |
| 1 | 5 |  |  |  |  |  | 1girl, military_uniform, pelvic_curtain, simple_background, solo, white_background, dated, looking_at_viewer, one-hour_drawing_challenge, single_thighhigh, sitting, twitter_username, uneven_legwear, black_thighhighs, single_elbow_glove, black_footwear, black_gloves, boots, red_bowtie |
| 2 | 7 |  |  |  |  |  | 1girl, solo, looking_at_viewer, fang, :d, open_mouth, upper_body |
| 3 | 6 |  |  |  |  |  | 1girl, simple_background, solo, closed_mouth, collarbone, small_breasts, blush, looking_at_viewer, micro_bikini, white_background, green_eyes, navel, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | pelvic_curtain | single_elbow_glove | single_thighhigh | smile | solo | uneven_legwear | black_gloves | side_slit | looking_at_viewer | single_glove | boots | no_panties | open_mouth | hand_on_hip | military_uniform | simple_background | white_background | dated | one-hour_drawing_challenge | sitting | twitter_username | black_thighhighs | black_footwear | red_bowtie | fang | :d | upper_body | closed_mouth | collarbone | small_breasts | blush | micro_bikini | green_eyes | navel |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------------|:-------------------|:--------|:-------|:-----------------|:---------------|:------------|:--------------------|:---------------|:--------|:-------------|:-------------|:--------------|:-------------------|:--------------------|:-------------------|:--------|:-----------------------------|:----------|:-------------------|:-------------------|:-----------------|:-------------|:-------|:-----|:-------------|:---------------|:-------------|:----------------|:--------|:---------------|:-------------|:--------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | X | X | X | | X | | X | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | | | X | | | | X | | | | X | | | | | | | | | | | | X | X | X | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/tone_kantaicollection
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-22T00:03:43+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T15:23:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tone/利根/利根 (Kantai Collection)
=========================================
This is the dataset of tone/利根/利根 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are 'long\_hair, twintails, ribbon, brown\_hair, hair\_ribbon, white\_ribbon, brown\_eyes, hair\_between\_eyes, breasts, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |