sha (stringlengths 40–40) | text (stringlengths 1–13.4M) | id (stringlengths 2–117) | tags (listlengths 1–7.91k) | created_at (stringlengths 25–25) | metadata (stringlengths 2–875k) | last_modified (stringlengths 25–25) | arxiv (listlengths 0–25) | languages (listlengths 0–7.91k) | tags_str (stringlengths 17–159k) | text_str (stringlengths 1–447k) | text_lists (listlengths 0–352) | processed_texts (listlengths 1–353) | tokens_length (listlengths 1–353) | input_texts (listlengths 1–40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
d9956b9f64e05e1740cb24f0e230d366cdae8496
|
# Dataset Card for "find_word_10000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_10000
|
[
"region:us"
] |
2023-08-17T08:45:12+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1522035, "num_examples": 21000}, {"name": "eval_find_word", "num_bytes": 53196, "num_examples": 1000}], "download_size": 743489, "dataset_size": 1575231}}
|
2023-08-17T08:45:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_10000"
More Information needed
|
[
"# Dataset Card for \"find_word_10000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_10000\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_10000\"\n\nMore Information needed"
] |
be625709a277d9378ab86a0910fb4fc3da388645
|
# Dataset Card for "find_word_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_1000
|
[
"region:us"
] |
2023-08-17T08:46:33+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 233196, "num_examples": 3000}, {"name": "eval_find_word", "num_bytes": 53196, "num_examples": 1000}], "download_size": 136283, "dataset_size": 286392}}
|
2023-08-17T08:46:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_1000"
More Information needed
|
[
"# Dataset Card for \"find_word_1000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_1000\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_1000\"\n\nMore Information needed"
] |
874086ab0192347d91dd883aa074a244db73d22d
|
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
AI-C/codeformer-clone
|
[
"region:us"
] |
2023-08-17T08:48:04+00:00
|
{"title": "CodeFormer", "emoji": "\ud83d\udc3c", "colorFrom": "blue", "colorTo": "green", "sdk": "gradio", "sdk_version": "3.37.0", "app_file": "app.py", "pinned": false}
|
2023-08-17T09:16:49+00:00
|
[] |
[] |
TAGS
#region-us
|
Check out the configuration reference at URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
2d14721bac5435ab7465343b85934a8de4cddef5
|
# Dataset Card for "semantic-try"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Michael823/semantic-try
|
[
"region:us"
] |
2023-08-17T09:07:02+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3347017.0, "num_examples": 10}, {"name": "validation", "num_bytes": 834103.0, "num_examples": 3}], "download_size": 849393, "dataset_size": 4181120.0}}
|
2023-08-18T01:27:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "semantic-try"
More Information needed
|
[
"# Dataset Card for \"semantic-try\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"semantic-try\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"semantic-try\"\n\nMore Information needed"
] |
3cc17aec7ee7b0910f3e1d72ff29cbe1809dfe2a
|
# Dataset of toyosatomimi_no_miko/豊聡耳神子/토요사토미미노미코 (Touhou)
This is the dataset of toyosatomimi_no_miko/豊聡耳神子/토요사토미미노미코 (Touhou), containing 500 images and their tags.
The core tags of this character are `short_hair, brown_hair, blonde_hair, brown_eyes, pointy_hair, yellow_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 555.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyosatomimi_no_miko_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 391.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyosatomimi_no_miko_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1100 | 743.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyosatomimi_no_miko_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 522.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyosatomimi_no_miko_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1100 | 923.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toyosatomimi_no_miko_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/toyosatomimi_no_miko_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
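After extraction, the same `LocalSource` iteration can double as a quick tag survey. The sketch below is not part of the original card; it assumes `item.meta['tags']` is either a tag-to-score mapping or a plain list of tag strings, as suggested by the loading example above.
```python
from collections import Counter

from waifuc.source import LocalSource

# Tally how often each tag occurs across the extracted raw dataset.
tag_counter = Counter()
for item in LocalSource('dataset_dir'):
    tags = item.meta['tags']
    # If tags is a mapping (tag -> score), count its keys; if it is a list, count it directly.
    tag_counter.update(tags.keys() if isinstance(tags, dict) else tags)

for tag, count in tag_counter.most_common(20):
    print(f"{tag}: {count}")
```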
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, bangs, bare_shoulders, bracelet, earmuffs, hair_between_eyes, looking_at_viewer, neck_ribbon, purple_ribbon, sleeveless_shirt, solo, breasts, holding, purple_skirt, ritual_baton, :d, blouse, open_mouth, simple_background, bare_arms, black_belt, blush, cowboy_shot, gradient_background, white_background |
| 1 | 10 |  |  |  |  |  | 1girl, bracelet, earmuffs, ritual_baton, skirt, sleeveless_shirt, smile, solo, sword, belt, looking_at_viewer, open_mouth, cape, sheath |
| 2 | 7 |  |  |  |  |  | 1girl, belt, dress, earmuffs, ritual_baton, sleeveless, solo, sword, bracelet, scabbard, sheathed, skirt |
| 3 | 7 |  |  |  |  |  | 1girl, belt, bracelet, earmuffs, skirt, solo, sword, sheath, sleeveless_shirt |
| 4 | 14 |  |  |  |  |  | 1girl, earmuffs, looking_at_viewer, solo, bangs, bare_shoulders, sleeveless_shirt, bracelet, neck_ribbon, purple_ribbon, upper_body, hair_between_eyes, smile, collarbone, simple_background, closed_mouth, white_background, blush, tattoo, light_brown_hair, sailor_collar, small_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bangs | bare_shoulders | bracelet | earmuffs | hair_between_eyes | looking_at_viewer | neck_ribbon | purple_ribbon | sleeveless_shirt | solo | breasts | holding | purple_skirt | ritual_baton | :d | blouse | open_mouth | simple_background | bare_arms | black_belt | blush | cowboy_shot | gradient_background | white_background | skirt | smile | sword | belt | cape | sheath | dress | sleeveless | scabbard | sheathed | upper_body | collarbone | closed_mouth | tattoo | light_brown_hair | sailor_collar | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-----------------|:-----------|:-----------|:--------------------|:--------------------|:--------------|:----------------|:-------------------|:-------|:----------|:----------|:---------------|:---------------|:-----|:---------|:-------------|:--------------------|:------------|:-------------|:--------|:--------------|:----------------------|:-------------------|:--------|:--------|:--------|:-------|:-------|:---------|:--------|:-------------|:-----------|:-----------|:-------------|:-------------|:---------------|:---------|:-------------------|:----------------|:----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | | X | X | | X | | | X | X | | | | X | | | X | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | | X | X | | | | | | X | | | | X | | | | | | | | | | | X | | X | X | | | X | X | X | X | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | X | X | | | | | X | X | | | | | | | | | | | | | | | X | | X | X | | X | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | X | | | X | | | X | | X | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/toyosatomimi_no_miko_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T09:12:36+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T15:44:01+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of toyosatomimi\_no\_miko/豊聡耳神子/토요사토미미노미코 (Touhou)
==========================================================
This is the dataset of toyosatomimi\_no\_miko/豊聡耳神子/토요사토미미노미코 (Touhou), containing 500 images and their tags.
The core tags of this character are 'short\_hair, brown\_hair, blonde\_hair, brown\_eyes, pointy\_hair, yellow\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
644327b6320ca411192edffd9df0dec17eac1da1
|
# Dataset of hakurei_reimu/博麗霊夢/하쿠레이레이무 (Touhou)
This is the dataset of hakurei_reimu/博麗霊夢/하쿠레이레이무 (Touhou), containing 500 images and their tags.
The core tags of this character are `bow, hair_bow, red_bow, brown_hair, long_hair, bangs, brown_eyes, sidelocks, frilled_bow, hair_between_eyes, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 851.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hakurei_reimu_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 435.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hakurei_reimu_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1257 | 942.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hakurei_reimu_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 729.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hakurei_reimu_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1257 | 1.37 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hakurei_reimu_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hakurei_reimu_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
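For the IMG+TXT packages listed above, a similar download-and-extract pattern applies. The sketch below is not from the original card and assumes each image in `dataset-800.zip` ships with a sibling `.txt` file holding its tags; the card only labels the package "IMG+TXT", so treat that pairing as an assumption.
```python
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

# Download the 800px IMG+TXT package from the table above.
zip_file = hf_hub_download(
    repo_id='CyberHarem/hakurei_reimu_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

out_dir = Path('dataset_800')
out_dir.mkdir(exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)

# Pair every image with its tag file, if one exists (assumed layout).
for image_path in sorted(out_dir.rglob('*.png')) + sorted(out_dir.rglob('*.jpg')):
    txt_path = image_path.with_suffix('.txt')
    tags = txt_path.read_text(encoding='utf-8').strip() if txt_path.exists() else ''
    print(image_path.name, '->', tags)
```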
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, frills, hair_tubes, looking_at_viewer, simple_background, solo, upper_body, white_background, yellow_ascot, bare_shoulders, detached_sleeves, closed_mouth, blush, smile, black_hair, breasts, ribbon-trimmed_sleeves |
| 1 | 5 |  |  |  |  |  | 1girl, bare_shoulders, detached_sleeves, frills, hair_tubes, red_skirt, simple_background, solo, white_background, wide_sleeves, yellow_ascot, black_hair, closed_mouth, gohei, holding, looking_at_viewer, ribbon-trimmed_sleeves, blush, long_sleeves, shirt, red_vest |
| 2 | 16 |  |  |  |  |  | 1girl, detached_sleeves, hair_tubes, looking_at_viewer, red_skirt, ribbon-trimmed_sleeves, solo, wide_sleeves, bare_shoulders, red_vest, yellow_ascot, long_sleeves, gohei, holding, smile, blush, closed_mouth, ofuda, red_shirt, frilled_skirt, white_background, simple_background, standing, skirt_set |
| 3 | 5 |  |  |  |  |  | 1girl, detached_sleeves, gohei, holding, looking_at_viewer, ofuda, red_skirt, red_vest, ribbon-trimmed_sleeves, solo, wide_sleeves, yellow_ascot, closed_mouth, frilled_skirt, full_body, midriff, navel, nontraditional_miko, shoes, white_background, black_footwear, red_shirt, smile, white_socks, yin_yang_orb, black_hair, frilled_hair_tubes, medium_hair, sarashi, torii |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | frills | hair_tubes | looking_at_viewer | simple_background | solo | upper_body | white_background | yellow_ascot | bare_shoulders | detached_sleeves | closed_mouth | blush | smile | black_hair | breasts | ribbon-trimmed_sleeves | red_skirt | wide_sleeves | gohei | holding | long_sleeves | shirt | red_vest | ofuda | red_shirt | frilled_skirt | standing | skirt_set | full_body | midriff | navel | nontraditional_miko | shoes | black_footwear | white_socks | yin_yang_orb | frilled_hair_tubes | medium_hair | sarashi | torii |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------------|:--------------------|:--------------------|:-------|:-------------|:-------------------|:---------------|:-----------------|:-------------------|:---------------|:--------|:--------|:-------------|:----------|:-------------------------|:------------|:---------------|:--------|:----------|:---------------|:--------|:-----------|:--------|:------------|:----------------|:-----------|:------------|:------------|:----------|:--------|:----------------------|:--------|:-----------------|:--------------|:---------------|:---------------------|:--------------|:----------|:--------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | | X | X | X | X | | X | X | X | X | X | X | X | | | X | X | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | X | | X | | X | X | | X | X | | X | X | | X | X | X | X | X | | | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/hakurei_reimu_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T09:22:45+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:25:03+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hakurei\_reimu/博麗霊夢/하쿠레이레이무 (Touhou)
===============================================
This is the dataset of hakurei\_reimu/博麗霊夢/하쿠레이레이무 (Touhou), containing 500 images and their tags.
The core tags of this character are 'bow, hair\_bow, red\_bow, brown\_hair, long\_hair, bangs, brown\_eyes, sidelocks, frilled\_bow, hair\_between\_eyes, red\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
82b8979fd272215ef3b275af4d2b6cada3a62c34
|
**Product Name** - [Wild Stallion Pro](https://wild-stallion-pro-male-enhancement.jimdosite.com/)
**Treatment** - Erectile Dysfunction \[Male Enhancement\]
**Supplement Form** - Capsules
**Benefits** - Regain Natural Energy, Stamina, & Sex Drive, Get Harder, Longer Lasting Erections
**Customer Reviews** - ★★★★✰ 4.9/5
**Official Website** - [https://www.healthsupplement24x7.com/get-wild-stallion-pro](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
**[Wild Stallion Pro](https://pdfhost.io/v/oGby.zhuB_Wild_Stallion_Pro_Male_Enhancement_Is_Work_Only_For_Sexually_Active_Men_Read_Full_Article_Below) USA Reviews**: Wild Stallion Pro is a nutritional supplement for male enhancement which is available in powder form and is proven to be the most effective for battling erectile dysfunction in men. [Wild Stallion Pro](https://www.sympla.com.br/evento/wild-stallion-pro/2124371) is a brand new supplement that benefits from all-natural and potent ingredients. Its daily use helps to boost intimate performance and improves blood flow to vital sections. Being a nutritional supplement, [Wild Stallion Pro](https://healthsupplements24x7.blogspot.com/2023/08/wild-stallion-pro.html) supports optimal health and is based on the findings of science and technology.
[](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
### _**[CLICK HERE TO BUY - “Wild Stallion Pro (United States)”](https://www.healthsupplement24x7.com/get-wild-stallion-pro)**_
What Is Wild Stallion Pro?
--------------------------
[Wild Stallion Pro](https://wild-stallion-pro-reviews.hashnode.dev/wild-stallion-pro-male-enhancement-is-work-only-for-sexually-active-men-read-full-article-below) is a product that has been specifically designed to improve sexual health and vigor. It is a herbal tea made using only organic components, and its recipe comes from the indigenous Tupi people.
[Wild Stallion Pro](https://www.ivoox.com/wild-stallion-pro-male-enhancement-is-work-only-audios-mp3_rf_114444938_1.html) is a powder that has extracts from several natural substances and elements that can improve your and your partner's sex life. Wild Stallion Pro doesn't list any negative effects like blindness, limping, or other cardiac problems.
Wild Stallion Pro also has other health advantages for the body, including improved confidence and greater physical fitness and stamina. Without regard to age, Wild Stallion Pro gives the consumer a feeling of youth and vitality.
How Wild Stallion Pro Works
---------------------------
Three core elements underlie how [Wild Stallion Pro](https://form.jotform.com/wildstallionpro/wild-stallion-pro-male-enhancement) operates. Recent research indicates that the endothelium, a two-cell organ, is responsible for manufacturing more cGMP and lowering levels of the PDE5 enzyme to strengthen erections.
By combining a variety of natural herbs and plants and making the mix into a powdered formulation, Wild Stallion Pro’s creators state that they address the issues with erectile dysfunction organically. The following section summarizes the effects of Wild Stallion Pro on the body.
[](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
What Health Benefits Can Wild Stallion Pro Provide?
---------------------------------------------------
The Wild Stallion Pro supplement provides a range of health benefits. From boosted testosterone levels to reduced oxidative stress and anxiety, Wild Stallion Pro can do it all. Let’s learn about some of its benefits:
**Boosts Testosterone Levels**: The zinc in Wild Stallion Pro's male health supplement is an essential mineral for testosterone production. Zinc is found in high concentrations in the testes and plays a key role in regulating testosterone levels. Studies have shown that zinc deficiency can lead to low testosterone levels and poor performance. Supplementing with zinc has increased testosterone levels and improved sperm quality. Wild Stallion Pro's male health supplement provides a potent dose of zinc to help support testosterone production and optimize men's health.
**Increases Energy Levels:** Maca root extract is a popular ingredient in many male health supplements because it boosts energy levels and stamina. Wild Stallion Pro also contains a high concentration of maca root extract, which makes it an ideal choice for men looking to improve their energy levels and stamina. In addition, maca root extract also helps improve mental focus and clarity, which can be beneficial for men who need to be mentally at the top of their game.
**Improved Energy and Endurance:** Wild Stallion Pro increases energy levels by giving the body the necessary nutrients. With enough sexual energy, one can indulge in extended and passionate sexual interactions throughout the night.
**Reduces Stress And Anxiety:** It's no secret that stress and anxiety can take a toll on your health, but did you know that they can also impact your reproductive health? According to some experts, one of the leading causes of declining male fertility is stress and anxiety. While there are many ways to manage stress and anxiety, one simple way is to take a supplement like Wild Stallion Pro. Wild Stallion Pro contains maca root extract and ginger extract, which have been shown to help regulate stress and anxiety levels.
**Supports A Strong Immunity:** The zinc stearate and L-Arginine in Wild Stallion Pro's male reproductive health supplement provide immune system support in several ways. Zinc is an essential mineral for proper functioning of the immune system, and L-Arginine is an amino acid that plays a role in wound healing. Together, these two ingredients help to keep the immune system functioning properly, which is important for overall health and well-being. Additionally, they help to speed up the healing process if the body is injured or fighting off an infection.
[.png)](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
### _**[Click Here to Buy Wild Stallion Pro With Discount!](https://www.healthsupplement24x7.com/get-wild-stallion-pro)**_
**Ingredients used in Wild Stallion Pro**
-----------------------------------------
Wild Stallion Pro's ingredients are all natural and are intended to boost male sexual performance. Richard Johnson has chosen the best ingredients to create this formula to help men with sexual disorders. Let’s see the complete list of the Wild Stallion Pro constituents:
**L-Arginine** - The building block L-arginine is what the body uses to make nitric oxide. The substance opens up the blood arteries, improving oxygenation and blood flow. L-arginine enhances penile girth and length and lowers blood pressure. Erections are harder and firmer as a result of the increased blood flow to the penis.
**Tribulus Terrestris** - Tribulus Terrestris improves sex drive, arousal, orgasm, and satisfaction. Combined with L-arginine, it flushes out toxic hormones in the body and increases testosterone production. Tribulus Terrestris has androgenic effects that support athlete performance while boosting testosterone levels.
**Horny Goat Weed** - The natural sex enhancer horny goat weed opens the AR gene, allowing it to realize its full potential. It contains an ingredient called icariin, which prevents the development of a protein linked to erectile dysfunction. The blood vessels are widened by horny goat weed, allowing blood to flow to the penis and improving the quality of erections.
[](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
### _**[Don't wait - Order Wild Stallion Pro here for the best deal!](https://www.healthsupplement24x7.com/get-wild-stallion-pro)**_
How to Use It?
--------------
One bottle of Wild Stallion Pro contains 60 capsules, which is a one-month supply. To benefit from this supplement, you have to be consistent when using it. You must use it for at least 2 to 3 months for better results.
According to the official website, this formula is not meant for treating, curing, or preventing diseases and must be used in conjunction with a healthy diet and regular exercise. It is also essential to consult your doctor before using the pills to avoid health complications. For any other information about the formula, such as storage conditions, consult the manufacturer’s documentation.
Does Wild Stallion Pro have side effects?
-----------------------------------------
The formula contains natural ingredients and works naturally to boost testosterone levels. Therefore, this supplement has no side effects.
How Much Does Wild Stallion Pro Cost?
-------------------------------------
First, [Wild Stallion Pro](https://devfolio.co/projects/wild-stallion-pro-reviews-80b8) is only available on its official website. It is not available on Amazon or Walmart. So, avoid purchasing it from any other website except the official website.
When you visit the official website, you’ll get the option to purchase any of the three packages, which are:
* **Package 1:** One bottle for $69 + Shipping
* **Package 3:** Three bottles for $177 ($59 per bottle) + Free Shipping
* **Package 6:** Six bottles for $294 ($49 per bottle) + Free Shipping
[.png)](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
### _**[Click Here to Buy Wild Stallion Pro From The Official Website](https://www.healthsupplement24x7.com/get-wild-stallion-pro)**_
**Refund policy of Wild Stallion Pro**
--------------------------------------
[Wild Stallion Pro](https://wildstallion.clubeo.com/page/wild-stallion-pro-is-it-only-for-sexually-active-men-read-full-article.html) comes with a sixty-day cash-back guarantee, as the creator has confidence in the product. This gives people two months to decide whether it suits them.
You can claim a refund if you think the supplement didn’t give the desired result. The creator will provide a complete refund to the buyer with no questions asked. The person should contact customer care within sixty days from the date of purchase.
They will return the full amount whether the bottle still has some capsules left or has been used up entirely. There are no subscriptions or hidden fees on the official portal.
**Where To Purchase?**
-------------------------
If you need to purchase the [Wild Stallion Pro](https://wildstallion.clubeo.com/calendar/2023/08/17/wild-stallion-pro-male-enhancement-is-it-only-for-sexually-active-men-read-full-article) capsule, you should visit the **official portal** of the manufacturer. This website is 100% secure, so you can place your order to make the purchase.
While ordering the product online, you should be aware of fake websites. It is beneficial to order bundle packages because the manufacturer provides discounts on bulk orders; the three packages of the organic supplement are listed above.
No matter which package you choose, you will get free shipping on delivery. People ordering the product from outside the country pay shipping fees based on their location.
**Wild Stallion Pro Reviews – Final Words**
-------------------------------------------
In a nutshell, [**Wild Stallion Pro**](https://wildstallion.clubeo.com/) can be used to boost testosterone levels naturally. It has a natural composition, and the company is offering free delivery on all orders. For more details on orders, refunds, and delivery locations, **visit the official website using this link**.
[.png)](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
### [_**Click Here to Order Wild Stallion Pro for the Best Price Available in USA!**_](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
[https://healthsupplements24x7.blogspot.com/2023/08/wild-stallion-pro.html](https://healthsupplements24x7.blogspot.com/2023/08/wild-stallion-pro.html)
[https://pdfhost.io/v/oGby.zhuB\_Wild\_Stallion\_Pro\_Male\_Enhancement\_Is\_Work\_Only\_For\_Sexually\_Active\_Men\_Read\_Full\_Article\_Below](https://pdfhost.io/v/oGby.zhuB_Wild_Stallion_Pro_Male_Enhancement_Is_Work_Only_For_Sexually_Active_Men_Read_Full_Article_Below)
[https://wildstallion.clubeo.com/](https://wildstallion.clubeo.com/)
[https://wildstallion.clubeo.com/calendar/2023/08/17/wild-stallion-pro-male-enhancement-is-it-only-for-sexually-active-men-read-full-article](https://wildstallion.clubeo.com/calendar/2023/08/17/wild-stallion-pro-male-enhancement-is-it-only-for-sexually-active-men-read-full-article)
[https://www.sympla.com.br/evento/wild-stallion-pro/2124371](https://www.sympla.com.br/evento/wild-stallion-pro/2124371)
[https://wildstallion.clubeo.com/page/wild-stallion-pro-is-it-only-for-sexually-active-men-read-full-article.html](https://wildstallion.clubeo.com/page/wild-stallion-pro-is-it-only-for-sexually-active-men-read-full-article.html)
[https://wildstallion.clubeo.com/page/wild-stallion-pro-male-enhancement-is-it-only-for-sexually-active-men-read-full-article.html](https://wildstallion.clubeo.com/page/wild-stallion-pro-male-enhancement-is-it-only-for-sexually-active-men-read-full-article.html)
[https://www.scoop.it/topic/wild-stallion-pro-reviews/](https://www.scoop.it/topic/wild-stallion-pro-reviews/)
[https://wild-stallion-pro-reviews.hashnode.dev/wild-stallion-pro-male-enhancement-is-work-only-for-sexually-active-men-read-full-article-below](https://wild-stallion-pro-reviews.hashnode.dev/wild-stallion-pro-male-enhancement-is-work-only-for-sexually-active-men-read-full-article-below)
[https://www.ivoox.com/wild-stallion-pro-male-enhancement-is-work-only-audios-mp3\_rf\_114444938\_1.html](https://www.ivoox.com/wild-stallion-pro-male-enhancement-is-work-only-audios-mp3_rf_114444938_1.html)
[https://wild-stallion-pro-male-enhancement.jimdosite.com/](https://wild-stallion-pro-male-enhancement.jimdosite.com/)
[https://colab.research.google.com/drive/1tThdp38stmRVVp-d5pc4i8PqEJxF9Iq5](https://colab.research.google.com/drive/1tThdp38stmRVVp-d5pc4i8PqEJxF9Iq5)
[https://colab.research.google.com/drive/1vUWfvLtUhG9ED-p75D3P-NJ2HugMZIjc](https://colab.research.google.com/drive/1vUWfvLtUhG9ED-p75D3P-NJ2HugMZIjc)
[https://colab.research.google.com/drive/1uvoFLWIU3uxPhvKFrrpCIDtM\_Lp\_JZ4f](https://colab.research.google.com/drive/1uvoFLWIU3uxPhvKFrrpCIDtM_Lp_JZ4f)
[https://colab.research.google.com/drive/1D0IgM-jlfShg0pHRZLhbxEbuzAeoCX5Y](https://colab.research.google.com/drive/1D0IgM-jlfShg0pHRZLhbxEbuzAeoCX5Y)
[https://colab.research.google.com/drive/1AdwG3hv9F94gkbiU1juN3PTMM6umpcC\_](https://colab.research.google.com/drive/1AdwG3hv9F94gkbiU1juN3PTMM6umpcC_)
[https://form.jotform.com/wildstallionpro/wild-stallion-pro-male-enhancement](https://form.jotform.com/wildstallionpro/wild-stallion-pro-male-enhancement)
[https://devfolio.co/projects/wild-stallion-pro-reviews-80b8](https://devfolio.co/projects/wild-stallion-pro-reviews-80b8)
|
wildstallionpro/wild-stallion-pro
|
[
"region:us"
] |
2023-08-17T09:26:40+00:00
|
{}
|
2023-08-17T09:27:40+00:00
|
[] |
[] |
TAGS
#region-us
|
Product Name - Wild Stallion Pro
Treatment - Erectile Dysfunction \[Male Enhancement\]
Supplement Form - Capsules
Benefits - Regain Natural Energy, Stamina, & Sex Drive, Get Harder, Longer Lasting Erections
Customer Reviews - 4.9/5
Official Website - URL
Wild Stallion Pro USA Reviews: Wild Stallion Pro is a nutritional supplement for male enhancement which is available in powder form and is proven to be the most effective for battling erectile dysfunction in men. Wild Stallion Pro is a brand new supplement that benefits from all-natural and potent ingredients. Its daily use helps to boost intimate performance and improves blood flow to vital sections. Being a nutritional supplement, Wild Stallion Pro supports optimal health and is based on the findings of science and technology.
”_
What Is Wild Stallion Pro?
--------------------------
Wild Stallion Pro is a product that has been particularly designed to improve sexual health and vigor. It is a herbal tea made using only organic components. The old indigenous Tupi Indians are the source of this product’s recipe.
Wild Stallion Pro is a powder that has extracts from several natural substances and elements that can improve your and your partner's sex life. Wild Stallion Pro doesn't list any negative effects like blindness, limping, or other cardiac problems.
Wild Stallion Pro also has other health advantages for the body, including improved confidence and greater physical fitness and stamina. Without regard to age, Wild Stallion Pro gives the consumer a feeling of youth and vitality.
How Wild Stallion Pro Works
---------------------------
Three core elements underlie how Wild Stallion Pro operates. Recent research indicates that the endothelium, a two-cell organ, is responsible for manufacturing more cGMP and lowering levels of the PDE5 enzyme to strengthen erections.
By combining a variety of natural herbs and plants and making the mix into a powdered formulation, Wild Stallion Pro’s creators state that they address the issues with erectile dysfunction organically. The following section summarizes the effects of Wild Stallion Pro on the body.
](URL
### _Click Here to Buy Wild Stallion Pro With Discount!_
Ingredients used in Wild Stallion Pro
-----------------------------------------
Wild Stallion Pro ingredients are all-natural that boosts male sexual performance. Richard Johnson has chosen the best ingredients to create this formula to help men with sexual disorders. Let’s see the complete list of the Wild Stallion Pro constituents:
L-Arginine - The building block L-arginine is what the body uses to make nitric oxide. The substance opens up the blood arteries, improving oxygenation and blood flow. L-arginine enhances penile girth and length and lowers blood pressure. Erections are harder and firmer as a result of the increased blood flow to the penis.
Tribulus Terrestris - Tribulus Terrestris improves sex drive, arousal, orgasm, and satisfaction. Combined with L-arginine, it flushes out toxic hormones in the body and increases testosterone production. Tribulus Terrestris has androgenic effects that support athlete performance while boosting testosterone levels.
Horny Goat Weed - The natural sex enhancer honey goat weed opens the AR gene, allowing it to realise its full potential. It has an ingredient called icariin, which prevents the development of a protein linked to erectile dysfunction. The blood vessels are widened by horny goat weed, allowing blood to flow to the penis and improving the quality of erections.
 + Free Shipping
* Package 6:Six bottles for $294 ($49 per bottle) + Free Shipping
](URL
### _Click Here to Order Wild Stallion Pro for the Best Price Available in USA!_
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
|
[
"### _CLICK HERE TO BUY - “Wild Stallion Pro (United States)”_\n\nWhat Is Wild Stallion Pro?\n--------------------------\n\nWild Stallion Pro is a product that has been particularly designed to improve sexual health and vigor. It is a herbal tea made using only organic components. The old indigenous Tupi Indians are the source of this product’s recipe.\n\nWild Stallion Pro is a powder that has extracts from several natural substances and elements that can improve your and your partner's sex life. Wild Stallion Pro doesn't list any negative effects like blindness, limping, or other cardiac problems.\n\nWild Stallion Pro also has other health advantages for the body, including improved confidence and greater physical fitness and stamina. Without regard to age, Wild Stallion Pro gives the consumer a feeling of youth and vitality.\n\nHow Wild Stallion Pro Works\n---------------------------\n\nThree core elements underlie how Wild Stallion Pro operates. Recent research indicates that the endothelium, a two-cell organ, is responsible for manufacturing more cGMP and lowering levels of the PDE5 enzyme to strengthen erections.\n\nBy combining a variety of natural herbs and plants and making the mix into a powdered formulation, Wild Stallion Pro’s creators state that they address the issues with erectile dysfunction organically. The following section summarizes the effects of Wild Stallion Pro on the body.\n\n](URL",
"### _Click Here to Buy Wild Stallion Pro With Discount!_\n\nIngredients used in Wild Stallion Pro\n-----------------------------------------\n\nWild Stallion Pro ingredients are all-natural that boosts male sexual performance. Richard Johnson has chosen the best ingredients to create this formula to help men with sexual disorders. Let’s see the complete list of the Wild Stallion Pro constituents:\n\nL-Arginine - The building block L-arginine is what the body uses to make nitric oxide. The substance opens up the blood arteries, improving oxygenation and blood flow. L-arginine enhances penile girth and length and lowers blood pressure. Erections are harder and firmer as a result of the increased blood flow to the penis.\n\nTribulus Terrestris - Tribulus Terrestris improves sex drive, arousal, orgasm, and satisfaction. Combined with L-arginine, it flushes out toxic hormones in the body and increases testosterone production. Tribulus Terrestris has androgenic effects that support athlete performance while boosting testosterone levels.\n\nHorny Goat Weed - The natural sex enhancer honey goat weed opens the AR gene, allowing it to realise its full potential. It has an ingredient called icariin, which prevents the development of a protein linked to erectile dysfunction. The blood vessels are widened by horny goat weed, allowing blood to flow to the penis and improving the quality of erections.\n\n + Free Shipping\n* Package 6:Six bottles for $294 ($49 per bottle) + Free Shipping\n\n](URL",
"### _Click Here to Order Wild Stallion Pro for the Best Price Available in USA!_\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] |
[
"TAGS\n#region-us \n",
"### _CLICK HERE TO BUY - “Wild Stallion Pro (United States)”_\n\nWhat Is Wild Stallion Pro?\n--------------------------\n\nWild Stallion Pro is a product that has been particularly designed to improve sexual health and vigor. It is a herbal tea made using only organic components. The old indigenous Tupi Indians are the source of this product’s recipe.\n\nWild Stallion Pro is a powder that has extracts from several natural substances and elements that can improve your and your partner's sex life. Wild Stallion Pro doesn't list any negative effects like blindness, limping, or other cardiac problems.\n\nWild Stallion Pro also has other health advantages for the body, including improved confidence and greater physical fitness and stamina. Without regard to age, Wild Stallion Pro gives the consumer a feeling of youth and vitality.\n\nHow Wild Stallion Pro Works\n---------------------------\n\nThree core elements underlie how Wild Stallion Pro operates. Recent research indicates that the endothelium, a two-cell organ, is responsible for manufacturing more cGMP and lowering levels of the PDE5 enzyme to strengthen erections.\n\nBy combining a variety of natural herbs and plants and making the mix into a powdered formulation, Wild Stallion Pro’s creators state that they address the issues with erectile dysfunction organically. The following section summarizes the effects of Wild Stallion Pro on the body.\n\n](URL",
"### _Click Here to Buy Wild Stallion Pro With Discount!_\n\nIngredients used in Wild Stallion Pro\n-----------------------------------------\n\nWild Stallion Pro ingredients are all-natural that boosts male sexual performance. Richard Johnson has chosen the best ingredients to create this formula to help men with sexual disorders. Let’s see the complete list of the Wild Stallion Pro constituents:\n\nL-Arginine - The building block L-arginine is what the body uses to make nitric oxide. The substance opens up the blood arteries, improving oxygenation and blood flow. L-arginine enhances penile girth and length and lowers blood pressure. Erections are harder and firmer as a result of the increased blood flow to the penis.\n\nTribulus Terrestris - Tribulus Terrestris improves sex drive, arousal, orgasm, and satisfaction. Combined with L-arginine, it flushes out toxic hormones in the body and increases testosterone production. Tribulus Terrestris has androgenic effects that support athlete performance while boosting testosterone levels.\n\nHorny Goat Weed - The natural sex enhancer honey goat weed opens the AR gene, allowing it to realise its full potential. It has an ingredient called icariin, which prevents the development of a protein linked to erectile dysfunction. The blood vessels are widened by horny goat weed, allowing blood to flow to the penis and improving the quality of erections.\n\n + Free Shipping\n* Package 6:Six bottles for $294 ($49 per bottle) + Free Shipping\n\n](URL",
"### _Click Here to Order Wild Stallion Pro for the Best Price Available in USA!_\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL"
] |
[
6,
910,
335,
348,
367,
38
] |
[
"passage: TAGS\n#region-us \n",
"passage: ### _CLICK HERE TO BUY - “Wild Stallion Pro (United States)”_\n\nWhat Is Wild Stallion Pro?\n--------------------------\n\nWild Stallion Pro is a product that has been particularly designed to improve sexual health and vigor. It is a herbal tea made using only organic components. The old indigenous Tupi Indians are the source of this product’s recipe.\n\nWild Stallion Pro is a powder that has extracts from several natural substances and elements that can improve your and your partner's sex life. Wild Stallion Pro doesn't list any negative effects like blindness, limping, or other cardiac problems.\n\nWild Stallion Pro also has other health advantages for the body, including improved confidence and greater physical fitness and stamina. Without regard to age, Wild Stallion Pro gives the consumer a feeling of youth and vitality.\n\nHow Wild Stallion Pro Works\n---------------------------\n\nThree core elements underlie how Wild Stallion Pro operates. Recent research indicates that the endothelium, a two-cell organ, is responsible for manufacturing more cGMP and lowering levels of the PDE5 enzyme to strengthen erections.\n\nBy combining a variety of natural herbs and plants and making the mix into a powdered formulation, Wild Stallion Pro’s creators state that they address the issues with erectile dysfunction organically. The following section summarizes the effects of Wild Stallion Pro on the body.\n\n](URL### _Click Here to Buy Wild Stallion Pro With Discount!_\n\nIngredients used in Wild Stallion Pro\n-----------------------------------------\n\nWild Stallion Pro ingredients are all-natural that boosts male sexual performance. Richard Johnson has chosen the best ingredients to create this formula to help men with sexual disorders. Let’s see the complete list of the Wild Stallion Pro constituents:\n\nL-Arginine - The building block L-arginine is what the body uses to make nitric oxide. The substance opens up the blood arteries, improving oxygenation and blood flow. L-arginine enhances penile girth and length and lowers blood pressure. Erections are harder and firmer as a result of the increased blood flow to the penis.\n\nTribulus Terrestris - Tribulus Terrestris improves sex drive, arousal, orgasm, and satisfaction. Combined with L-arginine, it flushes out toxic hormones in the body and increases testosterone production. Tribulus Terrestris has androgenic effects that support athlete performance while boosting testosterone levels.\n\nHorny Goat Weed - The natural sex enhancer honey goat weed opens the AR gene, allowing it to realise its full potential. It has an ingredient called icariin, which prevents the development of a protein linked to erectile dysfunction. The blood vessels are widened by horny goat weed, allowing blood to flow to the penis and improving the quality of erections.\n\n
This is the dataset of kirisame_marisa/霧雨魔理沙/키리사메마리사 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hat, long_hair, witch_hat, bow, braid, yellow_eyes, single_braid, hat_bow, hair_bow, white_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 821.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 469.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1209 | 945.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 734.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1209 | 1.31 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kirisame_marisa_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
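As a further illustration, the loaded items can be filtered by tag and written back to disk. This sketch is not part of the original card; it assumes `item.image` is a PIL image and that `item.meta['tags']` contains tag names, as the loading example above suggests.
```python
import os

from waifuc.source import LocalSource

output_dir = 'kirisame_solo'
os.makedirs(output_dir, exist_ok=True)

# Keep only images tagged `solo` and save them as numbered PNGs (assumed PIL images).
for index, item in enumerate(LocalSource('dataset_dir')):
    tags = item.meta.get('tags', {})
    if 'solo' in tags:  # membership test works for both a tag list and a tag -> score mapping
        item.image.save(os.path.join(output_dir, f'{index}.png'))
```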
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, short_sleeves, solo, waist_apron, puffy_sleeves, smile, looking_at_viewer, broom, ribbon, dress, star_(symbol) |
| 1 | 10 |  |  |  |  |  | 1girl, black_footwear, black_headwear, black_skirt, black_vest, frills, looking_at_viewer, puffy_short_sleeves, solo, waist_apron, white_apron, white_shirt, full_body, white_socks, bangs, broom, mary_janes, buttons, grin, holding, mini-hakkero, simple_background, star_(symbol), blush |
| 2 | 14 |  |  |  |  |  | 1girl, solo, bloomers, star_(symbol), broom_riding, grin, shoes, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | short_sleeves | solo | waist_apron | puffy_sleeves | smile | looking_at_viewer | broom | ribbon | dress | star_(symbol) | black_footwear | black_headwear | black_skirt | black_vest | frills | puffy_short_sleeves | white_apron | white_shirt | full_body | white_socks | bangs | mary_janes | buttons | grin | holding | mini-hakkero | simple_background | blush | bloomers | broom_riding | shoes | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-------|:--------------|:----------------|:--------|:--------------------|:--------|:---------|:--------|:----------------|:-----------------|:-----------------|:--------------|:-------------|:---------|:----------------------|:--------------|:--------------|:------------|:--------------|:--------|:-------------|:----------|:-------|:----------|:---------------|:--------------------|:--------|:-----------|:---------------|:--------|:-------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 2 | 14 |  |  |  |  |  | X | | X | | | | | | | | X | | | | | | | | | | | | | | X | | | | | X | X | X | X |
|
CyberHarem/kirisame_marisa_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T10:21:29+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:19:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kirisame\_marisa/霧雨魔理沙/키리사메마리사 (Touhou)
==================================================
This is the dataset of kirisame\_marisa/霧雨魔理沙/키리사메마리사 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, hat, long\_hair, witch\_hat, bow, braid, yellow\_eyes, single\_braid, hat\_bow, hair\_bow, white\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
97cf4a0e6734fa39cd098592a30121d3bc5a4d3c
|
# Dataset Card for "Emotion_Recognition_4_llama2_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RikoteMaster/Emotion_Recognition_4_llama2_chat
|
[
"region:us"
] |
2023-08-17T10:22:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Text_processed", "dtype": "string"}, {"name": "Emotion", "dtype": "string"}, {"name": "Augmented", "dtype": "bool"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28688912, "num_examples": 61463}], "download_size": 8968276, "dataset_size": 28688912}}
|
2023-08-17T10:22:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Emotion_Recognition_4_llama2_chat"
More Information needed
|
[
"# Dataset Card for \"Emotion_Recognition_4_llama2_chat\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Emotion_Recognition_4_llama2_chat\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Emotion_Recognition_4_llama2_chat\"\n\nMore Information needed"
] |
a5db791503b56e5af74394cb3b50d7a0e953ef49
|
# Dataset of kaenbyou_rin/火焔猫燐/카엔뵤린 (Touhou)
This is the dataset of kaenbyou_rin/火焔猫燐/카엔뵤린 (Touhou), containing 500 images and their tags.
The core tags of this character are `red_hair, animal_ears, cat_ears, red_eyes, braid, twin_braids, bow, hair_bow, tail, cat_tail, multiple_tails, long_hair, extra_ears, ribbon, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 604.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaenbyou_rin_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 377.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaenbyou_rin_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1105 | 735.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaenbyou_rin_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 545.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaenbyou_rin_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1105 | 984.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaenbyou_rin_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kaenbyou_rin_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, bangs, black_bow, looking_at_viewer, simple_background, solo, white_background, :d, blush, breasts, green_dress, juliet_sleeves, open_mouth, animal_ear_fluff, fang, nekomata, two_tails, paw_pose |
| 1 | 18 |  |  |  |  |  | 1girl, solo, skull, smile, dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bangs | black_bow | looking_at_viewer | simple_background | solo | white_background | :d | blush | breasts | green_dress | juliet_sleeves | open_mouth | animal_ear_fluff | fang | nekomata | two_tails | paw_pose | skull | smile | dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:------------|:--------------------|:--------------------|:-------|:-------------------|:-----|:--------|:----------|:--------------|:-----------------|:-------------|:-------------------|:-------|:-----------|:------------|:-----------|:--------|:--------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 1 | 18 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | | X | X | X |
|
CyberHarem/kaenbyou_rin_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T10:22:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T12:32:41+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kaenbyou\_rin/火焔猫燐/카엔뵤린 (Touhou)
===========================================
This is the dataset of kaenbyou\_rin/火焔猫燐/카엔뵤린 (Touhou), containing 500 images and their tags.
The core tags of this character are 'red\_hair, animal\_ears, cat\_ears, red\_eyes, braid, twin\_braids, bow, hair\_bow, tail, cat\_tail, multiple\_tails, long\_hair, extra\_ears, ribbon, twintails', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
c4a8a263e36d02a750fac49e66a998ad961df029
|
# Dataset Card for "dnotes-data-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
senga-ml/dnotes-data-v1
|
[
"region:us"
] |
2023-08-17T10:46:20+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 169798787.0, "num_examples": 87}, {"name": "validation", "num_bytes": 29563876.0, "num_examples": 10}, {"name": "test", "num_bytes": 16997016.0, "num_examples": 6}], "download_size": 216159591, "dataset_size": 216359679.0}}
|
2023-08-17T10:53:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dnotes-data-v1"
More Information needed
|
[
"# Dataset Card for \"dnotes-data-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dnotes-data-v1\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dnotes-data-v1\"\n\nMore Information needed"
] |
6337df42af586925052153ebba7cdf560822353d
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
- alta: 422
- angie: 48
- arlie: 212
- b: 3116
- b2: 1138
- bay_area_girls_456: 234
- bay_area_girls_789: 154
- bea1: 223
- bea2: 63
- blind-f: 238
- blind-m: 143
- bosnak: 53
- chris: 100
- chuck: 75
- college-f: 160
- college-m: 160
- dahlia: 24
- david: 166
- dorothea: 900
- ed: 143
- edna: 19
- elizabeth: 1707
- emma: 1221
- emmas_husband: 72
- esther: 110
- hall_female: 681
- izzy-all: 4352
- jasmine-all: 664
- jeff: 87
- joan: 42
- kenneth: 2022
- lawrence: 206
- mack: 38
- madeline1-hs: 98
- madeline2-dorms: 186
- madeline3-offcampus: 348
- madeline4-postgrad: 294
- mark: 23
- melissa: 89
- melora: 211
- melvin: 128
- merri: 315
- miami-home: 171
- miami-lab: 274
- midwest_teens-f: 111
- midwest_teens-m: 83
- nancy: 44
- natural_scientist: 234
- norman: 1235
- norms-f: 491
- norms-m: 500
- pegasus: 1093
- peru-f: 382
- peru-m: 384
- phil1: 106
- phil2: 220
- phil3: 180
- physiologist: 86
- pregnancy_abortion: 226
- ringo: 16
- sally: 249
- samantha: 63
- seventh_graders: 69
- toby: 33
- tom: 27
- ucsc_women: 81
- van: 192
- vickie: 35
- vietnam_vet: 98
- vietnam_vet2: 32
- vietnam_vet3: 463
- west_coast_teens: 89
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
gustavecortal/DreamBank-annotated
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] |
2023-08-17T10:47:18+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation", "summarization", "text-classification"], "pretty_name": "DreamBank"}
|
2023-08-18T07:12:48+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-text2text-generation #task_categories-summarization #task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
- alta: 422
- angie: 48
- arlie: 212
- b: 3116
- b2: 1138
- bay_area_girls_456: 234
- bay_area_girls_789: 154
- bea1: 223
- bea2: 63
- blind-f: 238
- blind-m: 143
- bosnak: 53
- chris: 100
- chuck: 75
- college-f: 160
- college-m: 160
- dahlia: 24
- david: 166
- dorothea: 900
- ed: 143
- edna: 19
- elizabeth: 1707
- emma: 1221
- emmas_husband: 72
- esther: 110
- hall_female: 681
- izzy-all: 4352
- jasmine-all: 664
- jeff: 87
- joan: 42
- kenneth: 2022
- lawrence: 206
- mack: 38
- madeline1-hs: 98
- madeline2-dorms: 186
- madeline3-offcampus: 348
- madeline4-postgrad: 294
- mark: 23
- melissa: 89
- melora: 211
- melvin: 128
- merri: 315
- miami-home: 171
- miami-lab: 274
- midwest_teens-f: 111
- midwest_teens-m: 83
- nancy: 44
- natural_scientist: 234
- norman: 1235
- norms-f: 491
- norms-m: 500
- pegasus: 1093
- peru-f: 382
- peru-m: 384
- phil1: 106
- phil2: 220
- phil3: 180
- physiologist: 86
- pregnancy_abortion: 226
- ringo: 16
- sally: 249
- samantha: 63
- seventh_graders: 69
- toby: 33
- tom: 27
- ucsc_women: 81
- van: 192
- vickie: 35
- vietnam_vet: 98
- vietnam_vet2: 32
- vietnam_vet3: 463
- west_coast_teens: 89
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure\n\n - alta: 422\n - angie: 48\n - arlie: 212\n - b: 3116\n - b2: 1138\n - bay_area_girls_456: 234\n - bay_area_girls_789: 154\n - bea1: 223\n - bea2: 63\n - blind-f: 238\n - blind-m: 143\n - bosnak: 53\n - chris: 100\n - chuck: 75\n - college-f: 160\n - college-m: 160\n - dahlia: 24\n - david: 166\n - dorothea: 900\n - ed: 143\n - edna: 19\n - elizabeth: 1707\n - emma: 1221\n - emmas_husband: 72\n - esther: 110\n - hall_female: 681\n - izzy-all: 4352\n - jasmine-all: 664\n - jeff: 87\n - joan: 42\n - kenneth: 2022\n - lawrence: 206\n - mack: 38\n - madeline1-hs: 98\n - madeline2-dorms: 186\n - madeline3-offcampus: 348\n - madeline4-postgrad: 294\n - mark: 23\n - melissa: 89\n - melora: 211\n - melvin: 128\n - merri: 315\n - miami-home: 171\n - miami-lab: 274\n - midwest_teens-f: 111\n - midwest_teens-m: 83\n - nancy: 44\n - natural_scientist: 234\n - norman: 1235\n - norms-f: 491\n - norms-m: 500\n - pegasus: 1093\n - peru-f: 382\n - peru-m: 384\n - phil1: 106\n - phil2: 220\n - phil3: 180\n - physiologist: 86\n - pregnancy_abortion: 226\n - ringo: 16\n - sally: 249\n - samantha: 63\n - seventh_graders: 69\n - toby: 33\n - tom: 27\n - ucsc_women: 81\n - van: 192\n - vickie: 35\n - vietnam_vet: 98\n - vietnam_vet2: 32\n - vietnam_vet3: 463\n - west_coast_teens: 89",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-summarization #task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure\n\n - alta: 422\n - angie: 48\n - arlie: 212\n - b: 3116\n - b2: 1138\n - bay_area_girls_456: 234\n - bay_area_girls_789: 154\n - bea1: 223\n - bea2: 63\n - blind-f: 238\n - blind-m: 143\n - bosnak: 53\n - chris: 100\n - chuck: 75\n - college-f: 160\n - college-m: 160\n - dahlia: 24\n - david: 166\n - dorothea: 900\n - ed: 143\n - edna: 19\n - elizabeth: 1707\n - emma: 1221\n - emmas_husband: 72\n - esther: 110\n - hall_female: 681\n - izzy-all: 4352\n - jasmine-all: 664\n - jeff: 87\n - joan: 42\n - kenneth: 2022\n - lawrence: 206\n - mack: 38\n - madeline1-hs: 98\n - madeline2-dorms: 186\n - madeline3-offcampus: 348\n - madeline4-postgrad: 294\n - mark: 23\n - melissa: 89\n - melora: 211\n - melvin: 128\n - merri: 315\n - miami-home: 171\n - miami-lab: 274\n - midwest_teens-f: 111\n - midwest_teens-m: 83\n - nancy: 44\n - natural_scientist: 234\n - norman: 1235\n - norms-f: 491\n - norms-m: 500\n - pegasus: 1093\n - peru-f: 382\n - peru-m: 384\n - phil1: 106\n - phil2: 220\n - phil3: 180\n - physiologist: 86\n - pregnancy_abortion: 226\n - ringo: 16\n - sally: 249\n - samantha: 63\n - seventh_graders: 69\n - toby: 33\n - tom: 27\n - ucsc_women: 81\n - van: 192\n - vickie: 35\n - vietnam_vet: 98\n - vietnam_vet2: 32\n - vietnam_vet3: 463\n - west_coast_teens: 89",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
67,
8,
24,
32,
10,
4,
470,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-summarization #task_categories-text-classification #size_categories-10K<n<100K #language-English #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages"
] |
0a9010434c24b031bea95c34889a4e759a8c5932
|
# Dataset Card for "tamasheq_arabic_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
YassineBenlaria/tq_ar_fr
|
[
"region:us"
] |
2023-08-17T10:47:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "tq", "dtype": "string"}, {"name": "ar", "dtype": "string"}, {"name": "fr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3799473, "num_examples": 5467}, {"name": "test", "num_bytes": 433414, "num_examples": 804}], "download_size": 2361408, "dataset_size": 4232887}}
|
2023-08-24T20:18:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tamasheq_arabic_dataset"
More Information needed
|
[
"# Dataset Card for \"tamasheq_arabic_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tamasheq_arabic_dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tamasheq_arabic_dataset\"\n\nMore Information needed"
] |
604698a5c9decc53ae9e7b8a4492229665449642
|
# Dataset Card for "bert-base-uncased-refined-web-segment0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jackmin108/bert-base-uncased-refined-web-segment0
|
[
"region:us"
] |
2023-08-17T10:48:12+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 234885131268, "num_examples": 100000000}], "download_size": 10689166809, "dataset_size": 234885131268}}
|
2023-08-17T16:45:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bert-base-uncased-refined-web-segment0"
More Information needed
|
[
"# Dataset Card for \"bert-base-uncased-refined-web-segment0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bert-base-uncased-refined-web-segment0\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bert-base-uncased-refined-web-segment0\"\n\nMore Information needed"
] |
2a3a4d46a79f85954356fa3f5a4f345764a8a6c1
|
# Dataset Card for "find_word_train_100_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_train_100_eval_100
|
[
"region:us"
] |
2023-08-17T11:01:38+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23323, "num_examples": 300}, {"name": "eval_find_word", "num_bytes": 5323, "num_examples": 100}], "download_size": 16396, "dataset_size": 28646}}
|
2023-08-17T11:01:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_train_100_eval_100"
More Information needed
|
[
"# Dataset Card for \"find_word_train_100_eval_100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_train_100_eval_100\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_train_100_eval_100\"\n\nMore Information needed"
] |
0569e3c5584d90f88533b60e0a7786a3c6729d3b
|
# Dataset of hong_meiling/紅美鈴/홍메이링 (Touhou)
This is the dataset of hong_meiling/紅美鈴/홍메이링 (Touhou), containing 500 images and their tags.
The core tags of this character are `long_hair, red_hair, braid, twin_braids, hat, star_hat_ornament, hat_ornament, blue_eyes, breasts, bow, beret, hair_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 663.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hong_meiling_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 410.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hong_meiling_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1120 | 800.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hong_meiling_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 597.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hong_meiling_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1120 | 1.06 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hong_meiling_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hong_meiling_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, bangs, chinese_clothes, green_headwear, green_skirt, green_vest, looking_at_viewer, open_mouth, puffy_short_sleeves, simple_background, solo, star_(symbol), white_background, white_shirt, black_bow, medium_breasts, black_ribbon, blush, very_long_hair, black_footwear, collared_shirt, fighting_stance, neck_ribbon, standing_on_one_leg |
| 1 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, solo, star_(symbol), black_ribbon, green_headwear, simple_background, upper_body, white_background, white_shirt, green_eyes, smile, closed_mouth, neck_ribbon, black_bow, collared_shirt, parted_bangs, puffy_short_sleeves, blush, chinese_clothes, green_vest |
| 2 | 5 |  |  |  |  |  | 1girl, chinese_clothes, solo, star_(symbol), looking_at_viewer, open_mouth, puffy_short_sleeves, shirt, fighting_stance, ribbon, very_long_hair, pants, skirt_set, vest, wrist_cuffs |
| 3 | 23 |  |  |  |  |  | 1girl, solo, star_(symbol), china_dress, fighting_stance |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bangs | chinese_clothes | green_headwear | green_skirt | green_vest | looking_at_viewer | open_mouth | puffy_short_sleeves | simple_background | solo | star_(symbol) | white_background | white_shirt | black_bow | medium_breasts | black_ribbon | blush | very_long_hair | black_footwear | collared_shirt | fighting_stance | neck_ribbon | standing_on_one_leg | upper_body | green_eyes | smile | closed_mouth | parted_bangs | shirt | ribbon | pants | skirt_set | vest | wrist_cuffs | china_dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:------------------|:-----------------|:--------------|:-------------|:--------------------|:-------------|:----------------------|:--------------------|:-------|:----------------|:-------------------|:--------------|:------------|:-----------------|:---------------|:--------|:-----------------|:-----------------|:-----------------|:------------------|:--------------|:----------------------|:-------------|:-------------|:--------|:---------------|:---------------|:--------|:---------|:--------|:------------|:-------|:--------------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | | X | X | | X | X | | X | X | X | X | X | X | X | | X | X | | | X | | X | | X | X | X | X | X | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | | | | X | X | X | | X | X | | | | | | | X | | | X | | | | | | | | X | X | X | X | X | X | |
| 3 | 23 |  |  |  |  |  | X | | | | | | | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | X |
|
CyberHarem/hong_meiling_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T11:08:29+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:30:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hong\_meiling/紅美鈴/홍메이링 (Touhou)
==========================================
This is the dataset of hong\_meiling/紅美鈴/홍메이링 (Touhou), containing 500 images and their tags.
The core tags of this character are 'long\_hair, red\_hair, braid, twin\_braids, hat, star\_hat\_ornament, hat\_ornament, blue\_eyes, breasts, bow, beret, hair\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0340ac5c2d913684cbce7da9eaad5e39479f349a
|
This is an upload of `imagenet_resized/64x64` from [tensorflow datasets](https://www.tensorflow.org/datasets/catalog/imagenet_resized) (but shuffled before uploading).
The homepage of imagenet_resized is: https://patrykchrabaszcz.github.io/Imagenet32/
imagenet_resized is a derivative of imagenet (and also available to download from there): https://image-net.org/index.php
> Warning: The integer labels used are defined by the authors and do not match those from the other ImageNet datasets provided by Tensorflow datasets. See the original label list, and the labels used by this dataset. Additionally, the original authors 1-index their labels, which we convert to 0-indexed by subtracting one.
— From the tensorflow datasets [page](https://www.tensorflow.org/datasets/catalog/imagenet_resized).
The data in this dataset is of the format:
```
{
"image": Array3D(shape=(64, 64, 3), dtype="uint8"),
"label": Value(dtype="int32"),
}
```
- There are 1,281,167 samples in the train split.
- There are 50,000 samples in the validation split.
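For quick inspection, here is a minimal loading sketch using the `datasets` library. The split names `train` and `validation` are assumed from the sample counts above, and the exact Python type returned for the `image` field can depend on the output format, so it is normalized with NumPy.
```python
import numpy as np
from datasets import load_dataset

# Stream the shuffled upload so the ~1.28M-sample train split is not downloaded at once.
ds = load_dataset("sradc/imagenet_resized_64x64", split="train", streaming=True)

for sample in ds:
    image = np.asarray(sample["image"], dtype=np.uint8)  # (64, 64, 3) uint8
    label = sample["label"]                              # 0-indexed integer label
    print(image.shape, label)
    break
```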
|
sradc/imagenet_resized_64x64
|
[
"region:us"
] |
2023-08-17T11:28:22+00:00
|
{}
|
2023-08-17T13:09:02+00:00
|
[] |
[] |
TAGS
#region-us
|
This is an upload of 'imagenet_resized/64x64' from tensorflow datasets (but shuffled before uploading).
The homepage of imagenet_resized is: URL
imagenet_resized is a derivative of imagenet (and also available to download from there): URL
> Warning: The integer labels used are defined by the authors and do not match those from the other ImageNet datasets provided by Tensorflow datasets. See the original label list, and the labels used by this dataset. Additionally, the original authors 1-index their labels, which we convert to 0-indexed by subtracting one.
— From the tensorflow datasets page.
The data in this dataset is of the format:
- There are 1,281,167 samples in the train split.
- There are 50,000 samples in the validation split.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
978fe26e78ecaaf6562aa249be43c2c690484d0a
|
[arxiv](https://arxiv.org/abs/2303.13193)
# ANAKIN
ANAKIN is a dataset of mANipulated videos and mAsK annotatIoNs.
To the best of our knowledge, ANAKIN is the first real-world dataset of professionally edited video clips,
paired with source videos, edit descriptions and binary mask annotations of the edited regions.
ANAKIN consists of 1023 videos in total, including 352 edited videos from the
[VideoSham](https://github.com/adobe-research/VideoSham-dataset)
dataset plus 671 new videos collected from the Vimeo platform.
## Data Format
| Label | Description |
|----------|-------------------------------------------------------------------------------|
| video-id | Video ID |
|full* | Full length original video |
|trimmed | Short clip trimmed from `full` |
|edited| Manipulated version of `trimmed`|
|masks*| Per-frame binary masks, annotating the manipulation|
| start-time* | Trim beginning time (in seconds) |
| end-time* | Trim end time (in seconds) |
| task | Task given to the video editor |
|manipulation-type| One of the 5 manipulation types: splicing, inpainting, swap, audio, frame-level |
| editor-id | Editor ID |
*There are several subset configurations available.
The choice depends on whether you need to download full length videos and/or you only need the videos with masks available.
`start-time` and `end-time` will be returned for subset configs with full videos in them.
| config | full | masks | train/val/test |
| ---------- | ---- | ----- | -------------- |
| all | yes | maybe | 681/98/195 |
| no-full | no | maybe | 716/102/205 |
| has-masks | no | yes | 297/43/85 |
| full-masks | yes | yes | 297/43/85 |
## Example
The data can either be downloaded or [streamed](https://huggingface.co/docs/datasets/stream).
### Downloaded
```python
from datasets import load_dataset
from torchvision.io import read_video
config = 'no-full' # ['all', 'no-full', 'has-masks', 'full-masks']
dataset = load_dataset("AlexBlck/ANAKIN", config, num_proc=8)
for sample in dataset['train']: # ['train', 'validation', 'test']
trimmed_video, trimmed_audio, _ = read_video(sample['trimmed'], output_format="TCHW")
edited_video, edited_audio, _ = read_video(sample['edited'], output_format="TCHW")
masks = sample['masks']
print(sample.keys())
```
### Streamed
```python
from datasets import load_dataset
import cv2
dataset = load_dataset("AlexBlck/ANAKIN", streaming=True)
sample = next(iter(dataset['train'])) # ['train', 'validation', 'test']
cap = cv2.VideoCapture(sample['trimmed'])
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # ...
```
|
AlexBlck/ANAKIN
|
[
"task_categories:video-classification",
"task_categories:visual-question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"arxiv:2303.13193",
"region:us"
] |
2023-08-17T11:33:16+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["video-classification", "visual-question-answering"], "pretty_name": "ANAKIN: manipulated videos and mask annotations"}
|
2023-09-21T09:37:04+00:00
|
[
"2303.13193"
] |
[
"en"
] |
TAGS
#task_categories-video-classification #task_categories-visual-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #arxiv-2303.13193 #region-us
|
arxiv
ANAKIN
======
ANAKIN is a dataset of mANipulated videos and mAsK annotatIoNs.
To the best of our knowledge, ANAKIN is the first real-world dataset of professionally edited video clips,
paired with source videos, edit descriptions and binary mask annotations of the edited regions.
ANAKIN consists of 1023 videos in total, including 352 edited videos from the
VideoSham
dataset plus 671 new videos collected from the Vimeo platform.
Data Format
-----------
\*There are several subset configurations available.
The choice depends on whether you need to download full length videos and/or you only need the videos with masks available.
'start-time' and 'end-time' will be returned for subset configs with full videos in them.
Example
-------
The data can either be downloaded or streamed.
### Downloaded
### Streamed
|
[
"### Downloaded",
"### Streamed"
] |
[
"TAGS\n#task_categories-video-classification #task_categories-visual-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #arxiv-2303.13193 #region-us \n",
"### Downloaded",
"### Streamed"
] |
[
65,
4,
5
] |
[
"passage: TAGS\n#task_categories-video-classification #task_categories-visual-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #arxiv-2303.13193 #region-us \n### Downloaded### Streamed"
] |
da72c41782d07a75b7fe06ddd56a8a6ad70b757a
|
# Dataset Card for "find_word_train_1000_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_train_1000_eval_100
|
[
"region:us"
] |
2023-08-17T11:35:09+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 152196, "num_examples": 2100}, {"name": "eval_find_word", "num_bytes": 5323, "num_examples": 100}], "download_size": 3424, "dataset_size": 157519}}
|
2023-08-17T13:22:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_train_1000_eval_100"
More Information needed
|
[
"# Dataset Card for \"find_word_train_1000_eval_100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_train_1000_eval_100\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_train_1000_eval_100\"\n\nMore Information needed"
] |
0aacfba645c52665330dee7e254f20c355a20c6c
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl (Filtered) | 790 Billion |
| C4 (Filtered) | 121 Billion |
| GitHub | 31 Billion |
| Books (Filtered) | 16 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
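As a rough illustration of this structure, the sketch below iterates the 1 billion token sample with the `datasets` library. The split name `train` and the exact encoding of the `meta` field (struct vs. JSON string) are assumptions, not documented behavior.
```python
import json
from datasets import load_dataset

# Minimal sketch: stream the sample and inspect per-record metadata.
ds = load_dataset("Writer/palmyra-data-index", split="train", streaming=True)

for record in ds:
    meta = record["meta"]
    if isinstance(meta, str):  # meta may be serialized as a JSON string
        meta = json.loads(meta)
    print(meta.get("source"), meta.get("language"), record["text"][:80])
    break
```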
## Dataset Creation
The Writer Linguistics team created this dataset in order to adhere to business data and free copyright content as much as possible.
### Source Data
#### Commoncrawl
We downloaded five dumps from Commoncrawl and ran them through the official `cc_net` pipeline. We filtered out low quality data and only kept data that is distributed free of any copyright restrictions.
#### C4
C4 is downloaded from Huggingface. We filter out low-quality data and only keep data that is distributed free of any copyright restrictions.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format, so that hyperlinks, comments, and other
formatting boilerplate have been removed.
#### Gutenberg and Public domains
The PG19 subset of the Gutenberg Project and public domains books.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and remove preambles, comments, macros and bibliographies.
|
Writer/palmyra-data-index
|
[
"task_categories:text-generation",
"size_categories:n>1T",
"language:en",
"B2B",
"palmyra",
"region:us"
] |
2023-08-17T12:04:51+00:00
|
{"language": ["en"], "size_categories": ["n>1T"], "task_categories": ["text-generation"], "pretty_name": "Palmyra index 1T Sample", "tags": ["B2B", "palmyra"]}
|
2024-02-09T17:53:03+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n>1T #language-English #B2B #palmyra #region-us
|
Dataset Card for Dataset Name
=============================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
The Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
Dataset Structure
-----------------
The dataset structure is as follows:
Dataset Creation
----------------
The Writer Linguistics team created this dataset in order to adhere to business data and free copyright content as much as possible.
### Source Data
#### Commoncrawl
We downloaded five dumps from Commoncrawl and ran them through the official 'cc\_net' pipeline. We filtered out low quality data and only kept data that is distributed free of any copyright restrictions.
#### C4
C4 is downloaded from Huggingface. We filter out low-quality data and only keep data that is distributed free of any copyright restrictions.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format, so that hyperlinks, comments, and other
formatting boilerplate have been removed.
#### Gutenberg and Public domains
The PG19 subset of the Gutenberg Project and public domains books.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the 'arxiv' requester pays bucket. We only keep latex source files and remove preambles, comments, macros and bibliographies.
|
[
"### Dataset Summary\n\n\nThe Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.",
"### Languages\n\n\nPrimarily English, though the Wikipedia slice contains multiple languages.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset structure is as follows:\n\n\nDataset Creation\n----------------\n\n\nThe Writer Linguistics team created this dataset in order to adhere to business data and free copyright content as much as possible.",
"### Source Data",
"#### Commoncrawl\n\n\nWe downloaded five dumps from Commoncrawl and ran them through the official 'cc\\_net' pipeline. We filtered out low quality data and only kept data that is distributed free of any copyright restrictions.",
"#### C4\n\n\nC4 is downloaded from Huggingface. Filter out low quality data, and only keep data that is distributed free of any copyright restrictions.",
"#### GitHub\n\n\nThe raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality \n\nfiles and only keep projects that are distributed under the MIT, BSD, or Apache license.",
"#### Wikipedia\n\n\nWe use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains \n\ntext in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other \n\nformatting boilerplate has been removed.",
"#### Gutenberg and Public domains\n\n\nThe PG19 subset of the Gutenberg Project and public domains books.",
"#### ArXiv\n\n\nArXiv data is downloaded from Amazon S3 in the 'arxiv' requester pays bucket. We only keep latex source files and remove preambles, comments, macros and bibliographies."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n>1T #language-English #B2B #palmyra #region-us \n",
"### Dataset Summary\n\n\nThe Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.",
"### Languages\n\n\nPrimarily English, though the Wikipedia slice contains multiple languages.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset structure is as follows:\n\n\nDataset Creation\n----------------\n\n\nThe Writer Linguistics team created this dataset in order to adhere to business data and free copyright content as much as possible.",
"### Source Data",
"#### Commoncrawl\n\n\nWe downloaded five dumps from Commoncrawl and ran them through the official 'cc\\_net' pipeline. We filtered out low quality data and only kept data that is distributed free of any copyright restrictions.",
"#### C4\n\n\nC4 is downloaded from Huggingface. Filter out low quality data, and only keep data that is distributed free of any copyright restrictions.",
"#### GitHub\n\n\nThe raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality \n\nfiles and only keep projects that are distributed under the MIT, BSD, or Apache license.",
"#### Wikipedia\n\n\nWe use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains \n\ntext in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other \n\nformatting boilerplate has been removed.",
"#### Gutenberg and Public domains\n\n\nThe PG19 subset of the Gutenberg Project and public domains books.",
"#### ArXiv\n\n\nArXiv data is downloaded from Amazon S3 in the 'arxiv' requester pays bucket. We only keep latex source files and remove preambles, comments, macros and bibliographies."
] |
[
39,
59,
72,
4,
52,
35,
55,
65,
25,
51
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n>1T #language-English #B2B #palmyra #region-us \n### Dataset Summary\n\n\nThe Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.### Languages\n\n\nPrimarily English, though the Wikipedia slice contains multiple languages.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset structure is as follows:\n\n\nDataset Creation\n----------------\n\n\nThe Writer Linguistics team created this dataset in order to adhere to business data and free copyright content as much as possible.### Source Data#### Commoncrawl\n\n\nWe downloaded five dumps from Commoncrawl and ran them through the official 'cc\\_net' pipeline. We filtered out low quality data and only kept data that is distributed free of any copyright restrictions.#### C4\n\n\nC4 is downloaded from Huggingface. Filter out low quality data, and only keep data that is distributed free of any copyright restrictions.#### GitHub\n\n\nThe raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality \n\nfiles and only keep projects that are distributed under the MIT, BSD, or Apache license.#### Wikipedia\n\n\nWe use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains \n\ntext in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other \n\nformatting boilerplate has been removed.#### Gutenberg and Public domains\n\n\nThe PG19 subset of the Gutenberg Project and public domains books.#### ArXiv\n\n\nArXiv data is downloaded from Amazon S3 in the 'arxiv' requester pays bucket. We only keep latex source files and remove preambles, comments, macros and bibliographies."
] |
120e4dd7ddedbfd6920e6ed0fdaafebe3d542e1a
|
**Wild Stallion Pro (HIGH TESTOSTERONE ALERT⚠️):** [Wild Stallion Pro](https://sites.google.com/view/wild-stallion-pros/home) is a novel dietary supplement designed specifically for enhancing male health and well-being healthily. The team of experts who have created the formula says Wild Stallion Pro is an exotic tonic consisting of clinically verified natural ingredients that act together on the root cause of poor health of men.
## [Wild Stallion Pro – Official Website Link – Click Here](https://www.healthsupplement24x7.com/get-wild-stallion-pro)
- ➥ Product Name - {[Wild Stallion Pro](https://www.healthsupplement24x7.com/get-wild-stallion-pro)} ([Wild Stallion Pro Reviews](https://sites.google.com/view/wild-pro-stallion/home))
- ➥ Benefits - Wild Stallion Pro Supports Testosterone Production!
- ➥ Category - Male Growth Pills
- ➥ Availability – Online
- ➥ Rating: - 5.0/5.0 ⭐⭐⭐⭐⭐
## ✅[Click Here To Visit – “OFFICIAL WEBSITE”](https://www.healthsupplement24x7.com/get-wild-stallion-pro)✅
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;">✅<span style="box-sizing: border-box; color: #ffcc00;">Click Here To Visit – “OFFICIAL WEBSITE”</span>✅</strong></span></a></h2>
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;">✅<span style="box-sizing: border-box; color: #ffcc00;">Click Here To Visit – “OFFICIAL WEBSITE”</span>✅</strong></span></a></h2>
<p dir="ltr">On the surface, <a href="https://sites.google.com/view/wild-pro-stallion/home">Wild Stallion Pro</a> seems to be an effective and authentic supplement but is it? This <a href="https://wild-stallion-pro-official.clubeo.com/calendar/2023/08/16/wild-stallion-pro-2023-new-male-growth-formula-is-wild-stallion-pro-right-choice?_ga=2.73074003.1549269187.1692256917-97551096.1692256915">Wild Stallion Pro</a> review will discuss all the things related to the supplement which will give you a more detailed picture of it that goes beyond the outer surface and will assist you in deciding if the formula works or not.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZH6Zdj7dFM6MC2louGljMiU3wDwXQqb7y7EnWOXrxxXeudRb7Wi-MQZk-NYdMzbTPLqK6EzGjxldSb5bYFw3OIa3yHVl_QD8PeDM0f9pgP8W22TDPTZU1_rTQMX3LkQMD3LRVPc-q-dUEb7fe6Vo9ZALeCMIP2I67dEdqunojSRoo7zfUukhGY5h2uxo/w640-h356/Wild%20Stallion%20Pro%2010.png" alt="" width="640" height="356" border="0" data-original-height="600" data-original-width="1078" /></a></div>
<h2 dir="ltr">Introducing Wild Stallion Pro - A Novel Dietary Supplement For Male Health</h2>
<p dir="ltr"><a href="https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-high-testosterone-alert-accelerates-male-peformance-in-7-days.html">Wild Stallion Pro</a> is a natural supplement that contains science-backed ingredients that aid in improving male health. The dietary formula can help adult men in restoring their health by working on the prime factor that influences their well-being which is healthy testosterone levels. <a href="https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-dr-warning-is-wild-stallion-pro-worth-buying-what-do-customers-say.html">Wild Stallion Pro</a> drink offers a slew of male health benefits such as better energy levels, weight loss, improved cognitive functioning, promoting muscle building, and so on. The supplement is completely natural and is made in a state-of-the-art laboratory using pioneering technologies without compromising on its quality.</p>
<p dir="ltr">Does it work is one of the main queries that people had when they came to know about the <a href="https://groups.google.com/g/wild-pro-stallion/c/Ne1E-LSMa7o">Wild Stallion Pro</a> blood flow support formula. By looking at the prime factors of the supplement, it seems that Wild Stallion Pro works. But we must look deeper into the supplement to get a reliable answer to this question.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<h2>The working mechanism of Wild Stallion Pro</h2>
<p><a href="https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-dr-warning-is-wild-stallion-pro-worth-buying-what-do-customers-say.html">Wild Stallion Pro</a> ’s performance booster for men promotes blood flow from cavernosal corpora, enabling additional blood to get into the penile and offering strong and long-lasting sexual sensations. Ths solution enhances testosterone hormones’ efficiency, the primary cause of men’s sexual drive and libido because of the corpora cavernosa’s boosted blood flow.</p>
<p><a href="https://pdfhost.io/v/Kgvi6J19l_Wild_Stallion_Pro_HIGH_TESTOSTERONE_ALERT_Accelerates_Male_Peformance_In_7_Days">Wild Stallion Pro</a> alsoboosts the cells’ creation in a short perioddue to foods rich in antioxidants rich proved to assist in developing cells. Also, the Wild Stallion Pro’s offers increased energy for your system, enabling the new vigor’s strength and pleasure.</p>
<h2 dir="ltr">Ingredients Of Wild Stallion Pro: How Each Component Contributes To Male Health</h2>
<p dir="ltr"><a href="https://www.eventcreate.com/e/wild-stallion-pro-update">Wild Stallion Pro</a> all-natural supplement is created using the following ingredients:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Boron</strong></p>
</li>
</ul>
<p dir="ltr">Boron is a trace element that is known for its testosterone-boosting properties. This Wild Stallion Pro ingredient also aids in fighting against feminizing chemicals. Boron lowers the production of estrogen in your body which plays a significant role in enhancing male health. It also boosts energy production and supports brain health. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Ashwagandha</strong></p>
</li>
</ul>
<p dir="ltr">Ashwagandha is a powerful antioxidant with a wide range of health benefits. The ingredient can help with weight loss by promoting the growth of lean muscle. Ashwagandha is known for its properties which aid in reducing stress and improving cognitive health. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Tongkat Ali</strong></p>
</li>
</ul>
<p dir="ltr">Tongkat Ali is an ingredient that is well known for its ability to boost the production of testosterone in your body. The ingredient aids in lowering cortisol levels in your body which is the stress hormone. Tongkat ali also supports healthy body composition. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Fenugreek</strong></p>
</li>
</ul>
<p dir="ltr">Fenugreek is an ingredient that has a wide array of health benefits such as supporting testosterone production, boosting energy levels, and managing healthy blood sugar levels. This ingredient present in the Wild Stallion Pro formula also has many powerful antioxidants in it. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Panax ginseng</strong></p>
</li>
</ul>
<p dir="ltr">Panax ginseng is a highly powerful natural ingredient that promotes male health by increasing testosterone levels in your body. The ingredient also increases your energy levels and helps you stay active all the time. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Maca root</strong></p>
</li>
</ul>
<p dir="ltr">Maca root is an ingredient known for its energy-boosting properties. It also delivers a wide range of cognitive and mental health benefits such as reducing stress, elevating your mood, and reducing symptoms associated with depression. </p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN0PA69DfIFMzzJzVHAXoNDB_0AHZFMjpDgk5iL2AY4SpN-4VdiRYFKhlsxPybeU2Jd8NVArxTnNPD1Cba83tnID-bcssKu6Ms1-LxM8AG_vQ9VeF4AaB7ylX9hp4-I3JpqRur7753U5JI-9eylJv2t6BD5OgU6uj_5tVmkgjHzqHHluXghtfG2Ll7_G4/w640-h238/Wild%20Stallion%20Pro%207.jpg" alt="" width="640" height="238" border="0" data-original-height="714" data-original-width="1920" /></a></div>
<h2 dir="ltr">Benefits Of Wild Stallion Pro: What The Pills Provides You With</h2>
<p dir="ltr"><a href="https://sketchfab.com/3d-models/wild-stallion-pro-reviews-bbcd96e981b0482dbbb4d5f50b4257e0">Wild Stallion Pro</a> nutritional formula offers a wide array of male health benefits to its users and some of them are discussed below:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Supports testosterone production:</strong> One of the prime benefits that Wild Stallion Pro powder will deliver to its users is increasing the production of testosterone in their bodies. The majority of the ingredients of the supplement are testosterone boosters and they also improve male health. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Increases energy levels:</strong> Wild Stallion Pro tonic also increases your energy levels and boosts your stamina. This will aid you in staying active and energetic all the time. Wild Stallion Pro powder also increases your strength which is an essential thing for a healthy male body. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Promotes healthy weight loss:</strong> Wild Stallion Pro formula can help men in losing the extra fat in their bodies and also promotes the growth of lean muscle. The ingredients of the supplement might also help in building muscles. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Fights against feminizing of the male body:</strong> Wild Stallion Pro ingredients are efficient in working on the main cause of poor male health which is the feminizing of their bodies and lack of masculinity. The supplement fights against chemicals that feminize your body. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Support sharper mind and better cognitive health: </strong>Besides providing physical health benefits, Wild Stallion Pro drink also delivers mental and cognitive health benefits. The ingredients of the formula sharpen your mind and promote brain health.</p>
</li>
</ul>
<h2 dir="ltr">Results And Longevity: When Can You Expect To See Effects From Wild Stallion Pro?</h2>
<p dir="ltr">Wild Stallion Pro works healthily and naturally to enhance the production of male hormones in your body. Therefore, the average time needed by the supplement to give you effective results is three months. This may vary from person to person.</p>
<p dir="ltr">Nevertheless, the manufacturer of the <a href="https://devfolio.co/@WildStallion">Wild Stallion Pro</a> energy-boosting supplement says that the majority of the users will be able to see changes in their overall body within the first few weeks of incorporating the Wild Stallion Pro drink into their daily routine. The manufacturer says that the results that you receive from Wild Stallion Pro after using it continuously for a few months will last a few years.</p>
<h2 dir="ltr">Where To Purchase Wild Stallion Pro?</h2>
<p dir="ltr">Wild Stallion Pro blood flow support supplement is now available on its official website at exclusive discount prices. Ordering the supplement on its official Wild Stallion Pro website is a simple process. The first thing you need to do is to choose a package from the three that are available and then click on the ‘add to cart’ button. After you press the button, you will be directed to an order summary page.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD9WM3rhJ3Zlj6lS2P8yLkbds6glgXH0jJZiqAC5Ike28GjD0P8BOFeZDGk-Ov8KZkANYiepVOS6vnFBC0iwyRurIUk9gX1c6GM_bsNxRnj8VaZBAybWvBHe70S4f_FcZkRnEleFw_ViPB75-pI2TbutbEPMMKUqc7s46m1ftNFuh-jj_yJ6z4pRHo9h0/w640-h452/Screenshot%20(1223).png" alt="" width="640" height="452" border="0" data-original-height="759" data-original-width="1077" /></a></div>
<h3 dir="ltr">Wild Stallion Pro Pricing!</h3>
<p dir="ltr">The price details of the Wild Stallion Pro blood flow support supplement are given below:</p>
<ul>
<li dir="ltr">
<p dir="ltr">Basic package: The basic package of <a href="https://www.ivoox.com/wild-stallion-pro-new-2023-does-it-work-audios-mp3_rf_114451503_1.html">Wild Stallion Pro</a> contains one bottle of the supplement and the price is $69</p>
</li>
<li dir="ltr">
<p dir="ltr">Popular package: The popular package of Wild Stallion Pro contains three bottles of the supplement and the price is $177 ($59 per bottle)</p>
</li>
<li dir="ltr">
<p dir="ltr">Best value package: The best value package of Wild Stallion Pro contains six bottles of the supplement and the price is $234 ($39 per bottle)</p>
</li>
</ul>
<h2 dir="ltr">Money-Back Guarantee: Ensuring Risk-Free Purchase And Customer Satisfaction</h2>
<p dir="ltr">Each package of Wild Stallion Pro male health booster is backed by a 180-day money-back guarantee. Therefore, if you are unsatisfied with the results that you received from the tonic, then you can contact the Wild Stallion Pro manufacturer and request a full refund.</p>
<p dir="ltr">You can contact the manufacturer of Wild Stallion Pro at [email protected]. Remember that the shipping charge isn’t included in the money-back guarantee. Wild Stallion Pro bottles that are bought from the official website of the supplement are only eligible for the money-back guarantee.</p>
<h2 dir="ltr">Wild Stallion Pro Reviews - Is It Worth Trying?</h2>
<p dir="ltr">Based on my in-depth analysis of the details in this Wild Stallion Pro review, it is quite evident that the Wild Stallion Pro supplement is a legit solution that can help men in restoring their health and activeness. It has powerful nutrients in it which fight against chemicals that feminize a man’s body and also boost your testosterone levels.</p>
<h2 style="text-align: center;"><a href="https://www.healthsupplement24x7.com/get-wild-stallion-pro" target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<p>These two factors will enhance the overall well-being of men and promote optimal body functioning. <a href="https://wildstallionpro1.bandcamp.com/track/wild-stallion-pro-new-2023-does-it-work-or-just-scam">Wild Stallion Pro</a> dietary supplement contains zero harmful substances and is 100% natural, vegan-friendly, non-GMO, and non-habit-forming. This makes the supplement safe.</p>
<h3>READ MORE ON OFFICIAL WEBSITE:</h3>
<p><a href="https://wild-stallion-pro-official.clubeo.com/calendar/2023/08/16/wild-stallion-pro-2023-new-male-growth-formula-is-wild-stallion-pro-right-choice?_ga=2.73074003.1549269187.1692256917-97551096.1692256915">https://wild-stallion-pro-official.clubeo.com/calendar/2023/08/16/wild-stallion-pro-2023-new-male-growth-formula-is-wild-stallion-pro-right-choice</a></p>
<p><a href="https://wildstallionpro1.bandcamp.com/track/wild-stallion-pro-new-2023-does-it-work-or-just-scam">https://wildstallionpro1.bandcamp.com/track/wild-stallion-pro-new-2023-does-it-work-or-just-scam</a></p>
<p><a href="https://sites.google.com/view/wild-stallion-pros/home">https://sites.google.com/view/wild-stallion-pros/home</a></p>
<p><a href="https://www.ivoox.com/wild-stallion-pro-new-2023-does-it-work-audios-mp3_rf_114451503_1.html">https://www.ivoox.com/wild-stallion-pro-new-2023-does-it-work-audios-mp3_rf_114451503_1.html</a></p>
<p><a href="https://sketchfab.com/3d-models/wild-stallion-pro-reviews-bbcd96e981b0482dbbb4d5f50b4257e0">https://sketchfab.com/3d-models/wild-stallion-pro-reviews-bbcd96e981b0482dbbb4d5f50b4257e0</a></p>
<p><a href="https://www.eventcreate.com/e/wild-stallion-pro-update">https://www.eventcreate.com/e/wild-stallion-pro-update</a></p>
<p><a href="https://pdfhost.io/v/Kgvi6J19l_Wild_Stallion_Pro_HIGH_TESTOSTERONE_ALERT_Accelerates_Male_Peformance_In_7_Days">https://pdfhost.io/v/Kgvi6J19l_Wild_Stallion_Pro_HIGH_TESTOSTERONE_ALERT_Accelerates_Male_Peformance_In_7_Days</a></p>
<p><a href="https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-high-testosterone-alert-accelerates-male-peformance-in-7-days.html">https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-high-testosterone-alert-accelerates-male-peformance-in-7-days.html</a></p>
<p><a href="https://sites.google.com/view/wild-pro-stallion/home">https://sites.google.com/view/wild-pro-stallion/home</a></p>
<p><a href="https://groups.google.com/g/wild-pro-stallion/c/Ne1E-LSMa7o">https://groups.google.com/g/wild-pro-stallion/c/Ne1E-LSMa7o</a></p>
<p><a href="https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-dr-warning-is-wild-stallion-pro-worth-buying-what-do-customers-say.html">https://wild-stallion-pro-official.clubeo.com/page/wild-stallion-pro-dr-warning-is-wild-stallion-pro-worth-buying-what-do-customers-say.html</a></p>
|
WildStallionProReviews/Wild-Stallion-Pro-Official-Website
|
[
"region:us"
] |
2023-08-17T12:06:56+00:00
|
{}
|
2023-08-17T12:14:06+00:00
|
[] |
[] |
TAGS
#region-us
|
<p><strong>Wild Stallion Pro (HIGH TESTOSTERONE ALERT️):</strong> <a href="URL Stallion Pro</a> is a novel dietary supplement designed specifically for enhancing male health and well-being healthily. The team of experts who have created the formula says Wild Stallion Pro is an exotic tonic consisting of clinically verified natural ingredients that act together on the root cause of poor health of men.</p>
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="URL target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"><span style="box-sizing: border-box; color: #ffd966;">Wild Stallion Pro – Official Website Link – Click Here</span></strong></span></a></h2>
<p style="background-color: white; box-sizing: border-box; color: black; font-family: 'Times New Roman'; font-size: medium; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: 400; letter-spacing: normal; margin: 0px 0px 10px; padding: 0px; text-align: left; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"> <span style="box-sizing: border-box; color: #993300;">Product Name -</span> <span style="box-sizing: border-box; color: red;">{<a href="URL target="_blank">Wild Stallion Pro</a>} (<a href="URL target="_blank">Wild Stallion Pro Reviews</a>)</span><br style="box-sizing: border-box;" /> <span style="box-sizing: border-box; color: green;">Benefits - Wild Stallion Pro Supports Testosterone Production !</span><br style="box-sizing: border-box;" /> <span style="box-sizing: border-box; color: olive;">Category -</span></strong><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"> Male Growth Pills</strong><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"><br style="box-sizing: border-box;" /> <span style="box-sizing: border-box; color: purple;">Availability –</span> Online<br style="box-sizing: border-box;" /> <span style="box-sizing: border-box; color: navy;">Rating: -</span> <span style="box-sizing: border-box; color: red;">5.0/5.0</span> ⭐⭐⭐⭐⭐</strong></p>
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="URL target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"><span style="box-sizing: border-box; color: #ffcc00;">Click Here To Visit – “OFFICIAL WEBSITE”</span></strong></span></a></h2>
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="URL target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"><span style="box-sizing: border-box; color: #ffcc00;">Click Here To Visit – “OFFICIAL WEBSITE”</span></strong></span></a></h2>
<h2 style="background-color: white; box-sizing: border-box; color: black; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 1.5em; font-style: normal; font-variant-caps: normal; font-variant-ligatures: normal; font-weight: bold; letter-spacing: normal; line-height: 1.1; margin: 10px 0px; padding: 0px; text-align: start; text-decoration-color: initial; text-decoration-style: initial; text-decoration-thickness: initial; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;"><a href="URL target="_blank"><span style="background-color: red; box-sizing: border-box;"><strong style="box-sizing: border-box; font-style: normal; font-weight: bold;"><span style="box-sizing: border-box; color: #ffcc00;">Click Here To Visit – “OFFICIAL WEBSITE”</span></strong></span></a></h2>
<p dir="ltr">On the surface, <a href="URL Stallion Pro</a> seems to be an effective and authentic supplement but is it? This <a href="URL Stallion Pro</a> review will discuss all the things related to the supplement which will give you a more detailed picture of it that goes beyond the outer surface and will assist you in deciding if the formula works or not.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="356" border="0" data-original-height="600" data-original-width="1078" /></a></div>
<h2 dir="ltr">Introducing Wild Stallion Pro - A Novel Dietary Supplement For Male Health</h2>
<p dir="ltr"><a href="URL Stallion Pro</a> is a natural supplement that contains science-backed ingredients that aid in improving male health. The dietary formula can help adult men in restoring their health by working on the prime factor that influences their well-being which is healthy testosterone levels. <a href="URL Stallion Pro</a> drink offers a slew of male health benefits such as better energy levels, weight loss, improved cognitive functioning, promoting muscle building, and so on. The supplement is completely natural and is made in a state-of-the-art laboratory using pioneering technologies without compromising on its quality.</p>
<p dir="ltr">Does it work is one of the main queries that people had when they came to know about the <a href="URL Stallion Pro</a> blood flow support formula. By looking at the prime factors of the supplement, it seems that Wild Stallion Pro works. But we must look deeper into the supplement to get a reliable answer to this question.</p>
<h2 style="text-align: center;"><a href="URL target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<h2>The working mechanism of Wild Stallion Pro</h2>
<p><a href="URL Stallion Pro</a> ’s performance booster for men promotes blood flow from cavernosal corpora, enabling additional blood to get into the penile and offering strong and long-lasting sexual sensations. Ths solution enhances testosterone hormones’ efficiency, the primary cause of men’s sexual drive and libido because of the corpora cavernosa’s boosted blood flow.</p>
<p><a href="URL Stallion Pro</a> alsoboosts the cells’ creation in a short perioddue to foods rich in antioxidants rich proved to assist in developing cells. Also, the Wild Stallion Pro’s offers increased energy for your system, enabling the new vigor’s strength and pleasure.</p>
<h2 dir="ltr">Ingredients Of Wild Stallion Pro: How Each Component Contributes To Male Health</h2>
<p dir="ltr"><a href="URL Stallion Pro</a> all-natural supplement is created using the following ingredients:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Boron</strong></p>
</li>
</ul>
<p dir="ltr">Boron is a trace element that is known for its testosterone-boosting properties. This Wild Stallion Pro ingredient also aids in fighting against feminizing chemicals. Boron lowers the production of estrogen in your body which plays a significant role in enhancing male health. It also boosts energy production and supports brain health. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Ashwagandha</strong></p>
</li>
</ul>
<p dir="ltr">Ashwagandha is a powerful antioxidant with a wide range of health benefits. The ingredient can help with weight loss by promoting the growth of lean muscle. Ashwagandha is known for its properties which aid in reducing stress and improving cognitive health. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Tongkat Ali</strong></p>
</li>
</ul>
<p dir="ltr">Tongkat Ali is an ingredient that is well known for its ability to boost the production of testosterone in your body. The ingredient aids in lowering cortisol levels in your body which is the stress hormone. Tongkat ali also supports healthy body composition. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Fenugreek</strong></p>
</li>
</ul>
<p dir="ltr">Fenugreek is an ingredient that has a wide array of health benefits such as supporting testosterone production, boosting energy levels, and managing healthy blood sugar levels. This ingredient present in the Wild Stallion Pro formula also has many powerful antioxidants in it. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Panax ginseng</strong></p>
</li>
</ul>
<p dir="ltr">Panax ginseng is a highly powerful natural ingredient that promotes male health by increasing testosterone levels in your body. The ingredient also increases your energy levels and helps you stay active all the time. </p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Maca root</strong></p>
</li>
</ul>
<p dir="ltr">Maca root is an ingredient known for its energy-boosting properties. It also delivers a wide range of cognitive and mental health benefits such as reducing stress, elevating your mood, and reducing symptoms associated with depression. </p>
<h2 style="text-align: center;"><a href="URL target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="238" border="0" data-original-height="714" data-original-width="1920" /></a></div>
<h2 dir="ltr">Benefits Of Wild Stallion Pro: What The Pills Provides You With</h2>
<p dir="ltr"><a href="URL Stallion Pro</a> nutritional formula offers a wide array of male health benefits to its users and some of them are discussed below:</p>
<ul>
<li dir="ltr">
<p dir="ltr"><strong>Supports testosterone production:</strong> One of the prime benefits that Wild Stallion Pro powder will deliver to its users is increasing the production of testosterone in their bodies. The majority of the ingredients of the supplement are testosterone boosters and they also improve male health. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Increases energy levels:</strong> Wild Stallion Pro tonic also increases your energy levels and boosts your stamina. This will aid you in staying active and energetic all the time. Wild Stallion Pro powder also increases your strength which is an essential thing for a healthy male body. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Promotes healthy weight loss:</strong> Wild Stallion Pro formula can help men in losing the extra fat in their bodies and also promotes the growth of lean muscle. The ingredients of the supplement might also help in building muscles. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Fights against feminizing of the male body:</strong> Wild Stallion Pro ingredients are efficient in working on the main cause of poor male health which is the feminizing of their bodies and lack of masculinity. The supplement fights against chemicals that feminize your body. </p>
</li>
<li dir="ltr">
<p dir="ltr"><strong>Support sharper mind and better cognitive health: </strong>Besides providing physical health benefits, Wild Stallion Pro drink also delivers mental and cognitive health benefits. The ingredients of the formula sharpen your mind and promote brain health.</p>
</li>
</ul>
<h2 dir="ltr">Results And Longevity: When Can You Expect To See Effects From Wild Stallion Pro?</h2>
<p dir="ltr">Wild Stallion Pro works healthily and naturally to enhance the production of male hormones in your body. Therefore, the average time needed by the supplement to give you effective results is three months. This may vary from person to person.</p>
<p dir="ltr">Nevertheless, the manufacturer of the <a href="URL Stallion Pro</a> energy-boosting supplement says that the majority of the users will be able to see changes in their overall body within the first few weeks of incorporating the Wild Stallion Pro drink into their daily routine. The manufacturer says that the results that you receive from Wild Stallion Pro after using it continuously for a few months will last a few years.</p>
<h2 dir="ltr">Where To Purchase Wild Stallion Pro?</h2>
<p dir="ltr">Wild Stallion Pro blood flow support supplement is now available on its official website at exclusive discount prices. Ordering the supplement on its official Wild Stallion Pro website is a simple process. The first thing you need to do is to choose a package from the three that are available and then click on the ‘add to cart’ button. After you press the button, you will be directed to an order summary page.</p>
<h2 style="text-align: center;"><a href="URL target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL target="_blank"><img src="URL alt="" width="640" height="452" border="0" data-original-height="759" data-original-width="1077" /></a></div>
<h3 dir="ltr">Wild Stallion Pro Pricing!</h3>
<p dir="ltr">The price details of the Wild Stallion Pro blood flow support supplement are given below:</p>
<ul>
<li dir="ltr">
<p dir="ltr">Basic package: The basic package of <a href="URL Stallion Pro</a> contains one bottle of the supplement and the price is $69</p>
</li>
<li dir="ltr">
<p dir="ltr">Popular package: The popular package of Wild Stallion Pro contains three bottles of the supplement and the price is $177 ($59 per bottle)</p>
</li>
<li dir="ltr">
<p dir="ltr">Best value package: The best value package of Wild Stallion Pro contains six bottles of the supplement and the price is $234 ($39 per bottle)</p>
</li>
</ul>
<h2 dir="ltr">Money-Back Guarantee: Ensuring Risk-Free Purchase And Customer Satisfaction</h2>
<p dir="ltr">Each package of Wild Stallion Pro male health booster is backed by a 180-day money-back guarantee. Therefore, if you are unsatisfied with the results that you received from the tonic, then you can contact the Wild Stallion Pro manufacturer and request a full refund.</p>
<p dir="ltr">You can contact the manufacturer of Wild Stallion Pro at support@URL. Remember that the shipping charge isn’t included in the money-back guarantee. Wild Stallion Pro bottles that are bought from the official website of the supplement are only eligible for the money-back guarantee.</p>
<h2 dir="ltr">Wild Stallion Pro Reviews - Is It Worth Trying?</h2>
<p dir="ltr">Based on my in-depth analysis of the details in this Wild Stallion Pro review, it is quite evident that the Wild Stallion Pro supplement is a legit solution that can help men in restoring their health and activeness. It has powerful nutrients in it which fight against chemicals that feminize a man’s body and also boost your testosterone levels.</p>
<h2 style="text-align: center;"><a href="URL target="_blank"><span style="background-color: red; color: #f1c232;">(THE BIG BILLION DAYS SALE) - THE MOST SELLING "WILD STALLION PRO" IS HERE ORDER NOW</span></a></h2>
<p>These two factors will enhance the overall well-being of men and promote optimal body functioning. <a href="URL Stallion Pro</a> dietary supplement contains zero harmful substances and is 100% natural, vegan-friendly, non-GMO, and non-habit-forming. This makes the supplement safe.</p>
<h3>READ MORE ON OFFICIAL WEBSITE:</h3>
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
<p><a href="URL/URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
b5ebba44f7eb0cfc5130a1a8ca77e3e740fbad8d
|
# BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series https://nascetti-a.github.io/BioMasster/
The objective of this repository is to provide a deep-learning-ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from
the European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, which are designed to collect a rich array of Earth observation data.
### Reference data:
* Reference AGB measurements were collected using LiDAR (Light Detection and Ranging) calibrated with in-situ measurements.
* Total of 13,000 patches, each patch covering a 2,560 by 2,560 meter area.
### Feature data:
* Sentinel-1 SAR and Sentinel-2 MSI data
* 12 months of data (1 image per month)
* Total 310,000 patches
### Data Specifications:

### Data Size:
```
dataset | # files | size
--------------------------------------
train_features | 189078 | 215.9GB
test_features | 63348 | 73.0GB
train_agbm | 8689 | 2.1GB
```
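
As a rough illustration of how a single training sample could be assembled, the sketch below stacks the twelve monthly Sentinel-1/Sentinel-2 patches of one chip together with its AGB target. The directory layout and file naming used here (`{chip_id}_S1_{month}.tif`, `{chip_id}_agbm.tif`) are assumptions for illustration only; the actual conventions are described in the dataset documentation.

```
import numpy as np
import rasterio  # common reader for GeoTIFF patches


def load_chip(chip_id, feature_dir, target_dir):
    # Hypothetical naming: one GeoTIFF per chip/sensor/month and one AGB raster per chip.
    s1 = [rasterio.open(f"{feature_dir}/{chip_id}_S1_{m:02d}.tif").read() for m in range(12)]
    s2 = [rasterio.open(f"{feature_dir}/{chip_id}_S2_{m:02d}.tif").read() for m in range(12)]
    agbm = rasterio.open(f"{target_dir}/{chip_id}_agbm.tif").read(1)
    # Returns (12, bands, H, W) feature stacks and an (H, W) biomass target.
    return np.stack(s1), np.stack(s2), agbm
```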
## Citation : under review
|
nascetti-a/BioMassters
|
[
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-4.0",
"climate",
"doi:10.57967/hf/1009",
"region:us"
] |
2023-08-17T12:09:56+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "pretty_name": "BioMassters", "tags": ["climate"]}
|
2023-10-27T21:18:39+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #doi-10.57967/hf/1009 #region-us
|
# BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series URL
The objective of this repository is to provide a deep-learning-ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from
the European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, which are designed to collect a rich array of Earth observation data.
### Reference data:
* Reference AGB measurements were collected using LiDAR (Light Detection and Ranging) calibrated with in-situ measurements.
* Total of 13,000 patches, each patch covering a 2,560 by 2,560 meter area.
### Feature data:
* Sentinel-1 SAR and Sentinel-2 MSI data
* 12 months of data (1 image per month)
* Total 310,000 patches
### Data Specifications:
!img
### Data Size:
: under review
|
[
"# BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series URL\n\nThe objective of this repository is to provide a deep learning ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from\nthe European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, designed to collect a rich array of Earth observation data",
"### Reference data: \n* Reference AGB measurements were collected using LiDAR (Light Detection and Ranging) calibrated with in-situ measurements.\n* Total 13000 patches, each patch covering 2,560 by 2,560 meter area.",
"### Feature data: \n* Sentinel-1 SAR and Sentinel-2 MSI data\n* 12 months of data (1 image per month)\n* Total 310,000 patches",
"### Data Specifications:\n!img",
"### Data Size:\n\n\n\n: under review"
] |
[
"TAGS\n#size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #doi-10.57967/hf/1009 #region-us \n",
"# BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series URL\n\nThe objective of this repository is to provide a deep learning ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from\nthe European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, designed to collect a rich array of Earth observation data",
"### Reference data: \n* Reference AGB measurements were collected using LiDAR (Light Detection and Ranging) calibrated with in-situ measurements.\n* Total 13000 patches, each patch covering 2,560 by 2,560 meter area.",
"### Feature data: \n* Sentinel-1 SAR and Sentinel-2 MSI data\n* 12 months of data (1 image per month)\n* Total 310,000 patches",
"### Data Specifications:\n!img",
"### Data Size:\n\n\n\n: under review"
] |
[
47,
109,
54,
32,
9,
8
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-English #license-cc-by-4.0 #climate #doi-10.57967/hf/1009 #region-us \n# BioMassters: A Benchmark Dataset for Forest Biomass Estimation using Multi-modal Satellite Time-series URL\n\nThe objective of this repository is to provide a deep learning ready dataset to predict yearly Above Ground Biomass (AGB) for Finnish forests using multi-temporal satellite imagery from\nthe European Space Agency and European Commission's joint Sentinel-1 and Sentinel-2 satellite missions, designed to collect a rich array of Earth observation data### Reference data: \n* Reference AGB measurements were collected using LiDAR (Light Detection and Ranging) calibrated with in-situ measurements.\n* Total 13000 patches, each patch covering 2,560 by 2,560 meter area.### Feature data: \n* Sentinel-1 SAR and Sentinel-2 MSI data\n* 12 months of data (1 image per month)\n* Total 310,000 patches### Data Specifications:\n!img### Data Size:\n\n\n\n: under review"
] |
5e89392376049f4d589ea339ed64468310ed5c3f
|
# Dataset Card for DocLayNet v1.1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and of an upper bound on achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
## Dataset Structure
This dataset is structured differently from the other repository [ds4sd/DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet), as this one includes the content (PDF cells) of the detections, and abandons the COCO format.
* `image`: page PIL image.
* `bboxes`: a list of layout bounding boxes.
* `category_id`: a list of class ids corresponding to the bounding boxes.
* `segmentation`: a list of layout segmentation polygons.
* `pdf_cells`: a list of lists corresponding to `bbox`. Each list contains the PDF cells (content) inside the bbox.
* `metadata`: page- and document-level metadata.
Bounding boxes classes / categories:
```
1: Caption
2: Footnote
3: Formula
4: List-item
5: Page-footer
6: Page-header
7: Picture
8: Section-header
9: Table
10: Text
11: Title
```
The `["metadata"]["doc_category"]` field uses one of the following constants:
```
* financial_reports,
* scientific_articles,
* laws_and_regulations,
* government_tenders,
* manuals,
* patents
```
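
As a quick way to see the fields described above, one record can be pulled with the `datasets` library. This is a minimal sketch, assuming the splits are named as listed in the next section and that streaming access is available for this repository.

```
from datasets import load_dataset

# Stream a single record instead of downloading a full split.
ds = load_dataset("ds4sd/DocLayNet-v1.1", split="val", streaming=True)
sample = next(iter(ds))

print(sample["metadata"]["doc_category"])                  # one of the constants above
print(len(sample["bboxes"]), len(sample["category_id"]))   # one class id per bounding box
# pdf_cells[i] holds the PDF text cells that fall inside bboxes[i]
first_cells = sample["pdf_cells"][0] if sample["pdf_cells"] else []
print([cell["text"] for cell in first_cells][:3])
```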
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [[email protected]](mailto:[email protected]).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.353904},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
|
ds4sd/DocLayNet-v1.1
|
[
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
] |
2023-08-17T12:10:53+00:00
|
{"annotations_creators": ["crowdsourced"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection", "image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet", "tags": ["layout-segmentation", "COCO", "document-understanding", "PDF"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "bboxes", "sequence": {"sequence": "float64"}}, {"name": "category_id", "sequence": "int64"}, {"name": "segmentation", "sequence": {"sequence": {"sequence": "float64"}}}, {"name": "area", "sequence": "float64"}, {"name": "pdf_cells", "list": {"list": [{"name": "bbox", "sequence": "float64"}, {"name": "font", "struct": [{"name": "color", "sequence": "int64"}, {"name": "name", "dtype": "string"}, {"name": "size", "dtype": "float64"}]}, {"name": "text", "dtype": "string"}]}}, {"name": "metadata", "struct": [{"name": "coco_height", "dtype": "int64"}, {"name": "coco_width", "dtype": "int64"}, {"name": "collection", "dtype": "string"}, {"name": "doc_category", "dtype": "string"}, {"name": "image_id", "dtype": "int64"}, {"name": "num_pages", "dtype": "int64"}, {"name": "original_filename", "dtype": "string"}, {"name": "original_height", "dtype": "float64"}, {"name": "original_width", "dtype": "float64"}, {"name": "page_hash", "dtype": "string"}, {"name": "page_no", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 28172005254.125, "num_examples": 69375}, {"name": "test", "num_bytes": 1996179229.125, "num_examples": 4999}, {"name": "val", "num_bytes": 2493896901.875, "num_examples": 6489}], "download_size": 7766115331, "dataset_size": 32662081385.125}}
|
2023-09-01T08:58:52+00:00
|
[] |
[] |
TAGS
#task_categories-object-detection #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #license-other #layout-segmentation #COCO #document-understanding #PDF #region-us
|
# Dataset Card for DocLayNet v1.1
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Annotations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing estimation of annotation uncertainty and of an upper bound on achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
## Dataset Structure
This dataset is structured differently from the other repository ds4sd/DocLayNet, as this one includes the content (PDF cells) of the detections, and abandons the COCO format.
* 'image': page PIL image.
* 'bboxes': a list of layout bounding boxes.
* 'category_id': a list of class ids corresponding to the bounding boxes.
* 'segmentation': a list of layout segmentation polygons.
* 'pdf_cells': a list of lists corresponding to 'bbox'. Each list contains the PDF cells (content) inside the bbox.
* 'metadata': page- and document-level metadata.
Bounding boxes classes / categories:
The '["metadata"]["doc_category"]' field uses one of the following constants:
### Data Splits
The dataset provides three splits
- 'train'
- 'val'
- 'test'
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used for training the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the Deep Search team at IBM Research.
You can contact us at deepsearch-core@URL.
Curators:
- Christoph Auer, @cau-git
- Michele Dolfi, @dolfim-ibm
- Ahmed Nassar, @nassarofficial
- Peter Staar, @PeterStaar-IBM
### Licensing Information
License: CDLA-Permissive-1.0
|
[
"# Dataset Card for DocLayNet v1.1",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.",
"## Dataset Structure\n\nThis dataset is structured differently from the other repository ds4sd/DocLayNet, as this one includes the content (PDF cells) of the detections, and abandons the COCO format.\n\n* 'image': page PIL image.\n* 'bboxes': a list of layout bounding boxes.\n* 'category_id': a list of class ids corresponding to the bounding boxes.\n* 'segmentation': a list of layout segmentation polygons.\n* 'pdf_cells': a list of lists corresponding to 'bbox'. Each list contains the PDF cells (content) inside the bbox.\n* 'metadata': page and document metadetails.\n\nBounding boxes classes / categories:\n\n\n\n\nThe '[\"metadata\"][\"doc_category\"]' field uses one of the following constants:",
"### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'",
"## Dataset Creation",
"### Annotations",
"#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.",
"#### Who are the annotators?\n\nAnnotations are crowdsourced.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM",
"### Licensing Information\n\nLicense: CDLA-Permissive-1.0"
] |
[
"TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #license-other #layout-segmentation #COCO #document-understanding #PDF #region-us \n",
"# Dataset Card for DocLayNet v1.1",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.",
"## Dataset Structure\n\nThis dataset is structured differently from the other repository ds4sd/DocLayNet, as this one includes the content (PDF cells) of the detections, and abandons the COCO format.\n\n* 'image': page PIL image.\n* 'bboxes': a list of layout bounding boxes.\n* 'category_id': a list of class ids corresponding to the bounding boxes.\n* 'segmentation': a list of layout segmentation polygons.\n* 'pdf_cells': a list of lists corresponding to 'bbox'. Each list contains the PDF cells (content) inside the bbox.\n* 'metadata': page and document metadetails.\n\nBounding boxes classes / categories:\n\n\n\n\nThe '[\"metadata\"][\"doc_category\"]' field uses one of the following constants:",
"### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'",
"## Dataset Creation",
"### Annotations",
"#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.",
"#### Who are the annotators?\n\nAnnotations are crowdsourced.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM",
"### Licensing Information\n\nLicense: CDLA-Permissive-1.0"
] |
[
88,
11,
74,
18,
296,
209,
25,
5,
5,
36,
17,
5,
80,
16
] |
[
"passage: TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #license-other #layout-segmentation #COCO #document-understanding #PDF #region-us \n# Dataset Card for DocLayNet v1.1## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets."
] |
357e69c7dfacd2374f4e02527f815608325d9c09
|
# Dataset Card for "viet_news_split_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thanhduycao/viet_news_split_1
|
[
"region:us"
] |
2023-08-17T12:32:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9432341121.587133, "num_examples": 2656944}, {"name": "validation", "num_bytes": 95276818.41286662, "num_examples": 26838}], "download_size": 5153961412, "dataset_size": 9527617940.0}}
|
2023-08-17T13:07:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "viet_news_split_1"
More Information needed
|
[
"# Dataset Card for \"viet_news_split_1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"viet_news_split_1\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"viet_news_split_1\"\n\nMore Information needed"
] |
3ab73145cfe2d55cc152a343ea705a52620567c9
|
# Dataset Card for "perigon-200k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
judy93536/perigon-200k
|
[
"region:us"
] |
2023-08-17T12:32:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 216299930.0087607, "num_examples": 176584}, {"name": "test", "num_bytes": 38170719.9912393, "num_examples": 31162}], "download_size": 129060894, "dataset_size": 254470650.0}}
|
2023-08-17T12:35:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "perigon-200k"
More Information needed
|
[
"# Dataset Card for \"perigon-200k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"perigon-200k\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"perigon-200k\"\n\nMore Information needed"
] |
3859ef418276d2d2e8b44596f1cf01abf40c76b2
|
You can load this dataset as follows (if you only need the premise, hypothesis, and label columns):
```python
from datasets import load_dataset, Dataset, DatasetDict
import pandas as pd
# CSV files for each split in this repository
data_files = {"train": "data_nli_train_df.csv",
"validation": "data_nli_val_df.csv",
"test": "data_nli_test_df.csv"}
dataset = load_dataset("muhammadravi251001/idk-mrc-nli", data_files=data_files)
selected_columns = ["premise", "hypothesis", "label"]
# selected_columns = dataset.column_names['train'] # Uncomment this line to retrieve all of the columns
# Convert each split to pandas and keep only the selected columns
df_train = pd.DataFrame(dataset["train"])
df_train = df_train[selected_columns]
df_val = pd.DataFrame(dataset["validation"])
df_val = df_val[selected_columns]
df_test = pd.DataFrame(dataset["test"])
df_test = df_test[selected_columns]
# Rebuild HF Datasets from the filtered dataframes and assemble a DatasetDict
train_dataset = Dataset.from_dict(df_train)
validation_dataset = Dataset.from_dict(df_val)
test_dataset = Dataset.from_dict(df_test)
dataset = DatasetDict({"train": train_dataset, "validation": validation_dataset, "test": test_dataset})
dataset
```
This dataset is a modified version of the IDK-MRC dataset, converted from its original question-answering (QA) format into an NLI format (IDK-MRC-NLI). You can find the original IDK-MRC dataset here: https://huggingface.co/datasets/rifkiaputri/idk-mrc.
### Citation Information
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
```
|
muhammadravi251001/idk-mrc-nli
|
[
"license:openrail",
"region:us"
] |
2023-08-17T12:39:01+00:00
|
{"license": "openrail"}
|
2023-08-20T01:00:59+00:00
|
[] |
[] |
TAGS
#license-openrail #region-us
|
You can load this dataset as follows (if you only need the premise, hypothesis, and label columns):
This dataset is a modified version of the IDK-MRC dataset, converted from its original question-answering (QA) format into an NLI format (IDK-MRC-NLI). You can find the original IDK-MRC dataset at this link: URL
|
[] |
[
"TAGS\n#license-openrail #region-us \n"
] |
[
12
] |
[
"passage: TAGS\n#license-openrail #region-us \n"
] |
e64f0cc91918f5544a0daeb071ae02bebc64f72f
|
# Dataset Card for "distilled-ccmatrix-en-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
thesistranslation/distilled-ccmatrix-en-de
|
[
"language:en",
"language:de",
"region:us"
] |
2023-08-17T12:44:37+00:00
|
{"language": ["en", "de"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "de"]}}}], "splits": [{"name": "train", "num_bytes": 7294036621, "num_examples": 30000000}], "download_size": 5135500985, "dataset_size": 7294036621}}
|
2023-10-03T11:20:34+00:00
|
[] |
[
"en",
"de"
] |
TAGS
#language-English #language-German #region-us
|
# Dataset Card for "distilled-ccmatrix-en-de"
More Information needed
|
[
"# Dataset Card for \"distilled-ccmatrix-en-de\"\n\nMore Information needed"
] |
[
"TAGS\n#language-English #language-German #region-us \n",
"# Dataset Card for \"distilled-ccmatrix-en-de\"\n\nMore Information needed"
] |
[
14,
21
] |
[
"passage: TAGS\n#language-English #language-German #region-us \n# Dataset Card for \"distilled-ccmatrix-en-de\"\n\nMore Information needed"
] |
49b7261e6ddde047b980ec7485aeb2cded69703e
|
# Dataset Card for "find_word_train_10000_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/find_word_train_10000_eval_100
|
[
"region:us"
] |
2023-08-17T12:55:14+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1441035, "num_examples": 20100}, {"name": "eval_find_word", "num_bytes": 5323, "num_examples": 100}], "download_size": 0, "dataset_size": 1446358}}
|
2023-08-17T12:55:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "find_word_train_10000_eval_100"
More Information needed
|
[
"# Dataset Card for \"find_word_train_10000_eval_100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"find_word_train_10000_eval_100\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"find_word_train_10000_eval_100\"\n\nMore Information needed"
] |
1ae3558e1b0d4c4baf424d3faa0eddf0289d0e6f
|
# FACETS Datasets
Datasets described in the paper:
> Toward a Realistic Benchmark for Out-of-Distribution - Pietro Recalcati, Fabio Garcea, Luca Piano, Fabrizio Lamberti, Lia Morra
## Baseline
| | ID classes | OOD classes | ID samples | OOD samples |
|--------------------------|------------|-------------|------------|-------------|
| **Places365-Standard (val)** | 365 | 0 | 18,250 | 0 |
| **SVHN (train)** | 0 | 1 | 0 | 16,701 |
| **SVHN (test)** | 0 | 1 | 0 | 1,549 |
| **Total** | **365** | **2** | **18,250** | **18,250** |
## Inter-Dataset
| | ID classes | OOD classes | ID samples | OOD samples |
|--------------------------|--------------|--------------|-----------------|-----------------|
| Places365-Standard (val) | 365 | 0 | 18,250 | 0 |
| ImageNet (train) | 0 | 968 | 0 | 18,108 |
| **Total** | **365** | **968** | **18,250** | **18,108** |
## WordNet ImageNet
| WordNet ImageNet T40 (val/test) | ID classes | OOD classes | ID samples | OOD samples |
|---------------------------------|------------|-------------|------------|-------------|
| Places365-Standard (val)        | 365        | 0           | 18,250     | 0           |
| ImageNet (train)                | 56         | 944         | 2,800      | 21,332      |
| **Total**                       | **421**    | **944**     | **21,050** | **21,332**  |

| WordNet ImageNet T45 (val/test) | ID classes | OOD classes | ID samples | OOD samples |
|---------------------------------|------------|-------------|------------|-------------|
| Places365-Standard (val)        | 365        | 0           | 18,250     | 0           |
| ImageNet (train)                | 90         | 910         | 4,500      | 22,750      |
| **Total**                       | **455**    | **910**     | **22,750** | **22,750**  |

| WordNet ImageNet T50 (val/test) | ID classes | OOD classes | ID samples | OOD samples |
|---------------------------------|------------|-------------|------------|-------------|
| Places365-Standard (val)        | 365        | 0           | 18,250     | 0           |
| ImageNet (train)                | 140        | 860         | 7,000      | 25,560      |
| **Total**                       | **505**    | **860**     | **25,250** | **25,560**  |
## FACETS OOD Detection
| FACETS OOD Detection T1  | ID classes | OOD classes | ID samples | OOD samples |
|--------------------------|------------|-------------|------------|-------------|
| Places365-Standard (val) | 365        | 0           | 18,250     | 0           |
| SUN397                   | 319        | 78          | 46,851     | 7,530       |
| ImageNet (train)         | 0          | 644         | 0          | 50,232      |
| **Total**                | **1,040**  | **1,366**   | **74,001** | **73,862**  |

| FACETS OOD Detection T2  | ID classes | OOD classes | ID samples | OOD samples |
|--------------------------|------------|-------------|------------|-------------|
| Places365-Standard (val) | 365        | 0           | 18,250     | 0           |
| SUN397                   | 351        | 46          | 49,827     | 4,554       |
| ImageNet (train)         | 0          | 644         | 0          | 56,606      |
| **Total**                | **1,072**  | **1,334**   | **76,977** | **77,260**  |
|
GrainsPolito/FACETS_Datasets
|
[
"license:mit",
"region:us"
] |
2023-08-17T13:06:47+00:00
|
{"license": "mit"}
|
2023-08-23T12:08:19+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
FACETS Datasets
===============
Datasets described in the paper:
>
> Toward a Realistic Benchmark for Out-of-Distribution - Pietro Recalcati, Fabio Garcea, Luca Piano, Fabrizio Lamberti, Lia Morra
>
>
>
Baseline
--------
Inter-Dataset
-------------
WordNet ImageNet
----------------
FACETS OOD Detection
--------------------
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
8ad52b9f7ce996cf429b55575c58dbd15592e91c
|
# MultiPL-T fine-tuning sets
This dataset contains the MultiPL-T fine-tuning sets described in the paper "Knowledge Transfer from High-Resource to Low-Resource
Programming Languages for Code LLMs": [Arxiv](https://arxiv.org/abs/2308.09895).
In short, it contains fine-tuning datasets for Julia, Lua, Racket, OCaml, and R.
**If you utilize our dataset we kindly request that you cite our paper**
## MultiPL-T tuned models
StarCoderBase-1b: https://huggingface.co/nuprl/MultiPLCoder-1b
StarCoderBase-15b: https://huggingface.co/nuprl/MultiPLCoder-15b
CodeLlama-34b: https://huggingface.co/nuprl/MultiPLCoder-34b
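
For reference, a minimal sketch of loading one of the language splits with the `datasets` library (split names follow the configuration of this repository):

```python
from datasets import load_dataset

# Load the Lua fine-tuning split; other splits are "racket", "ocaml", "julia", and "r".
lua_ds = load_dataset("nuprl/MultiPL-T", split="lua")

# Each record exposes a single "content" field with the fine-tuning text.
print(lua_ds[0]["content"][:200])
```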
|
nuprl/MultiPL-T
|
[
"license:bigcode-openrail-m",
"arxiv:2308.09895",
"region:us"
] |
2023-08-17T13:17:33+00:00
|
{"license": "bigcode-openrail-m", "dataset_info": {"features": [{"name": "content", "dtype": "string"}], "splits": [{"name": "lua", "num_bytes": 25917278, "num_examples": 48194}, {"name": "racket", "num_bytes": 14482516, "num_examples": 40510}, {"name": "ocaml", "num_bytes": 19240207, "num_examples": 43401}, {"name": "julia", "num_bytes": 18723475, "num_examples": 45000}, {"name": "r", "num_bytes": 13961595, "num_examples": 37592}], "download_size": 48334705, "dataset_size": 111048546}, "configs": [{"config_name": "default", "data_files": [{"split": "lua", "path": "data/lua-*"}, {"split": "racket", "path": "data/racket-*"}, {"split": "ocaml", "path": "data/ocaml-*"}, {"split": "julia", "path": "data/julia-*"}, {"split": "r", "path": "data/r-*"}]}]}
|
2024-02-11T03:09:19+00:00
|
[
"2308.09895"
] |
[] |
TAGS
#license-bigcode-openrail-m #arxiv-2308.09895 #region-us
|
# MultiPL-T fine-tuning sets
This dataset contains the MultiPL-T fine-tuning sets described in the paper "Knowledge Transfer from High-Resource to Low-Resource
Programming Languages for Code LLMs": Arxiv.
In short, it contains fine-tuning datasets for Julia, Lua, Racket, OCaml, and R.
If you utilize our dataset we kindly request that you cite our paper
## MultiPL-T tuned models
StarCoderBase-1b: URL
StarCoderBase-15b: URL
CodeLlama-34b: URL
|
[
"# MultiPL-T fine-tuning sets\n\nThis dataset contains the MultiPL-T fine-tuning sets described in the paper \"Knowledge Transfer from High-Resource to Low-Resource\nProgramming Languages for Code LLMs\": Arxiv.\n\nIn short, it contains fine-tuning datasets for Julia, Lua, Racket, OCaml, and R.\n\nIf you utilize our dataset we kindly request that you cite our paper",
"## MultiPL-T tuned models\n\nStarCoderBase-1b: URL\nStarCoderBase-15b: URL\nCodeLlama-34b: URL"
] |
[
"TAGS\n#license-bigcode-openrail-m #arxiv-2308.09895 #region-us \n",
"# MultiPL-T fine-tuning sets\n\nThis dataset contains the MultiPL-T fine-tuning sets described in the paper \"Knowledge Transfer from High-Resource to Low-Resource\nProgramming Languages for Code LLMs\": Arxiv.\n\nIn short, it contains fine-tuning datasets for Julia, Lua, Racket, OCaml, and R.\n\nIf you utilize our dataset we kindly request that you cite our paper",
"## MultiPL-T tuned models\n\nStarCoderBase-1b: URL\nStarCoderBase-15b: URL\nCodeLlama-34b: URL"
] |
[
25,
105,
33
] |
[
"passage: TAGS\n#license-bigcode-openrail-m #arxiv-2308.09895 #region-us \n# MultiPL-T fine-tuning sets\n\nThis dataset contains the MultiPL-T fine-tuning sets described in the paper \"Knowledge Transfer from High-Resource to Low-Resource\nProgramming Languages for Code LLMs\": Arxiv.\n\nIn short, it contains fine-tuning datasets for Julia, Lua, Racket, OCaml, and R.\n\nIf you utilize our dataset we kindly request that you cite our paper## MultiPL-T tuned models\n\nStarCoderBase-1b: URL\nStarCoderBase-15b: URL\nCodeLlama-34b: URL"
] |
4848105d097b71f576ad8152abe37297d2cf30d2
|
# VietBibleVox Dataset
The VietBibleVox Dataset is based on the data extracted from [open.bible](https://open.bible/) specifically for the Vietnamese language. As the original data is provided under the `cc-by-sa-4.0` license, this derived dataset is also licensed under `cc-by-sa-4.0`.
The dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.
- The verses are the original texts and *may not* be directly usable for training text-to-speech models.
- The clips are in MP3 format with a sample rate of 48k.
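
As a rough starting point, a minimal loading sketch with the `datasets` library (this assumes the repository loads directly via `load_dataset` and exposes an `audio` column; adjust the column names to the actual schema):

```python
from datasets import load_dataset, Audio

# Minimal sketch, assuming a standard audio dataset layout with an "audio" column.
ds = load_dataset("ntt123/VietBibleVox", split="train")

# Clips are 48 kHz MP3s; resample on the fly if your TTS pipeline expects a lower rate.
ds = ds.cast_column("audio", Audio(sampling_rate=22050))

print(ds[0])
```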
|
ntt123/VietBibleVox
|
[
"task_categories:text-to-speech",
"size_categories:10K<n<100K",
"language:vi",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-08-17T13:27:50+00:00
|
{"language": ["vi"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-speech"], "pretty_name": "viet-bible-vox"}
|
2023-08-17T14:31:15+00:00
|
[] |
[
"vi"
] |
TAGS
#task_categories-text-to-speech #size_categories-10K<n<100K #language-Vietnamese #license-cc-by-sa-4.0 #region-us
|
# VietBibleVox Dataset
The VietBibleVox Dataset is based on the data extracted from URL specifically for the Vietnamese language. As the original data is provided under the 'cc-by-sa-4.0' license, this derived dataset is also licensed under 'cc-by-sa-4.0'.
The dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.
- The verses are the original texts and *may not* be directly usable for training text-to-speech models.
- The clips are in MP3 format with a sample rate of 48k.
|
[
"# VietBibleVox Dataset\n\nThe VietBibleVox Dataset is based on the data extracted from URL specifically for the Vietnamese language. As the original data is provided under the 'cc-by-sa-4.0' license, this derived dataset is also licensed under 'cc-by-sa-4.0'.\n\nThe dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.\n- The verses are the original texts and *may not* be directly usable for training text-to-speech models.\n- The clips are in MP3 format with a sample rate of 48k."
] |
[
"TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Vietnamese #license-cc-by-sa-4.0 #region-us \n",
"# VietBibleVox Dataset\n\nThe VietBibleVox Dataset is based on the data extracted from URL specifically for the Vietnamese language. As the original data is provided under the 'cc-by-sa-4.0' license, this derived dataset is also licensed under 'cc-by-sa-4.0'.\n\nThe dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.\n- The verses are the original texts and *may not* be directly usable for training text-to-speech models.\n- The clips are in MP3 format with a sample rate of 48k."
] |
[
49,
147
] |
[
"passage: TAGS\n#task_categories-text-to-speech #size_categories-10K<n<100K #language-Vietnamese #license-cc-by-sa-4.0 #region-us \n# VietBibleVox Dataset\n\nThe VietBibleVox Dataset is based on the data extracted from URL specifically for the Vietnamese language. As the original data is provided under the 'cc-by-sa-4.0' license, this derived dataset is also licensed under 'cc-by-sa-4.0'.\n\nThe dataset comprises 29,185 pairs of (verse, audio clip), with each verse from the Bible read in Vietnamese by a male voice.\n- The verses are the original texts and *may not* be directly usable for training text-to-speech models.\n- The clips are in MP3 format with a sample rate of 48k."
] |
e2caf27d4c7631bf48704da712782d9d95aa5912
|
# Dataset Card for "Cheguanaco-unchained-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pssubitha/Cheguanaco-unchained-format
|
[
"region:us"
] |
2023-08-17T13:30:30+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1775294, "num_examples": 1000}], "download_size": 983904, "dataset_size": 1775294}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T13:30:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Cheguanaco-unchained-format"
More Information needed
|
[
"# Dataset Card for \"Cheguanaco-unchained-format\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Cheguanaco-unchained-format\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Cheguanaco-unchained-format\"\n\nMore Information needed"
] |
cf641990852a31680fa98f7d46109123547e592b
|
# Dataset Card for "wikisql-processed"
Based out of [wikisql](https://huggingface.co/datasets/wikisql)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
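
A minimal loading sketch with the `datasets` library (per the dataset configuration, each record carries a single `messages` string):

```python
from datasets import load_dataset

# Minimal sketch: inspect one processed WikiSQL record.
ds = load_dataset("sartmis1/wikisql-processed", split="train")
print(ds[0]["messages"][:200])
```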
|
sartmis1/wikisql-processed
|
[
"region:us"
] |
2023-08-17T13:31:27+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "messages", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10327196, "num_examples": 56355}, {"name": "test", "num_bytes": 2917591, "num_examples": 15878}, {"name": "validation", "num_bytes": 2917591, "num_examples": 15878}], "download_size": 0, "dataset_size": 16162378}}
|
2023-08-17T13:45:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wikisql-processed"
Based out of wikisql
More Information needed
|
[
"# Dataset Card for \"wikisql-processed\"\n\nBased out of wikisql\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wikisql-processed\"\n\nBased out of wikisql\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wikisql-processed\"\n\nBased out of wikisql\n\nMore Information needed"
] |
030418e23d6ab23a7d7f94422456a6a24c37b978
|
# Dataset Card for "sales-force-formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pssubitha/sales-force-formatted
|
[
"region:us"
] |
2023-08-17T13:36:47+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1352454, "num_examples": 6500}], "download_size": 518424, "dataset_size": 1352454}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T13:36:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sales-force-formatted"
More Information needed
|
[
"# Dataset Card for \"sales-force-formatted\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sales-force-formatted\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sales-force-formatted\"\n\nMore Information needed"
] |
11e53eeda43923e57e36d66b1961d92326dd9966
|
# nthakur/msmarco-passage-sampled-10k
This is a set of 10k randomly sampled training pairs from the Tevatron [msmarco-passage](https://huggingface.co/datasets/Tevatron/msmarco-passage) dataset, intended for debugging and for training models on a smaller subset of the MSMARCO training data.
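
A minimal loading sketch with the `datasets` library (the schema is assumed to mirror the upstream Tevatron/msmarco-passage format with `query`, `positive_passages`, and `negative_passages` fields; adjust if the actual schema differs):

```python
from datasets import load_dataset

# Minimal sketch: load the 10k sampled training pairs and peek at one example.
ds = load_dataset("nthakur/msmarco-passage-sampled-10k", split="train")

example = ds[0]
print(example["query"])
print(len(example["positive_passages"]), len(example["negative_passages"]))
```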
## Citing & Authors
Have a look at [Tevatron](https://github.com/texttron/tevatron).
|
nthakur/msmarco-passage-sampled-10k
|
[
"task_categories:text-retrieval",
"source_datasets:Tevatron/msmarco-passage",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-08-17T13:37:04+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "source_datasets": ["Tevatron/msmarco-passage"], "task_categories": ["text-retrieval"]}
|
2023-08-17T13:46:16+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us
|
# nthakur/msmarco-passage-sampled-10k
This is a set of 10k randomly sampled training pairs from the Tevatron msmarco-passage dataset, intended for debugging and for training models on a smaller subset of the MSMARCO training data.
## Citing & Authors
Have a look at Tevatron.
|
[
"# nthakur/msmarco-passage-sampled-10k\n\nThis is a 10k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.",
"## Citing & Authors\nHave a look at Tevatron."
] |
[
"TAGS\n#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us \n",
"# nthakur/msmarco-passage-sampled-10k\n\nThis is a 10k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.",
"## Citing & Authors\nHave a look at Tevatron."
] |
[
50,
59,
14
] |
[
"passage: TAGS\n#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us \n# nthakur/msmarco-passage-sampled-10k\n\nThis is a 10k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.## Citing & Authors\nHave a look at Tevatron."
] |
ae4ae9dc861a18f8df50aab3a8a4315850e23351
|
# nthakur/msmarco-passage-sampled-100k
This is a set of 100k randomly sampled training pairs from the Tevatron [msmarco-passage](https://huggingface.co/datasets/Tevatron/msmarco-passage) dataset, intended for debugging and for training models on a smaller subset of the MSMARCO training data.
## Citing & Authors
Have a look at [Tevatron](https://github.com/texttron/tevatron).
|
nthakur/msmarco-passage-sampled-100k
|
[
"task_categories:text-retrieval",
"source_datasets:Tevatron/msmarco-passage",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-08-17T13:47:30+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "source_datasets": ["Tevatron/msmarco-passage"], "task_categories": ["text-retrieval"]}
|
2023-08-17T13:47:35+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us
|
# nthakur/msmarco-passage-sampled-100k
This is a set of 100k randomly sampled training pairs from the Tevatron msmarco-passage dataset, intended for debugging and for training models on a smaller subset of the MSMARCO training data.
## Citing & Authors
Have a look at Tevatron.
|
[
"# nthakur/msmarco-passage-sampled-100k\n\nThis is a 100k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.",
"## Citing & Authors\nHave a look at Tevatron."
] |
[
"TAGS\n#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us \n",
"# nthakur/msmarco-passage-sampled-100k\n\nThis is a 100k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.",
"## Citing & Authors\nHave a look at Tevatron."
] |
[
50,
59,
14
] |
[
"passage: TAGS\n#task_categories-text-retrieval #source_datasets-Tevatron/msmarco-passage #language-English #license-cc-by-sa-3.0 #region-us \n# nthakur/msmarco-passage-sampled-100k\n\nThis is a 100k randomly sampled training pairs of the Tevatron msmarco-passage for debugging and training models on a smaller subset of MSMARCO training data.## Citing & Authors\nHave a look at Tevatron."
] |
a87d97ca4d7995b9ad0f1a2e1992ca6a0aa4c774
|
# Dataset of kawashiro_nitori/河城にとり/카와시로니토리 (Touhou)
This is the dataset of kawashiro_nitori/河城にとり/카와시로니토리 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, two_side_up, hair_ornament, blue_eyes, hat, short_hair, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 529.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 359.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1068 | 683.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 492.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1068 | 874.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kawashiro_nitori_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kawashiro_nitori_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, backpack, hair_bobbles, key, open_mouth, smile, solo |
| 1 | 6 |  |  |  |  |  | 1girl, backpack, hair_bobbles, key, open_mouth, smile, solo, rubber_boots, skirt_set, water |
| 2 | 14 |  |  |  |  |  | 1girl, hair_bobbles, key, solo, backpack, underwater, air_bubble, skirt, smile, boots, open_mouth |
| 3 | 15 |  |  |  |  |  | 1girl, backpack, bangs, green_headwear, hair_bobbles, solo, blue_shirt, flat_cap, key, blue_footwear, blue_skirt, looking_at_viewer, pocket, long_sleeves, rubber_boots, full_body, green_bag, skirt_set, smile, blush, closed_mouth, frilled_shirt_collar, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | backpack | hair_bobbles | key | open_mouth | smile | solo | rubber_boots | skirt_set | water | underwater | air_bubble | skirt | boots | bangs | green_headwear | blue_shirt | flat_cap | blue_footwear | blue_skirt | looking_at_viewer | pocket | long_sleeves | full_body | green_bag | blush | closed_mouth | frilled_shirt_collar |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:---------------|:------|:-------------|:--------|:-------|:---------------|:------------|:--------|:-------------|:-------------|:--------|:--------|:--------|:-----------------|:-------------|:-----------|:----------------|:-------------|:--------------------|:---------|:---------------|:------------|:------------|:--------|:---------------|:-----------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kawashiro_nitori_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T13:47:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T12:25:28+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kawashiro\_nitori/河城にとり/카와시로니토리 (Touhou)
===================================================
This is the dataset of kawashiro\_nitori/河城にとり/카와시로니토리 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blue\_hair, two\_side\_up, hair\_ornament, blue\_eyes, hat, short\_hair, twintails', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
2abfec7f5cd03476575d1c9f51dc7e395311f55f
|
# Dataset of cirno/ちるの/치르노 (Touhou)
This is the dataset of cirno/ちるの/치르노 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, short_hair, bow, hair_bow, wings, blue_eyes, ice_wings, blue_bow, ribbon, bangs, hair_between_eyes, red_ribbon, neck_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 740.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 397.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1223 | 859.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 642.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1223 | 1.22 GiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/cirno_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, :d, blue_dress, blush, cowboy_shot, ice, looking_at_viewer, open_mouth, puffy_short_sleeves, simple_background, solo, white_background, white_shirt, breasts, collared_shirt, standing |
| 1 | 6 |  |  |  |  |  | 1girl, blue_dress, closed_mouth, collared_shirt, ice, looking_at_viewer, puffy_short_sleeves, simple_background, solo, white_background, white_shirt, blush, pinafore_dress, cowboy_shot, smile |
| 2 | 8 |  |  |  |  |  | 1girl, blue_dress, ice, looking_at_viewer, puffy_short_sleeves, solo, white_background, simple_background, shirt, upper_body, smile |
| 3 | 7 |  |  |  |  |  | 1girl, blue_dress, full_body, ice, looking_at_viewer, open_mouth, solo, white_socks, puffy_short_sleeves, white_shirt, :d, blush, black_footwear, mary_janes, pinafore_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | blue_dress | blush | cowboy_shot | ice | looking_at_viewer | open_mouth | puffy_short_sleeves | simple_background | solo | white_background | white_shirt | breasts | collared_shirt | standing | closed_mouth | pinafore_dress | smile | shirt | upper_body | full_body | white_socks | black_footwear | mary_janes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:-------------|:--------|:--------------|:------|:--------------------|:-------------|:----------------------|:--------------------|:-------|:-------------------|:--------------|:----------|:-----------------|:-----------|:---------------|:-----------------|:--------|:--------|:-------------|:------------|:--------------|:-----------------|:-------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | X | X | X | | X | X | X | X | X | | X | | X | X | X | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | | | | | X | X | X | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | X | | X | | | | | X | | | | X | X | X | X |
|
CyberHarem/cirno_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T13:49:28+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:10:29+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of cirno/ちるの/치르노 (Touhou)
=================================
This is the dataset of cirno/ちるの/치르노 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blue\_hair, short\_hair, bow, hair\_bow, wings, blue\_eyes, ice\_wings, blue\_bow, ribbon, bangs, hair\_between\_eyes, red\_ribbon, neck\_ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
8ad45fc56c4ea0705d13ef1c774cbf62f751555a
|
Crouse, M., Abdelaziz, I., Basu, K., Dan, S., Kumaravel, S., Fokoue, A., Kapanipathi, P., & Lastras, L. (2023). Formally Specifying the High-Level Behavior of LLM-Based Agents. arXiv. https://arxiv.org/abs/2310.08535
|
inuwamobarak/random-files
|
[
"license:openrail",
"arxiv:2310.08535",
"region:us"
] |
2023-08-17T14:03:18+00:00
|
{"license": "openrail"}
|
2023-10-19T15:08:45+00:00
|
[
"2310.08535"
] |
[] |
TAGS
#license-openrail #arxiv-2310.08535 #region-us
|
Crouse, M., Abdelaziz, I., Basu, K., Dan, S., Kumaravel, S., Fokoue, A., Kapanipathi, P., & Lastras, L. (2023). Formally Specifying the High-Level Behavior of LLM-Based Agents. arXiv. https://arxiv.org/abs/2310.08535
|
[] |
[
"TAGS\n#license-openrail #arxiv-2310.08535 #region-us \n"
] |
[
22
] |
[
"passage: TAGS\n#license-openrail #arxiv-2310.08535 #region-us \n"
] |
dba6425906832385c0ff882c070c5fc4aeb0c420
|
# Dataset Card for "sample-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Prikshit7766/sample-ner
|
[
"region:us"
] |
2023-08-17T14:07:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 748135, "num_examples": 3250}], "download_size": 213070, "dataset_size": 748135}}
|
2023-08-29T14:47:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sample-ner"
More Information needed
|
[
"# Dataset Card for \"sample-ner\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sample-ner\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sample-ner\"\n\nMore Information needed"
] |
c25b74872aa2d40b3235af074d565a81546e6138
|
# Dataset Card for "srbd1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Lancelot53/srbd1
|
[
"region:us"
] |
2023-08-17T14:10:09+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "xml", "dtype": "string"}, {"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23236666, "num_examples": 723}], "download_size": 2835772, "dataset_size": 23236666}}
|
2023-08-17T14:14:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "srbd1"
More Information needed
|
[
"# Dataset Card for \"srbd1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"srbd1\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"srbd1\"\n\nMore Information needed"
] |
70043b43eb4cd69bcb4bf0321beb9420f8f620c0
|
# Dataset Card for "ASCOR_audio2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ellenlnt/ASCOR_audio2
|
[
"region:us"
] |
2023-08-17T14:19:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "file_name", "dtype": "string"}, {"name": "ID", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1482575793.0, "num_examples": 250}], "download_size": 1465127520, "dataset_size": 1482575793.0}}
|
2023-08-17T14:31:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ASCOR_audio2"
More Information needed
|
[
"# Dataset Card for \"ASCOR_audio2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ASCOR_audio2\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ASCOR_audio2\"\n\nMore Information needed"
] |
8b8ea39aa33bf87fce5f5988d8f1827bfe909f07
|
# Dataset of hijiri_byakuren/聖白蓮/히지리뱌쿠렌 (Touhou)
This is the dataset of hijiri_byakuren/聖白蓮/히지리뱌쿠렌 (Touhou), containing 405 images and their tags.
The core tags of this character are `gradient_hair, long_hair, multicolored_hair, purple_hair, yellow_eyes, blonde_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 405 | 491.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hijiri_byakuren_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 405 | 338.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hijiri_byakuren_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 841 | 592.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hijiri_byakuren_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 405 | 456.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hijiri_byakuren_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 841 | 744.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hijiri_byakuren_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hijiri_byakuren_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, dress, solo, sorcerer's_sutra_scroll |
| 1 | 5 |  |  |  |  |  | 1girl, cape, dress, solo, sorcerer's_sutra_scroll |
| 2 | 10 |  |  |  |  |  | 1girl, cape, dress, smile, solo, sorcerer's_sutra_scroll |
| 3 | 7 |  |  |  |  |  | 1girl, dress, open_mouth, smile, solo, sorcerer's_sutra_scroll, cape, long_sleeves |
| 4 | 7 |  |  |  |  |  | 1girl, solo, sorcerer's_sutra_scroll, layered_dress, looking_at_viewer, smile, cape, juliet_sleeves, open_mouth |
| 5 | 6 |  |  |  |  |  | 1girl, black_dress, juliet_sleeves, layered_dress, solo, brown_hair, looking_at_viewer, smile, sorcerer's_sutra_scroll, white_dress, cross-laced_clothes, very_long_hair, wavy_hair |
| 6 | 7 |  |  |  |  |  | 1girl, juliet_sleeves, layered_dress, solo, sorcerer's_sutra_scroll, white_dress, bangs, black_dress, black_footwear, brown_hair, closed_mouth, cross-laced_clothes, full_body, holding, looking_at_viewer, simple_background, smile, white_background, boots, cape, brown_eyes, frills, wavy_hair |
| 7 | 5 |  |  |  |  |  | 1girl, alternate_costume, kimono, obi, solo, alternate_hairstyle, smile, bag, flower, hair_ornament, looking_at_viewer, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | solo | sorcerer's_sutra_scroll | cape | smile | open_mouth | long_sleeves | layered_dress | looking_at_viewer | juliet_sleeves | black_dress | brown_hair | white_dress | cross-laced_clothes | very_long_hair | wavy_hair | bangs | black_footwear | closed_mouth | full_body | holding | simple_background | white_background | boots | brown_eyes | frills | alternate_costume | kimono | obi | alternate_hairstyle | bag | flower | hair_ornament |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------------------------|:-------|:--------|:-------------|:---------------|:----------------|:--------------------|:-----------------|:--------------|:-------------|:--------------|:----------------------|:-----------------|:------------|:--------|:-----------------|:---------------|:------------|:----------|:--------------------|:-------------------|:--------|:-------------|:---------|:--------------------|:---------|:------|:----------------------|:------|:---------|:----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | X | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | X | X | X | X | | | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | X | | | X | X | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/hijiri_byakuren_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T14:29:33+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T13:58:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hijiri\_byakuren/聖白蓮/히지리뱌쿠렌 (Touhou)
===============================================
This is the dataset of hijiri\_byakuren/聖白蓮/히지리뱌쿠렌 (Touhou), containing 405 images and their tags.
The core tags of this character are 'gradient\_hair, long\_hair, multicolored\_hair, purple\_hair, yellow\_eyes, blonde\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
c40ec6218060ab26fad7b72dd84bdf8ce0f9947d
|
# Dataset of izayoi_sakuya/いざよいさくや/十六夜咲夜/이자요이사쿠야 (Touhou)
This is the dataset of izayoi_sakuya/いざよいさくや/十六夜咲夜/이자요이사쿠야 (Touhou), containing 500 images and their tags.
The core tags of this character are `braid, twin_braids, maid_headdress, short_hair, grey_hair, bow, hair_bow, blue_eyes, bangs, breasts, ribbon, green_bow, hair_between_eyes, white_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 809.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izayoi_sakuya_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 465.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izayoi_sakuya_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1213 | 941.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izayoi_sakuya_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 714.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/izayoi_sakuya_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1213 | 1.27 GiB | [Download](https://huggingface.co/datasets/CyberHarem/izayoi_sakuya_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/izayoi_sakuya_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
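
If you only need one of the packaged variants listed in the table above rather than the raw archive, the same `hf_hub_download` call can fetch it directly. The sketch below grabs the 800px IMG+TXT package; the local directory name is an arbitrary choice.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# fetch the IMG+TXT package whose shorter side is capped at 800 pixels
zip_file = hf_hub_download(
    repo_id='CyberHarem/izayoi_sakuya_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# unpack it; each image is paired with a .txt file holding its tags
extract_dir = 'izayoi_sakuya_800'  # arbitrary local path
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(extract_dir)
```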
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, looking_at_viewer, maid, solo, waist_apron, blue_dress, knife, puffy_short_sleeves, holding, red_eyes, wrist_cuffs |
| 1 | 24 |  |  |  |  |  | 1girl, blue_dress, looking_at_viewer, maid_apron, puffy_short_sleeves, solo, waist_apron, white_apron, weapon, holding_knife, white_shirt, frilled_apron, red_eyes, closed_mouth, medium_breasts, wrist_cuffs, between_fingers, cowboy_shot, standing |
| 2 | 5 |  |  |  |  |  | 1girl, blue_dress, holding_knife, looking_at_viewer, puffy_short_sleeves, simple_background, solo, waist_apron, white_background, between_fingers, white_apron, black_pantyhose, frills, maid_apron, medium_breasts, closed_mouth, large_breasts, shoes, white_shirt, wrist_cuffs |
| 3 | 8 |  |  |  |  |  | 1girl, knife, maid, solo, apron, red_eyes, fingerless_gloves |
| 4 | 6 |  |  |  |  |  | 1girl, solo, lingerie, looking_at_viewer, navel, on_back, open_shirt, maid, thighhighs, white_bra, white_panties |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | maid | solo | waist_apron | blue_dress | knife | puffy_short_sleeves | holding | red_eyes | wrist_cuffs | maid_apron | white_apron | weapon | holding_knife | white_shirt | frilled_apron | closed_mouth | medium_breasts | between_fingers | cowboy_shot | standing | simple_background | white_background | black_pantyhose | frills | large_breasts | shoes | apron | fingerless_gloves | lingerie | navel | on_back | open_shirt | thighhighs | white_bra | white_panties |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-------|:--------------|:-------------|:--------|:----------------------|:----------|:-----------|:--------------|:-------------|:--------------|:---------|:----------------|:--------------|:----------------|:---------------|:-----------------|:------------------|:--------------|:-----------|:--------------------|:-------------------|:------------------|:---------|:----------------|:--------|:--------|:--------------------|:-----------|:--------|:----------|:-------------|:-------------|:------------|:----------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 24 |  |  |  |  |  | X | X | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | X | X | X | | X | | | X | X | X | | X | X | | X | X | X | | | X | X | X | X | X | X | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | X | X | | | X | | | X | | | | | | | | | | | | | | | | | | | X | X | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/izayoi_sakuya_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T14:39:39+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:31:21+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of izayoi\_sakuya/いざよいさくや/十六夜咲夜/이자요이사쿠야 (Touhou)
========================================================
This is the dataset of izayoi\_sakuya/いざよいさくや/十六夜咲夜/이자요이사쿠야 (Touhou), containing 500 images and their tags.
The core tags of this character are 'braid, twin\_braids, maid\_headdress, short\_hair, grey\_hair, bow, hair\_bow, blue\_eyes, bangs, breasts, ribbon, green\_bow, hair\_between\_eyes, white\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
3793c2940fe3d349dfff00fdbcf83854bf20d860
|
# Dataset Card for "zaa11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/zaa11
|
[
"region:us"
] |
2023-08-17T14:48:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:42:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "zaa11"
More Information needed
|
[
"# Dataset Card for \"zaa11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"zaa11\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"zaa11\"\n\nMore Information needed"
] |
e13d1b899d4a72f8a56db4575faef7bc2a7ddc40
|
# Dataset Card for "question_generation_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reinforz/question_generation_data
|
[
"region:us"
] |
2023-08-17T14:55:04+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "subTopic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16940946, "num_examples": 6978}], "download_size": 3800898, "dataset_size": 16940946}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T14:55:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "question_generation_data"
More Information needed
|
[
"# Dataset Card for \"question_generation_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"question_generation_data\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"question_generation_data\"\n\nMore Information needed"
] |
583b181ea2666eb28d10909784690009f6c9da9d
|
# Dataset Card for "TUT-urban-acoustic-scenes-2018-development"
## Dataset Description
- **Homepage: https://zenodo.org/record/1228142**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)**
### Dataset Summary
The TUT Urban Acoustic Scenes 2018 development dataset consists of 10-second audio segments from 10 acoustic scenes:
Airport - airport
Indoor shopping mall - shopping_mall
Metro station - metro_station
Pedestrian street - street_pedestrian
Public square - public_square
Street with medium level of traffic - street_traffic
Travelling by a tram - tram
Travelling by a bus - bus
Travelling by an underground metro - metro
Urban park - park
Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains 24 hours of audio in total.
The dataset was collected in Finland by Tampere University of Technology between 02/2018 and 03/2018.
The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name).
- The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard
- which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
## Dataset Structure
### Data Instances
```
{
'scene_label': 'airport',
'identifier': 'barcelona-0',
'source_label': 'a',
'audio': {'path': '/data/airport-barcelona-0-0-a.wav',
'array': array([-1.91628933e-04, -1.18494034e-04, -1.87635422e-04, ...,
4.90546227e-05, -4.98890877e-05, -4.66108322e-05]),
'sampling_rate': 48000}
}
```
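
For quick inspection without handling the Zenodo archives yourself, the hosted copy can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; note that the single `train` split holds 8640 segments and the download is roughly 24 GB.

```python
from datasets import load_dataset

# load the full development set (8640 ten-second segments, ~24 GB download)
ds = load_dataset("wetdog/TUT-urban-acoustic-scenes-2018-development", split="train")

example = ds[0]
print(example["scene_label"], example["identifier"], example["source_label"])
print(example["audio"]["sampling_rate"], len(example["audio"]["array"]))
```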
### Data Fields
- `scene_label`: acoustic scene label from the 10 class set,
- `identifier`: city-location id 'barcelona-0',
- `source_label`: device id, for this dataset it is always the same 'a'.
Filenames of the dataset have the following pattern:
[scene label]-[city]-[location id]-[segment id]-[device id].wav
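
As a small illustration, the pattern can be split back into its fields with plain string handling. The sketch below assumes scene labels and city names never contain a hyphen, which holds for the 10 classes and 6 cities in this dataset.

```python
import os

def parse_tut_filename(path):
    """Split '[scene label]-[city]-[location id]-[segment id]-[device id].wav' into its parts."""
    stem = os.path.splitext(os.path.basename(path))[0]
    scene_label, city, location_id, segment_id, device_id = stem.split("-")
    return {
        "scene_label": scene_label,
        "city": city,
        "location_id": int(location_id),
        "segment_id": int(segment_id),
        "device_id": device_id,
    }

print(parse_tut_filename("/data/airport-barcelona-0-0-a.wav"))
# {'scene_label': 'airport', 'city': 'barcelona', 'location_id': 0, 'segment_id': 0, 'device_id': 'a'}
```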
### Data Splits
A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.
| Scene class | Train / Segments | Train / Locations | Test / Segments | Test / Locations |
| ------------------ | ---------------- | ----------------- | --------------- | ---------------- |
| Airport | 599 | 15 | 265 | 7 |
| Bus | 622 | 26 | 242 | 10 |
| Metro | 603 | 20 | 261 | 9 |
| Metro station | 605 | 28 | 259 | 12 |
| Park | 622 | 18 | 242 | 7 |
| Public square | 648 | 18 | 216 | 6 |
| Shopping mall | 585 | 16 | 279 | 6 |
| Street, pedestrian | 617 | 20 | 247 | 8 |
| Street, traffic | 618 | 18 | 246 | 7 |
| Tram | 603 | 24 | 261 | 11 |
| **Total** | **6122** | **203** | **2518** | **83** |
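
The official train/test lists ship with the Zenodo release. As a hedged sketch of the same idea (keeping every segment from one recording location in a single subset), scikit-learn's `GroupShuffleSplit` can build an approximate 70/30 split from the `identifier` field; it will not reproduce the official partition exactly.

```python
from sklearn.model_selection import GroupShuffleSplit

# `ds` is the loaded Hugging Face dataset from the sketch above (an assumption)
groups = ds["identifier"]      # e.g. 'barcelona-0' = city + recording location
labels = ds["scene_label"]

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(groups, labels, groups=groups))

train_ds, test_ds = ds.select(train_idx), ds.select(test_idx)
print(len(train_ds), len(test_ds))
```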
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.
The equipment used for recording consists of a binaural [Soundman OKM II Klassik/studio A3](http://www.soundman.de/en/products/) electret in-ear microphone and a [Zoom F8](https://www.zoom.co.jp/products/handy-recorder/zoom-f8-multitrack-field-recorder) audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.
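
Since the recordings are 48 kHz / 24-bit, pipelines that expect 16 kHz input need a resampling step. With the `datasets` copy shown earlier this can be done lazily at decode time; a sketch, reusing the `ds` object from that example.

```python
from datasets import Audio

# re-decode the 48 kHz recordings at 16 kHz on access, without rewriting any files
ds_16k = ds.cast_column("audio", Audio(sampling_rate=16000))
print(ds_16k[0]["audio"]["sampling_rate"])  # 16000
```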
### Annotations
#### Annotation process
Post-processing of the recorded audio involves aspects related to the privacy of recorded individuals and possible errors in the recording process. Some interference from mobile phones is audible, but it is considered part of the real-world recording process.
#### Who are the annotators?
* Ronal Bejarano Rodriguez
* Eemi Fagerlund
* Aino Koskimies
* Toni Heittola
### Personal and Sensitive Information
The material was screened for content, and segments containing close microphone conversation were eliminated.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Toni Heittola ([email protected], http://www.cs.tut.fi/~heittolt/)
Annamaria Mesaros ([email protected], http://www.cs.tut.fi/~mesaros/)
Tuomas Virtanen ([email protected], http://www.cs.tut.fi/~tuomasv/)
### Licensing Information
Copyright (c) 2018 Tampere University of Technology and its licensors
All rights reserved.
Permission is hereby granted, without written agreement and without license or royalty
fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document
and composed of audio and metadata. This grant is only for experimental and non-commercial
purposes, provided that the copyright notice in its entirety appear in all copies of this Work,
and the original source of this Work, (Audio Research Group from Laboratory of Signal
Processing at Tampere University of Technology),
is acknowledged in any publication that reports research using this Work.
Any commercial use of the Work or any part thereof is strictly prohibited.
Commercial use include, but is not limited to:
- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE
OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS
LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND
THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,
UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
### Citation Information
[](https://doi.org/10.5281/zenodo.1228142)
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
wetdog/TUT-urban-acoustic-scenes-2018-development
|
[
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"license:afl-3.0",
"region:us"
] |
2023-08-17T15:14:41+00:00
|
{"license": "afl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification"], "dataset_info": {"features": [{"name": "scene_label", "dtype": "string"}, {"name": "identifier", "dtype": "string"}, {"name": "source_label", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 24883936611.28, "num_examples": 8640}], "download_size": 24885037396, "dataset_size": 24883936611.28}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-18T23:08:29+00:00
|
[] |
[] |
TAGS
#task_categories-audio-classification #size_categories-1K<n<10K #license-afl-3.0 #region-us
|
Dataset Card for "TUT-urban-acoustic-scenes-2018-development"
=============================================================
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper:
* Leaderboard:
* Point of Contact: Toni Heittola (toni.heittola@URL, URL
### Dataset Summary
TUT Urban Acoustic Scenes 2018 development dataset consists of 10-seconds audio segments from 10 acoustic scenes:
```
Airport - airport
Indoor shopping mall - shopping_mall
Metro station - metro_station
Pedestrian street - street_pedestrian
Public square - public_square
Street with medium level of traffic - street_traffic
Travelling by a tram - tram
Travelling by a bus - bus
Travelling by an underground metro - metro
Urban park - park
```
Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains in total 24 hours of audio.
The dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.
The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.
### Supported Tasks and Leaderboards
* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.
* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard
* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Dataset Structure
-----------------
### Data Instances
### Data Fields
* 'scene\_label': acoustic scene label from the 10 class set,
* 'identifier': city-location id 'barcelona-0',
* 'source\_label: device id, for this dataset is always the same 'a',
Filenames of the dataset have the following pattern:
[scene label]-[city]-[location id]-[segment id]-[device id].wav
### Data Splits
A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.
Dataset Creation
----------------
### Source Data
#### Initial Data Collection and Normalization
The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.
The equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.
### Annotations
#### Annotation process
Post-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process.
#### Who are the annotators?
* Ronal Bejarano Rodriguez
* Eemi Fagerlund
* Aino Koskimies
* Toni Heittola
### Personal and Sensitive Information
The material was screened for content, and segments containing close microphone conversation were eliminated.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Toni Heittola (toni.heittola@URL, URL
Annamaria Mesaros (annamaria.mesaros@URL, URL
Tuomas Virtanen (tuomas.virtanen@URL, URL
### Licensing Information
Copyright (c) 2018 Tampere University of Technology and its licensors
All rights reserved.
Permission is hereby granted, without written agreement and without license or royalty
fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document
and composed of audio and metadata. This grant is only for experimental and non-commercial
purposes, provided that the copyright notice in its entirety appear in all copies of this Work,
and the original source of this Work, (Audio Research Group from Laboratory of Signal
Processing at Tampere University of Technology),
is acknowledged in any publication that reports research using this Work.
Any commercial use of the Work or any part thereof is strictly prohibited.
Commercial use include, but is not limited to:
* selling or reproducing the Work
* selling or distributing the results or content achieved by use of the Work
* providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE
OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS
LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND
THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,
UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
. The dataset contains in total 24 hours of audio.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.",
"### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'scene\\_label': acoustic scene label from the 10 class set,\n* 'identifier': city-location id 'barcelona-0',\n* 'source\\_label: device id, for this dataset is always the same 'a',\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav",
"### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.",
"### Annotations",
"#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process.",
"#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola",
"### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL",
"### Licensing Information\n\n\nCopyright (c) 2018 Tampere University of Technology and its licensors\nAll rights reserved.\nPermission is hereby granted, without written agreement and without license or royalty\nfees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document\nand composed of audio and metadata. This grant is only for experimental and non-commercial\npurposes, provided that the copyright notice in its entirety appear in all copies of this Work,\nand the original source of this Work, (Audio Research Group from Laboratory of Signal\nProcessing at Tampere University of Technology),\nis acknowledged in any publication that reports research using this Work.\nAny commercial use of the Work or any part thereof is strictly prohibited.\nCommercial use include, but is not limited to:\n\n\n* selling or reproducing the Work\n* selling or distributing the results or content achieved by use of the Work\n* providing services by using the Work.\n\n\nIN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY\nFOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE\nOF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS\nLICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY\nWARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN \"AS IS\" BASIS, AND\nTHE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,\nUPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\n\n. The dataset contains in total 24 hours of audio.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.",
"### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* 'scene\\_label': acoustic scene label from the 10 class set,\n* 'identifier': city-location id 'barcelona-0',\n* 'source\\_label: device id, for this dataset is always the same 'a',\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav",
"### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.",
"### Annotations",
"#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process.",
"#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola",
"### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL",
"### Licensing Information\n\n\nCopyright (c) 2018 Tampere University of Technology and its licensors\nAll rights reserved.\nPermission is hereby granted, without written agreement and without license or royalty\nfees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document\nand composed of audio and metadata. This grant is only for experimental and non-commercial\npurposes, provided that the copyright notice in its entirety appear in all copies of this Work,\nand the original source of this Work, (Audio Research Group from Laboratory of Signal\nProcessing at Tampere University of Technology),\nis acknowledged in any publication that reports research using this Work.\nAny commercial use of the Work or any part thereof is strictly prohibited.\nCommercial use include, but is not limited to:\n\n\n* selling or reproducing the Work\n* selling or distributing the results or content achieved by use of the Work\n* providing services by using the Work.\n\n\nIN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY\nFOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE\nOF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS\nLICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY\nWARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN \"AS IS\" BASIS, AND\nTHE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,\nUPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n\n\n. The dataset contains in total 24 hours of audio.\n\n\nThe dataset was collected in Finland by Tampere University of Technology between 02/2018 - 03/2018.\nThe data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.### Supported Tasks and Leaderboards\n\n\n* 'audio-classification': The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* metric name.\n* The (model name or model class) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard\n* which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields\n\n\n* 'scene\\_label': acoustic scene label from the 10 class set,\n* 'identifier': city-location id 'barcelona-0',\n* 'source\\_label: device id, for this dataset is always the same 'a',\n\n\nFilenames of the dataset have the following pattern:\n\n\n[scene label]-[city]-[location id]-[segment id]-[device id].wav",
"passage: ### Data Splits\n\n\nA suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that the segments recorded at the same location are included into the same subset - either training or testing. The partitioning is done aiming for a 70/30 ratio between the number of segments in training and test subsets while taking into account recording locations, and selecting the closest available option.\n\n\n\nDataset Creation\n----------------### Source Data#### Initial Data Collection and Normalization\n\n\nThe dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. Collected audio material was cut into segments of 10 seconds length.\n\n\nThe equipment used for recording consists of a binaural Soundman OKM II Klassik/studio A3 electret in-ear microphone and a Zoom F8 audio recorder using 48 kHz sampling rate and 24 bit resolution. During the recording, the microphones were worn by the recording person in the ears, and head movement was kept to minimum.### Annotations#### Annotation process\n\n\nPost-processing of the recorded audio involves aspects related to privacy of recorded individuals, and possible errors in the recording process. Some interferences from mobile phones are audible, but are considered part of real-world recording process.#### Who are the annotators?\n\n\n* Ronal Bejarano Rodriguez\n* Eemi Fagerlund\n* Aino Koskimies\n* Toni Heittola### Personal and Sensitive Information\n\n\nThe material was screened for content, and segments containing close microphone conversation were eliminated.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nToni Heittola (toni.heittola@URL, URL\nAnnamaria Mesaros (annamaria.mesaros@URL, URL\nTuomas Virtanen (tuomas.virtanen@URL, URL"
] |
075bdaa040067084426d5828d29e09990b0824ad
|
# Dataset Card for "ToolBench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Maurus/ToolBench
|
[
"region:us"
] |
2023-08-17T15:18:31+00:00
|
{"dataset_info": {"features": [{"name": "api_list", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "query_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "embedding", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 606404614, "num_examples": 88895}], "download_size": 347748862, "dataset_size": 606404614}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T16:40:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ToolBench"
More Information needed
|
[
"# Dataset Card for \"ToolBench\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ToolBench\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ToolBench\"\n\nMore Information needed"
] |
f4c91c610444eb584db82d93ab46bf3b9f8f103a
|
# Dataset Card for "za1a11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/za1a11
|
[
"region:us"
] |
2023-08-17T15:19:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 748135, "num_examples": 3250}], "download_size": 213070, "dataset_size": 748135}}
|
2023-08-17T15:19:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "za1a11"
More Information needed
|
[
"# Dataset Card for \"za1a11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"za1a11\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"za1a11\"\n\nMore Information needed"
] |
1827e42d3e3f9cf07bc241bab91750e2137c014e
|
# Dataset Card for "za1aaaaa11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/za1aaaaa11
|
[
"region:us"
] |
2023-08-17T15:23:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:23:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "za1aaaaa11"
More Information needed
|
[
"# Dataset Card for \"za1aaaaa11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"za1aaaaa11\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"za1aaaaa11\"\n\nMore Information needed"
] |
a8be4ab52d8e91ea1bb86b9bc5d25c2bc04e92d2
|
# Dataset Card for "za1aaaa11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/za1aaaa11
|
[
"region:us"
] |
2023-08-17T15:25:32+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:32:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "za1aaaa11"
More Information needed
|
[
"# Dataset Card for \"za1aaaa11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"za1aaaa11\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"za1aaaa11\"\n\nMore Information needed"
] |
c9f5c3474b36b1b72ce7647dd857560a0d7d499b
|
# Dataset of mononobe_no_futo/物部布都/모노노베노후토 (Touhou)
This is the dataset of mononobe_no_futo/物部布都/모노노베노후토 (Touhou), containing 500 images and their tags.
The core tags of this character are `ponytail, hat, grey_hair, long_hair, blue_eyes, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 493.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mononobe_no_futo_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 340.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mononobe_no_futo_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1087 | 655.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mononobe_no_futo_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 462.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mononobe_no_futo_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1087 | 832.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mononobe_no_futo_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mononobe_no_futo_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, kariginu, long_sleeves, ribbon-trimmed_sleeves, simple_background, solo, tate_eboshi, wide_sleeves, blue_skirt, open_mouth, pom_pom_(clothes), looking_at_viewer, :d, boots, full_body, white_background, blue_headwear, bangs, grey_eyes |
| 1 | 7 |  |  |  |  |  | 1girl, blue_headwear, kariginu, long_sleeves, looking_at_viewer, pom_pom_(clothes), ribbon-trimmed_sleeves, simple_background, solo, tate_eboshi, wide_sleeves, bangs, blue_skirt, hair_between_eyes, white_background, closed_mouth, :d, blush, frills, open_mouth |
| 2 | 16 |  |  |  |  |  | 1girl, skirt, solo, tate_eboshi, wide_sleeves, grey_eyes, kariginu, open_mouth, smile |
| 3 | 7 |  |  |  |  |  | 1girl, kariginu, skirt, smile, solo, tate_eboshi, wide_sleeves, long_sleeves, open_mouth, white_hair, blush |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | kariginu | long_sleeves | ribbon-trimmed_sleeves | simple_background | solo | tate_eboshi | wide_sleeves | blue_skirt | open_mouth | pom_pom_(clothes) | looking_at_viewer | :d | boots | full_body | white_background | blue_headwear | bangs | grey_eyes | hair_between_eyes | closed_mouth | blush | frills | skirt | smile | white_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:---------------|:-------------------------|:--------------------|:-------|:--------------|:---------------|:-------------|:-------------|:--------------------|:--------------------|:-----|:--------|:------------|:-------------------|:----------------|:--------|:------------|:--------------------|:---------------|:--------|:---------|:--------|:--------|:-------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | X | X | | X | X | X | X | | | |
| 2 | 16 |  |  |  |  |  | X | X | | | | X | X | X | | X | | | | | | | | | X | | | | | X | X | |
| 3 | 7 |  |  |  |  |  | X | X | X | | | X | X | X | | X | | | | | | | | | | | | X | | X | X | X |
|
CyberHarem/mononobe_no_futo_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T15:28:57+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T14:55:36+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of mononobe\_no\_futo/物部布都/모노노베노후토 (Touhou)
===================================================
This is the dataset of mononobe\_no\_futo/物部布都/모노노베노후토 (Touhou), containing 500 images and their tags.
The core tags of this character are 'ponytail, hat, grey\_hair, long\_hair, blue\_eyes, ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7465663d3ac20ccb7f438d5dd659776e64c5f66e
|
# Dataset of kochiya_sanae/東風谷早苗/코치야사나에 (Touhou)
This is the dataset of kochiya_sanae/東風谷早苗/코치야사나에 (Touhou), containing 500 images and their tags.
The core tags of this character are `green_hair, long_hair, hair_ornament, frog_hair_ornament, snake_hair_ornament, breasts, green_eyes, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 782.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kochiya_sanae_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 450.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kochiya_sanae_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1231 | 936.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kochiya_sanae_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 697.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kochiya_sanae_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1231 | 1.28 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kochiya_sanae_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kochiya_sanae_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, bare_shoulders, blue_skirt, collared_shirt, detached_sleeves, hair_tubes, looking_at_viewer, solo, white_shirt, wide_sleeves, simple_background, white_background, blush, nontraditional_miko, smile, large_breasts, sleeveless_shirt, closed_mouth, blue_eyes, hair_between_eyes, open_mouth, upper_body |
| 1 | 8 |  |  |  |  |  | 1girl, detached_sleeves, hair_tubes, smile, solo, gohei, looking_at_viewer, wide_sleeves, long_sleeves, shirt, blue_eyes, blue_skirt, holding, navel |
| 2 | 19 |  |  |  |  |  | 1girl, alternate_costume, serafuku, solo, looking_at_viewer, hair_tubes, smile, blue_eyes, neckerchief, pleated_skirt, short_sleeves, white_background, simple_background, open_mouth |
| 3 | 7 |  |  |  |  |  | 1girl, blush, enmaided, looking_at_viewer, maid_headdress, solo, frills, maid_apron, hair_tubes, black_dress, blue_eyes, hair_between_eyes, open_mouth, short_sleeves, smile, black_thighhighs, blurry_background, large_breasts, long_sleeves, puffy_sleeves, simple_background, standing, white_apron, white_background |
| 4 | 9 |  |  |  |  |  | 1girl, solo, cleavage, day, navel, large_breasts, water, blush, looking_at_viewer, outdoors, smile, cloud, ocean, open_mouth, blue_eyes, blue_sky, side-tie_bikini_bottom, wading, white_bikini |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | blue_skirt | collared_shirt | detached_sleeves | hair_tubes | looking_at_viewer | solo | white_shirt | wide_sleeves | simple_background | white_background | blush | nontraditional_miko | smile | large_breasts | sleeveless_shirt | closed_mouth | blue_eyes | hair_between_eyes | open_mouth | upper_body | gohei | long_sleeves | shirt | holding | navel | alternate_costume | serafuku | neckerchief | pleated_skirt | short_sleeves | enmaided | maid_headdress | frills | maid_apron | black_dress | black_thighhighs | blurry_background | puffy_sleeves | standing | white_apron | cleavage | day | water | outdoors | cloud | ocean | blue_sky | side-tie_bikini_bottom | wading | white_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------------|:-----------------|:-------------------|:-------------|:--------------------|:-------|:--------------|:---------------|:--------------------|:-------------------|:--------|:----------------------|:--------|:----------------|:-------------------|:---------------|:------------|:--------------------|:-------------|:-------------|:--------|:---------------|:--------|:----------|:--------|:--------------------|:-----------|:--------------|:----------------|:----------------|:-----------|:-----------------|:---------|:-------------|:--------------|:-------------------|:--------------------|:----------------|:-----------|:--------------|:-----------|:------|:--------|:-----------|:--------|:--------|:-----------|:-------------------------|:---------|:---------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | | X | X | X | X | | X | | | | | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 19 |  |  |  |  |  | X | | | | | X | X | X | | | X | X | | | X | | | | X | | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | | | X | X | X | | | X | X | X | | X | X | | | X | X | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | | | | | | X | X | | | | | X | | X | X | | | X | | X | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kochiya_sanae_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T15:34:38+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:24:09+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kochiya\_sanae/東風谷早苗/코치야사나에 (Touhou)
===============================================
This is the dataset of kochiya\_sanae/東風谷早苗/코치야사나에 (Touhou), containing 500 images and their tags.
The core tags of this character are 'green\_hair, long\_hair, hair\_ornament, frog\_hair\_ornament, snake\_hair\_ornament, breasts, green\_eyes, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d94b795db339820877a5117b9dcdf23087548f0e
|
A simple neural network training dataset for basic True-False fact checking, derived from the FEVEROUS dataset.
The original dataset can be found here:
https://fever.ai/dataset/feverous.html
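Below is a minimal, hedged sketch for pulling the files locally with `huggingface_hub`. No loader configuration is published for this repository, so the snippet simply mirrors the repo so the files can be inspected directly; the repo id is taken from this page and everything else is standard `huggingface_hub` usage.
```python
from huggingface_hub import snapshot_download

# Mirror the whole dataset repository into the local Hugging Face cache.
local_dir = snapshot_download(
    repo_id="Bulutthecat/FactfulDataset",
    repo_type="dataset",
)

# Inspect the downloaded files to see how the True/False examples are stored.
print(local_dir)
```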
|
Bulutthecat/FactfulDataset
|
[
"region:us"
] |
2023-08-17T15:38:06+00:00
|
{}
|
2023-08-17T20:31:08+00:00
|
[] |
[] |
TAGS
#region-us
|
A simple neural network training dataset for basic True-False fact checking, derived from the FEVEROUS dataset.
The original dataset can be found here:
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
a3a402b5666ee85c639a2a3216c4995ccf64c778
|
# Dataset Card for "za1a11111111111111"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/za1a11111111111111
|
[
"region:us"
] |
2023-08-17T15:38:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 748135, "num_examples": 3250}], "download_size": 213070, "dataset_size": 748135}}
|
2023-08-17T15:38:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "za1a11111111111111"
More Information needed
|
[
"# Dataset Card for \"za1a11111111111111\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"za1a11111111111111\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"za1a11111111111111\"\n\nMore Information needed"
] |
95f531c9143bab681c8854954b035bcccce412dc
|
# Dataset Card for "hh-rrhf-dahoas-gptj-rm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
philschmid/hh-rrhf-dahoas-gptj-rm
|
[
"region:us"
] |
2023-08-17T15:40:44+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "responses", "sequence": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 209649469, "num_examples": 160736}], "download_size": 120585884, "dataset_size": 209649469}}
|
2023-08-18T14:54:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "hh-rrhf-dahoas-gptj-rm"
More Information needed
|
[
"# Dataset Card for \"hh-rrhf-dahoas-gptj-rm\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"hh-rrhf-dahoas-gptj-rm\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"hh-rrhf-dahoas-gptj-rm\"\n\nMore Information needed"
] |
396d2ee84d6ff12698ce93b87f93be4781429784
|
# Dataset Card for "zaaaa11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/zaaaa11
|
[
"region:us"
] |
2023-08-17T15:42:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 0, "dataset_size": 46270}}
|
2023-08-17T15:43:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "zaaaa11"
More Information needed
|
[
"# Dataset Card for \"zaaaa11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"zaaaa11\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"zaaaa11\"\n\nMore Information needed"
] |
13b379b578ac96454426151d388f18808904eb21
|
# Dataset Card for "1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/1
|
[
"region:us"
] |
2023-08-17T15:43:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:43:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "1"
More Information needed
|
[
"# Dataset Card for \"1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"1\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"1\"\n\nMore Information needed"
] |
b2a556fe562e9d9c0c1c700321b0b48a11a17ded
|
# Protein Data Stability - Single Mutation
This repository contains data on the change in protein stability with a single mutation.
## Attribution of Data Sources
- **Primary Source**: Tsuboyama, K., Dauparas, J., Chen, J. et al. Mega-scale experimental analysis of protein folding stability in biology and design. Nature 620, 434–444 (2023). [Link to the paper](https://www.nature.com/articles/s41586-023-06328-6)
- **Dataset Link**: [Zenodo Record](https://zenodo.org/record/7992926)
Within that broader work, the relevant dataset (#3) is identified in `dataset_table.jpeg`, included in this repository's files.
## Sample Protein Stability Data [subset of 4 columns]
| Base Protein Sequence | Mutation | ΔΔG_ML | Classification |
|-------------------------------------------------------------|----------|--------------------|-----------------|
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63W | -0.2010871345320799 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63Y | 0.0194756159891467 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63F | 0.7231614929744659 | stabilising |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63P | -0.3668887752897785 | neutral |
| FDIYVVTADYLPLGAEQDAITLREGQYVEVLDAAHPLRWLVRTKPTKSSPSRQGWVSPAYLDRRL | R63C | -0.5317304030261774 | destabilising |
## Dataset Structure
This dataset focuses on the differential deltaG of *unfolding* (mutation minus base) for various protein mutations, derived from stability measurements (free energy of unfolding) obtained with two proteases, trypsin and chymotrypsin.
### Columns (Trypsin):
- **name**: The name of the protein variant.
- **dna_seq**: The DNA sequence encoding the protein variant.
- **log10_K50_t**: The log10 of the K50 value measured with trypsin (a measure of stability).
- **log10_K50_t_95CI_high**: The upper bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI_low**: The lower bound of the 95% confidence interval for log10_K50_t.
- **log10_K50_t_95CI**: The width of the 95% confidence interval for log10_K50_t.
- **fitting_error_t**: A measure of error between the model and data for trypsin.
- **log10_K50unfolded_t**: The predicted log10 K50 value for the unfolded state with trypsin.
- **deltaG_t**: The ΔG stability calculated from the trypsin data.
- **deltaG_t_95CI_high**: The upper bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI_low**: The lower bound of the ΔG confidence interval from trypsin.
- **deltaG_t_95CI**: The width of the ΔG confidence interval from trypsin.
### Columns (Chymotrypsin):
- **log10_K50_c**: Analogous to `log10_K50_t`, but for chymotrypsin.
- **log10_K50_c_95CI_high**: Upper bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI_low**: Lower bound of the 95% CI for `log10_K50_c`.
- **log10_K50_c_95CI**: Width of the 95% CI for `log10_K50_c`.
- **fitting_error_c**: A measure of error between the model and data for chymotrypsin.
- **log10_K50unfolded_c**: Predicted log10 K50 value for the unfolded state with chymotrypsin.
- **deltaG_c**: ΔG stability calculated from the chymotrypsin data.
- **deltaG_c_95CI_high**: Upper bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI_low**: Lower bound of the ΔG CI from chymotrypsin.
- **deltaG_c_95CI**: Width of the ΔG CI from chymotrypsin.
### Combined Data:
- **deltaG**: The combined ΔG estimate from both trypsin and chymotrypsin.
- **deltaG_95CI_high**: Upper bound of the combined ΔG confidence interval.
- **deltaG_95CI_low**: Lower bound of the combined ΔG confidence interval.
- **deltaG_95CI**: Width of the combined ΔG confidence interval.
### Protein Sequencing Data:
- **aa_seq_full**: The full amino acid sequence.
- **aa_seq**: A (sometimes shortened) amino acid sequence representing the protein.
- **mut_type**: The type of mutation introduced to the protein.
- **WT_name**: Name of the wild type variant.
- **WT_cluster**: Cluster classification for the wild type variant.
- **mutation**: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in a sequence to N).
- **base_aa_seq**: The base sequence of the protein before the mutation.
### Derived Data:
- **log10_K50_trypsin_ML**: Log10 value of K50 derived from a machine learning model using trypsin data.
- **log10_K50_chymotrypsin_ML**: Log10 value of K50 derived from a machine learning model using chymotrypsin data.
- **dG_ML**: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.
- **ddG_ML**: Differential ΔG (mutation minus base) derived from a machine learning model.
### Classification:
- **Stabilizing_mut**: Indicates whether the mutation is stabilizing or not.
- **pair_name**: Name representation combining the wild type and mutation.
- **classification**: Classification based on `ddG_ML`:
- Rows below -0.5 standard deviations are classified as 'destabilising'.
- Rows above +0.5 standard deviations are classified as 'stabilising'.
- Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.
This dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.
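As an illustration of the thresholding rule above, here is a small sketch that assumes the data is loaded into a pandas DataFrame with a `ddG_ML` column and that the ±0.5 standard-deviation cutoffs are taken around zero; the original curation may have centred the cutoffs on the mean instead, and the CSV file name is hypothetical.
```python
import numpy as np
import pandas as pd

df = pd.read_csv("single_mutation_stability.csv")  # hypothetical file name

# Classify each mutation by where its ddG_ML falls relative to +/- 0.5 standard deviations.
std = df["ddG_ML"].std()
df["classification"] = np.select(
    [df["ddG_ML"] < -0.5 * std, df["ddG_ML"] > 0.5 * std],
    ["destabilising", "stabilising"],
    default="neutral",
)
print(df["classification"].value_counts())
```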
### Understanding ΔG (delta G)
ΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:
- **Negative ΔG**: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.
- **Positive ΔG**: Indicates the process is not energetically favorable. In protein unfolding, it means an input of energy is required to unfold the protein, i.e. it is stable in its folded form.
The **delta delta G** (ΔΔG) is the difference in unfolding ΔG between the mutant and the base protein (mutation minus base):
- **Positive ΔΔG**: Suggests the mutation enhances protein stability.
- **Negative ΔΔG**: Suggests the mutation decreases protein stability.
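For example, if the base protein unfolds with ΔG = 3.0 and a mutant unfolds with ΔG = 2.2, then ΔΔG = 2.2 − 3.0 = −0.8, i.e. the mutation falls on the destabilising side (the numbers are purely illustrative; use whatever energy units the ΔG columns carry).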
### Data Cleanup and Validation:
1. Filtering: The dataset has been curated to only include examples of single mutations.
2. Sequence mutations were extracted from the row names. Base mutations are labelled as 'base'.
3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained (a sketch of this check is shown below).
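The following is a minimal sketch of such a check, assuming a 1-indexed `F10N`-style `mutation` column and directly comparable `base_aa_seq`/`aa_seq` columns; the actual curation script is not part of this repository and may differ (for example, in how shortened `aa_seq` entries are handled), and the CSV file name is hypothetical.
```python
import re
import pandas as pd

def apply_mutation(base_seq, mutation):
    """Apply a single point mutation such as 'F10N' (1-indexed) to base_seq, or return None."""
    match = re.fullmatch(r"([A-Z])(\d+)([A-Z])", mutation)
    if match is None:
        return None
    wild_type, position, new_aa = match.group(1), int(match.group(2)), match.group(3)
    if not (1 <= position <= len(base_seq)) or base_seq[position - 1] != wild_type:
        return None  # stated wild-type residue does not match the base sequence
    return base_seq[: position - 1] + new_aa + base_seq[position:]

df = pd.read_csv("single_mutation_stability.csv")  # hypothetical file name
is_consistent = df.apply(
    lambda row: apply_mutation(row["base_aa_seq"], row["mutation"]) == row["aa_seq"],
    axis=1,
)
df = df[is_consistent]
```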
|
Trelis/protein_stability_single_mutation
|
[
"task_categories:question-answering",
"task_categories:tabular-classification",
"task_categories:text-generation",
"size_categories:100K<1M",
"language:en",
"biology",
"proteins",
"amino-acids",
"region:us"
] |
2023-08-17T15:43:47+00:00
|
{"language": ["en"], "size_categories": ["100K<1M"], "task_categories": ["question-answering", "tabular-classification", "text-generation"], "tags": ["biology", "proteins", "amino-acids"]}
|
2023-08-21T19:47:40+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #task_categories-tabular-classification #task_categories-text-generation #size_categories-100K<1M #language-English #biology #proteins #amino-acids #region-us
|
Protein Data Stability - Single Mutation
========================================
This repository contains data on the change in protein stability with a single mutation.
Attribution of Data Sources
---------------------------
* Primary Source: Tsuboyama, K., Dauparas, J., Chen, J. et al. Mega-scale experimental analysis of protein folding stability in biology and design. Nature 620, 434–444 (2023). Link to the paper
* Dataset Link: Zenodo Record
As to where the dataset comes from in this broader work, the relevant dataset (#3) is shown in 'dataset\_table.jpeg' of this repository's files.
Sample Protein Stability Data [subset of 4 columns]
---------------------------------------------------
Dataset Structure
-----------------
This dataset focuses on the differential deltaG of *unfolding* (mutation minus base) of various protein mutations and is derived from stability measurements (free energy of unfolding) measured by two proteases, trypsin and chymotrypsin.
### Columns (Trypsin):
* name: The name of the protein variant.
* dna\_seq: The DNA sequence encoding the protein variant.
* log10\_K50\_t: The log10 of the K50 value measured with trypsin (a measure of stability).
* log10\_K50\_t\_95CI\_high: The upper bound of the 95% confidence interval for log10\_K50\_t.
* log10\_K50\_t\_95CI\_low: The lower bound of the 95% confidence interval for log10\_K50\_t.
* log10\_K50\_t\_95CI: The width of the 95% confidence interval for log10\_K50\_t.
* fitting\_error\_t: A measure of error between the model and data for trypsin.
* log10\_K50unfolded\_t: The predicted log10 K50 value for the unfolded state with trypsin.
* deltaG\_t: The ΔG stability calculated from the trypsin data.
* deltaG\_t\_95CI\_high: The upper bound of the ΔG confidence interval from trypsin.
* deltaG\_t\_95CI\_low: The lower bound of the ΔG confidence interval from trypsin.
* deltaG\_t\_95CI: The width of the ΔG confidence interval from trypsin.
### Columns (Chymotrypsin):
* log10\_K50\_c: Analogous to 'log10\_K50\_t', but for chymotrypsin.
* log10\_K50\_c\_95CI\_high: Upper bound of the 95% CI for 'log10\_K50\_c'.
* log10\_K50\_c\_95CI\_low: Lower bound of the 95% CI for 'log10\_K50\_c'.
* log10\_K50\_c\_95CI: Width of the 95% CI for 'log10\_K50\_c'.
* fitting\_error\_c: A measure of error between the model and data for chymotrypsin.
* log10\_K50unfolded\_c: Predicted log10 K50 value for the unfolded state with chymotrypsin.
* deltaG\_c: ΔG stability calculated from the chymotrypsin data.
* deltaG\_c\_95CI\_high: Upper bound of the ΔG CI from chymotrypsin.
* deltaG\_c\_95CI\_low: Lower bound of the ΔG CI from chymotrypsin.
* deltaG\_c\_95CI: Width of the ΔG CI from chymotrypsin.
### Combined Data:
* deltaG: The combined ΔG estimate from both trypsin and chymotrypsin.
* deltaG\_95CI\_high: Upper bound of the combined ΔG confidence interval.
* deltaG\_95CI\_low: Lower bound of the combined ΔG confidence interval.
* deltaG\_95CI: Width of the combined ΔG confidence interval.
### Protein Sequencing Data:
* aa\_seq\_full: The full amino acid sequence.
* aa\_seq: A (sometimes shortened) amino acid sequence representing the protein.
* mut\_type: The type of mutation introduced to the protein.
* WT\_name: Name of the wild type variant.
* WT\_cluster: Cluster classification for the wild type variant.
* mutation: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in a sequence for N).
* base\_aa\_seq: The base sequence of the protein before the mutation.
### Derived Data:
* log10\_K50\_trypsin\_ML: Log10 value of K50 derived from a machine learning model using trypsin data.
* log10\_K50\_chymotrypsin\_ML: Log10 value of K50 derived from a machine learning model using chymotrypsin data.
* dG\_ML: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.
* ddG\_ML: Differential ΔG (mutation minus base) derived from a machine learning model.
### Classification:
* Stabilizing\_mut: Indicates whether the mutation is stabilizing or not.
* pair\_name: Name representation combining the wild type and mutation.
* classification: Classification based on 'ddG\_ML':
+ Rows below -0.5 standard deviations are classified as 'destabilising'.
+ Rows above +0.5 standard deviations are classified as 'stabilising'.
+ Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.
This dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.
### Understanding ΔG (delta G)
ΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:
* Negative ΔG: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.
* Positive ΔG: Indicates the process is not energetically favorable. In protein unfolding, it means the protein requires energy to maintain its unfolded state, i.e. it is stable in folded form.
The delta delta G (ΔΔG) represents the deltaG of the mutation compared to the base protein:
* Positive ΔΔG: Suggests the mutation enhances protein stability.
* Negative ΔΔG: Suggests the mutation decreases protein stability.
### Data Cleanup and Validation:
1. Filtering: The dataset has been curated to only include examples of single mutations.
2. Sequence mutations were extracted from the row names. Base mutations are labelled as 'base'.
3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained.
|
[
"### Columns (Trypsin):\n\n\n* name: The name of the protein variant.\n* dna\\_seq: The DNA sequence encoding the protein variant.\n* log10\\_K50\\_t: The log10 of the K50 value measured with trypsin (a measure of stability).\n* log10\\_K50\\_t\\_95CI\\_high: The upper bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI\\_low: The lower bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI: The width of the 95% confidence interval for log10\\_K50\\_t.\n* fitting\\_error\\_t: A measure of error between the model and data for trypsin.\n* log10\\_K50unfolded\\_t: The predicted log10 K50 value for the unfolded state with trypsin.\n* deltaG\\_t: The ΔG stability calculated from the trypsin data.\n* deltaG\\_t\\_95CI\\_high: The upper bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI\\_low: The lower bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI: The width of the ΔG confidence interval from trypsin.",
"### Columns (Chymotrypsin):\n\n\n* log10\\_K50\\_c: Analogous to 'log10\\_K50\\_t', but for chymotrypsin.\n* log10\\_K50\\_c\\_95CI\\_high: Upper bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI\\_low: Lower bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI: Width of the 95% CI for 'log10\\_K50\\_c'.\n* fitting\\_error\\_c: A measure of error between the model and data for chymotrypsin.\n* log10\\_K50unfolded\\_c: Predicted log10 K50 value for the unfolded state with chymotrypsin.\n* deltaG\\_c: ΔG stability calculated from the chymotrypsin data.\n* deltaG\\_c\\_95CI\\_high: Upper bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI\\_low: Lower bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI: Width of the ΔG CI from chymotrypsin.",
"### Combined Data:\n\n\n* deltaG: The combined ΔG estimate from both trypsin and chymotrypsin.\n* deltaG\\_95CI\\_high: Upper bound of the combined ΔG confidence interval.\n* deltaG\\_95CI\\_low: Lower bound of the combined ΔG confidence interval.\n* deltaG\\_95CI: Width of the combined ΔG confidence interval.",
"### Protein Sequencing Data:\n\n\n* aa\\_seq\\_full: The full amino acid sequence.\n* aa\\_seq: A (sometimes shortened) amino acid sequence representing the protein.\n* mut\\_type: The type of mutation introduced to the protein.\n* WT\\_name: Name of the wild type variant.\n* WT\\_cluster: Cluster classification for the wild type variant.\n* mutation: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in a sequence for N).\n* base\\_aa\\_seq: The base sequence of the protein before the mutation.",
"### Derived Data:\n\n\n* log10\\_K50\\_trypsin\\_ML: Log10 value of K50 derived from a machine learning model using trypsin data.\n* log10\\_K50\\_chymotrypsin\\_ML: Log10 value of K50 derived from a machine learning model using chymotrypsin data.\n* dG\\_ML: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.\n* ddG\\_ML: Differential ΔG (mutation minus base) derived from a machine learning model.",
"### Classification:\n\n\n* Stabilizing\\_mut: Indicates whether the mutation is stabilizing or not.\n* pair\\_name: Name representation combining the wild type and mutation.\n* classification: Classification based on 'ddG\\_ML':\n\t+ Rows below -0.5 standard deviations are classified as 'destabilising'.\n\t+ Rows above +0.5 standard deviations are classified as 'stabilising'.\n\t+ Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.\n\n\nThis dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.",
"### Understanding ΔG (delta G)\n\n\nΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:\n\n\n* Negative ΔG: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.\n* Positive ΔG: Indicates the process is not energetically favorable. In protein unfolding, it means the protein requires energy to maintain its unfolded state, i.e. it is stable in folded form.\n\n\nThe delta delta G (ΔΔG) represents the deltaG of the mutation compared to the base protein:\n\n\n* Positive ΔΔG: Suggests the mutation enhances protein stability.\n* Negative ΔΔG: Suggests the mutation decreases protein stability.",
"### Data Cleanup and Validation:\n\n\n1. Filtering: The dataset has been curated to only include examples of single mutations.\n2. Sequence mutations were extracted from the row names. Base mutations are labelled as 'base'.\n3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained."
] |
[
"TAGS\n#task_categories-question-answering #task_categories-tabular-classification #task_categories-text-generation #size_categories-100K<1M #language-English #biology #proteins #amino-acids #region-us \n",
"### Columns (Trypsin):\n\n\n* name: The name of the protein variant.\n* dna\\_seq: The DNA sequence encoding the protein variant.\n* log10\\_K50\\_t: The log10 of the K50 value measured with trypsin (a measure of stability).\n* log10\\_K50\\_t\\_95CI\\_high: The upper bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI\\_low: The lower bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI: The width of the 95% confidence interval for log10\\_K50\\_t.\n* fitting\\_error\\_t: A measure of error between the model and data for trypsin.\n* log10\\_K50unfolded\\_t: The predicted log10 K50 value for the unfolded state with trypsin.\n* deltaG\\_t: The ΔG stability calculated from the trypsin data.\n* deltaG\\_t\\_95CI\\_high: The upper bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI\\_low: The lower bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI: The width of the ΔG confidence interval from trypsin.",
"### Columns (Chymotrypsin):\n\n\n* log10\\_K50\\_c: Analogous to 'log10\\_K50\\_t', but for chymotrypsin.\n* log10\\_K50\\_c\\_95CI\\_high: Upper bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI\\_low: Lower bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI: Width of the 95% CI for 'log10\\_K50\\_c'.\n* fitting\\_error\\_c: A measure of error between the model and data for chymotrypsin.\n* log10\\_K50unfolded\\_c: Predicted log10 K50 value for the unfolded state with chymotrypsin.\n* deltaG\\_c: ΔG stability calculated from the chymotrypsin data.\n* deltaG\\_c\\_95CI\\_high: Upper bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI\\_low: Lower bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI: Width of the ΔG CI from chymotrypsin.",
"### Combined Data:\n\n\n* deltaG: The combined ΔG estimate from both trypsin and chymotrypsin.\n* deltaG\\_95CI\\_high: Upper bound of the combined ΔG confidence interval.\n* deltaG\\_95CI\\_low: Lower bound of the combined ΔG confidence interval.\n* deltaG\\_95CI: Width of the combined ΔG confidence interval.",
"### Protein Sequencing Data:\n\n\n* aa\\_seq\\_full: The full amino acid sequence.\n* aa\\_seq: A (sometimes shortened) amino acid sequence representing the protein.\n* mut\\_type: The type of mutation introduced to the protein.\n* WT\\_name: Name of the wild type variant.\n* WT\\_cluster: Cluster classification for the wild type variant.\n* mutation: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in a sequence for N).\n* base\\_aa\\_seq: The base sequence of the protein before the mutation.",
"### Derived Data:\n\n\n* log10\\_K50\\_trypsin\\_ML: Log10 value of K50 derived from a machine learning model using trypsin data.\n* log10\\_K50\\_chymotrypsin\\_ML: Log10 value of K50 derived from a machine learning model using chymotrypsin data.\n* dG\\_ML: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.\n* ddG\\_ML: Differential ΔG (mutation minus base) derived from a machine learning model.",
"### Classification:\n\n\n* Stabilizing\\_mut: Indicates whether the mutation is stabilizing or not.\n* pair\\_name: Name representation combining the wild type and mutation.\n* classification: Classification based on 'ddG\\_ML':\n\t+ Rows below -0.5 standard deviations are classified as 'destabilising'.\n\t+ Rows above +0.5 standard deviations are classified as 'stabilising'.\n\t+ Rows between -0.5 and 0.5 standard deviations are classified as 'neutral'.\n\n\nThis dataset offers a comprehensive view of protein mutations, their effects, and how they relate to the stability measurements made with trypsin and chymotrypsin.",
"### Understanding ΔG (delta G)\n\n\nΔG is the Gibbs free energy change of a process, dictating whether a process is thermodynamically favorable:\n\n\n* Negative ΔG: Indicates the process is energetically favorable. For protein unfolding, it implies the protein is more stable in its unfolded form.\n* Positive ΔG: Indicates the process is not energetically favorable. In protein unfolding, it means the protein requires energy to maintain its unfolded state, i.e. it is stable in folded form.\n\n\nThe delta delta G (ΔΔG) represents the deltaG of the mutation compared to the base protein:\n\n\n* Positive ΔΔG: Suggests the mutation enhances protein stability.\n* Negative ΔΔG: Suggests the mutation decreases protein stability.",
"### Data Cleanup and Validation:\n\n\n1. Filtering: The dataset has been curated to only include examples of single mutations.\n2. Sequence mutations were extracted from the row names. Base mutations are labelled as 'base'.\n3. Consistency Check: Only rows with a consistent 'mutation', aligned with both the base and mutated sequences from the raw data, have been retained."
] |
[
67,
344,
326,
96,
167,
137,
151,
178,
97
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-tabular-classification #task_categories-text-generation #size_categories-100K<1M #language-English #biology #proteins #amino-acids #region-us \n### Columns (Trypsin):\n\n\n* name: The name of the protein variant.\n* dna\\_seq: The DNA sequence encoding the protein variant.\n* log10\\_K50\\_t: The log10 of the K50 value measured with trypsin (a measure of stability).\n* log10\\_K50\\_t\\_95CI\\_high: The upper bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI\\_low: The lower bound of the 95% confidence interval for log10\\_K50\\_t.\n* log10\\_K50\\_t\\_95CI: The width of the 95% confidence interval for log10\\_K50\\_t.\n* fitting\\_error\\_t: A measure of error between the model and data for trypsin.\n* log10\\_K50unfolded\\_t: The predicted log10 K50 value for the unfolded state with trypsin.\n* deltaG\\_t: The ΔG stability calculated from the trypsin data.\n* deltaG\\_t\\_95CI\\_high: The upper bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI\\_low: The lower bound of the ΔG confidence interval from trypsin.\n* deltaG\\_t\\_95CI: The width of the ΔG confidence interval from trypsin.",
"passage: ### Columns (Chymotrypsin):\n\n\n* log10\\_K50\\_c: Analogous to 'log10\\_K50\\_t', but for chymotrypsin.\n* log10\\_K50\\_c\\_95CI\\_high: Upper bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI\\_low: Lower bound of the 95% CI for 'log10\\_K50\\_c'.\n* log10\\_K50\\_c\\_95CI: Width of the 95% CI for 'log10\\_K50\\_c'.\n* fitting\\_error\\_c: A measure of error between the model and data for chymotrypsin.\n* log10\\_K50unfolded\\_c: Predicted log10 K50 value for the unfolded state with chymotrypsin.\n* deltaG\\_c: ΔG stability calculated from the chymotrypsin data.\n* deltaG\\_c\\_95CI\\_high: Upper bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI\\_low: Lower bound of the ΔG CI from chymotrypsin.\n* deltaG\\_c\\_95CI: Width of the ΔG CI from chymotrypsin.### Combined Data:\n\n\n* deltaG: The combined ΔG estimate from both trypsin and chymotrypsin.\n* deltaG\\_95CI\\_high: Upper bound of the combined ΔG confidence interval.\n* deltaG\\_95CI\\_low: Lower bound of the combined ΔG confidence interval.\n* deltaG\\_95CI: Width of the combined ΔG confidence interval.### Protein Sequencing Data:\n\n\n* aa\\_seq\\_full: The full amino acid sequence.\n* aa\\_seq: A (sometimes shortened) amino acid sequence representing the protein.\n* mut\\_type: The type of mutation introduced to the protein.\n* WT\\_name: Name of the wild type variant.\n* WT\\_cluster: Cluster classification for the wild type variant.\n* mutation: Represented as a combination of amino acid and its position (e.g., F10N indicates changing the 10th amino acid (F) in a sequence for N).\n* base\\_aa\\_seq: The base sequence of the protein before the mutation.### Derived Data:\n\n\n* log10\\_K50\\_trypsin\\_ML: Log10 value of K50 derived from a machine learning model using trypsin data.\n* log10\\_K50\\_chymotrypsin\\_ML: Log10 value of K50 derived from a machine learning model using chymotrypsin data.\n* dG\\_ML: ΔG derived from a machine learning model that makes use of stability measurements from both proteases.\n* ddG\\_ML: Differential ΔG (mutation minus base) derived from a machine learning model."
] |
36535e7bf3d5d49e1c7d2678062b6f331066c1a9
|
# Dataset Card for "100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/100
|
[
"region:us"
] |
2023-08-17T15:44:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:44:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "100"
More Information needed
|
[
"# Dataset Card for \"100\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"100\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"100\"\n\nMore Information needed"
] |
33c694a702bbb0721043436ae3bdcd6ed1746ca9
|
# Dataset Card for "1a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/1a
|
[
"region:us"
] |
2023-08-17T15:45:37+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:45:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "1a"
More Information needed
|
[
"# Dataset Card for \"1a\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"1a\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"1a\"\n\nMore Information needed"
] |
3b8a6bca56d00aef5e17c374d08a9318d436f977
|
# Dataset Card for "b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/b
|
[
"region:us"
] |
2023-08-17T15:46:08+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 748135, "num_examples": 3250}], "download_size": 213070, "dataset_size": 748135}}
|
2023-08-17T15:46:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "b"
More Information needed
|
[
"# Dataset Card for \"b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"b\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"b\"\n\nMore Information needed"
] |
12c99d771726552ba8184f32fa23d84e4b23bf51
|
# Dataset Card for "zavvv11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rakshit122/zavvv11
|
[
"region:us"
] |
2023-08-17T15:48:05+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 46270, "num_examples": 226}], "download_size": 16707, "dataset_size": 46270}}
|
2023-08-17T15:48:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "zavvv11"
More Information needed
|
[
"# Dataset Card for \"zavvv11\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"zavvv11\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"zavvv11\"\n\nMore Information needed"
] |
b716115a07479d49e58233d2382295c5ca585431
|
# simple-wikipedia
Processed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words.
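A minimal loading sketch, assuming the single `text` column declared in the repository metadata; the whitespace-based word count is only a rough cross-check and may not match the reported figure exactly.
```python
from datasets import load_dataset

# Load the processed Simple English Wikipedia dump (single "text" column).
ds = load_dataset("rahular/simple-wikipedia", split="train")
print(ds[0]["text"][:200])

# Rough word-count cross-check via whitespace splitting.
total_words = sum(len(text.split()) for text in ds["text"])
print(f"{total_words:,} words")
```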
|
rahular/simple-wikipedia
|
[
"region:us"
] |
2023-08-17T16:07:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 144689943, "num_examples": 769764}], "download_size": 86969379, "dataset_size": 144689943}}
|
2023-08-17T16:09:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# simple-wikipedia
Processed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words.
|
[
"# simple-wikipedia\n\nProcessed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words."
] |
[
"TAGS\n#region-us \n",
"# simple-wikipedia\n\nProcessed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words."
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# simple-wikipedia\n\nProcessed, text-only dump of the Simple Wikipedia (English). Contains 23,886,673 words."
] |
5fa454be65daab579a518efa4f051e6d7f3916ba
|
# Dataset Card for "webmd-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
shefali2023/webmd-data
|
[
"region:us"
] |
2023-08-17T16:11:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12994920, "num_examples": 27728}, {"name": "test", "num_bytes": 1613731, "num_examples": 3493}], "download_size": 7011628, "dataset_size": 14608651}}
|
2023-08-17T16:11:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "webmd-data"
More Information needed
|
[
"# Dataset Card for \"webmd-data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"webmd-data\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"webmd-data\"\n\nMore Information needed"
] |
f99b84ac990a1f31f895131f52e4412b9391c1cb
|
# Dataset of kaku_seiga/青娥娘々/霍青娥/곽청아 (Touhou)
This is the dataset of kaku_seiga/青娥娘々/霍青娥/곽청아 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, hair_rings, hair_ornament, blue_eyes, short_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 622.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaku_seiga_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 412.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaku_seiga_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1100 | 780.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaku_seiga_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 571.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaku_seiga_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1100 | 1001.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaku_seiga_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kaku_seiga_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, dress, flower, hair_stick, shawl, smile, solo, vest |
| 1 | 10 |  |  |  |  |  | 1girl, blush, dress, flower, hair_stick, shawl, smile, solo, vest, medium_breasts |
| 2 | 14 |  |  |  |  |  | 1girl, dress, flower, hair_stick, shawl, smile, solo, vest, open_mouth, danmaku, energy_ball |
| 3 | 10 |  |  |  |  |  | 1girl, dress, flower, hair_stick, shawl, smile, solo, vest, medium_breasts, butterfly, cleavage |
| 4 | 7 |  |  |  |  |  | 1girl, blue_dress, hair_stick, open_vest, shawl, solo, flower, puffy_short_sleeves, smile, looking_at_viewer, drill_hair |
| 5 | 6 |  |  |  |  |  | 1girl, bangs, black_footwear, blue_dress, closed_mouth, full_body, hair_stick, simple_background, solo, white_vest, flower, hagoromo, open_vest, puffy_short_sleeves, white_socks, smile, white_background, frills, looking_at_viewer, shoes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | flower | hair_stick | shawl | smile | solo | vest | blush | medium_breasts | open_mouth | danmaku | energy_ball | butterfly | cleavage | blue_dress | open_vest | puffy_short_sleeves | looking_at_viewer | drill_hair | bangs | black_footwear | closed_mouth | full_body | simple_background | white_vest | hagoromo | white_socks | white_background | frills | shoes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------|:-------------|:--------|:--------|:-------|:-------|:--------|:-----------------|:-------------|:----------|:--------------|:------------|:-----------|:-------------|:------------|:----------------------|:--------------------|:-------------|:--------|:-----------------|:---------------|:------------|:--------------------|:-------------|:-----------|:--------------|:-------------------|:---------|:--------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | | | | X | X | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | X | X | X | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | X | | X | X | | | | | | | | | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kaku_seiga_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T16:23:34+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T18:05:26+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kaku\_seiga/青娥娘々/霍青娥/곽청아 (Touhou)
============================================
This is the dataset of kaku\_seiga/青娥娘々/霍青娥/곽청아 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blue\_hair, hair\_rings, hair\_ornament, blue\_eyes, short\_hair, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
b36375014be5f87460499f58a77ff5562b68e1d6
|
# Dataset of saigyouji_yuyuko/西行寺幽々子/사이교지유유코 (Touhou)
This is the dataset of saigyouji_yuyuko/西行寺幽々子/사이교지유유코 (Touhou), containing 500 images and their tags.
The core tags of this character are `pink_hair, hat, short_hair, pink_eyes, mob_cap, blue_headwear, bangs, breasts, ribbon, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 858.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saigyouji_yuyuko_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 469.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saigyouji_yuyuko_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1144 | 933.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saigyouji_yuyuko_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 746.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saigyouji_yuyuko_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1144 | 1.33 GiB | [Download](https://huggingface.co/datasets/CyberHarem/saigyouji_yuyuko_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/saigyouji_yuyuko_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, long_sleeves, solo, triangular_headpiece, wide_sleeves, obi, cherry_blossoms, folding_fan, blue_dress, looking_at_viewer, petals, smile, kimono, butterfly |
| 1 | 19 |  |  |  |  |  | 1girl, japanese_clothes, solo, triangular_headpiece, petals, smile, folding_fan, cherry_blossoms, butterfly, sash |
| 2 | 16 |  |  |  |  |  | 1girl, blue_kimono, folding_fan, holding_fan, long_sleeves, solo, triangular_headpiece, wide_sleeves, looking_at_viewer, cherry_blossoms, smile, frilled_kimono, obi, butterfly, closed_mouth, frilled_sleeves, petals, large_breasts, neck_ribbon |
| 3 | 9 |  |  |  |  |  | 1girl, blue_kimono, cherry_blossoms, frills, long_sleeves, looking_at_viewer, petals, smile, solo, triangular_headpiece, wide_sleeves, neck_ribbon, closed_mouth, obi, upper_body |
| 4 | 14 |  |  |  |  |  | 1girl, japanese_clothes, solo, triangular_headpiece, wide_sleeves, cherry_blossoms, petals, butterfly, smile, long_sleeves, obi, hitodama |
| 5 | 11 |  |  |  |  |  | 1girl, blue_kimono, long_sleeves, smile, solo, triangular_headpiece, wide_sleeves, looking_at_viewer, butterfly, blush, closed_mouth, sash, frilled_kimono, large_breasts, cherry_blossoms, hitodama, neck_ribbon |
| 6 | 5 |  |  |  |  |  | 1girl, blue_belt, blue_bow, blue_dress, blue_kimono, closed_mouth, collared_dress, long_sleeves, medium_breasts, smile, solo, triangular_headpiece, wide_sleeves, looking_at_viewer, butterfly_wings, frilled_kimono, blue_background, bowtie, center_frills, holding_sword, katana, standing |
| 7 | 8 |  |  |  |  |  | 1girl, blue_kimono, frilled_kimono, long_sleeves, solo, triangular_headpiece, wide_sleeves, holding_sword, katana, looking_at_viewer, closed_mouth, butterfly |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | solo | triangular_headpiece | wide_sleeves | obi | cherry_blossoms | folding_fan | blue_dress | looking_at_viewer | petals | smile | kimono | butterfly | japanese_clothes | sash | blue_kimono | holding_fan | frilled_kimono | closed_mouth | frilled_sleeves | large_breasts | neck_ribbon | frills | upper_body | hitodama | blush | blue_belt | blue_bow | collared_dress | medium_breasts | butterfly_wings | blue_background | bowtie | center_frills | holding_sword | katana | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:-----------------------|:---------------|:------|:------------------|:--------------|:-------------|:--------------------|:---------|:--------|:---------|:------------|:-------------------|:-------|:--------------|:--------------|:-----------------|:---------------|:------------------|:----------------|:--------------|:---------|:-------------|:-----------|:--------|:------------|:-----------|:-----------------|:-----------------|:------------------|:------------------|:---------|:----------------|:----------------|:---------|:-----------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | X | | X | X | | | X | X | | | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | X | | | | | X | | | X | | | X | X | X | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | X | | X | X | | | | | | | | | | | X | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | X | X | X | X | | X | | | X | | X | | X | | X | X | | X | X | | X | X | | | X | X | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | X | X | X | | | | X | X | | X | | | | | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
| 7 | 8 |  |  |  |  |  | X | X | X | X | X | | | | | X | | | | X | | | X | | X | X | | | | | | | | | | | | | | | | X | X | |
|
CyberHarem/saigyouji_yuyuko_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T16:32:31+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:31:10+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of saigyouji\_yuyuko/西行寺幽々子/사이교지유유코 (Touhou)
====================================================
This is the dataset of saigyouji\_yuyuko/西行寺幽々子/사이교지유유코 (Touhou), containing 500 images and their tags.
The core tags of this character are 'pink\_hair, hat, short\_hair, pink\_eyes, mob\_cap, blue\_headwear, bangs, breasts, ribbon, hair\_between\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ba2ddd90d4c85cf683c895e8299a4646ce647bd8
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Sylvana/qa_en_translation
|
[
"task_categories:translation",
"size_categories:1K<n<10K",
"language:ar",
"license:apache-2.0",
"region:us"
] |
2023-08-17T16:38:33+00:00
|
{"language": ["ar"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["translation"]}
|
2023-08-18T06:51:14+00:00
|
[] |
[
"ar"
] |
TAGS
#task_categories-translation #size_categories-1K<n<10K #language-Arabic #license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Arabic #license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
40,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Arabic #license-apache-2.0 #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9dd2802d4010bdf06817b249516a9df597ed0a78
|
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: LOTR BLIP captions
size_categories:
- n>1K
tags: []
task_categories:
- text-to-image
task_ids: []
|
bulu/lotr-withcaption
|
[
"region:us"
] |
2023-08-17T16:40:47+00:00
|
{}
|
2023-08-27T07:22:44+00:00
|
[] |
[] |
TAGS
#region-us
|
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: LOTR BLIP captions
size_categories:
- n>1K
tags: []
task_categories:
- text-to-image
task_ids: []
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
93fa0ca5a3067268290c673497bf974cebc83fa6
|
# Objaverse-XL
<a href="//arxiv.org/abs/2307.05663" target="_blank">
  <img src="https://img.shields.io/badge/arXiv-2307.05663-b31b1b">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: 🧵👇
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December 2022. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is made available to academic researchers for non-commercial use upon request and approval from Polycam. For access, please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
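For readers who prefer a script to the Colab notebook, here is a minimal sketch of the download flow using the `objaverse` pip package. The `objaverse.xl` helper names and arguments (`get_annotations`, `download_objects`, `download_dir`) are assumptions based on the package's tutorial rather than anything specified on this page, so check the current documentation before relying on them.

```python
# Hedged sketch: download a handful of Objaverse-XL objects locally.
# The objaverse.xl API used here is assumed from the package tutorial.
import objaverse.xl as oxl

# Annotation table: one row per object, including its source and file identifier.
annotations = oxl.get_annotations(download_dir="~/.objaverse")

# Grab a tiny, reproducible sample and download the corresponding 3D files.
sample = annotations.sample(5, random_state=0)
local_paths = oxl.download_objects(objects=sample, download_dir="~/.objaverse")
print(local_paths)
```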
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
|
allenai/objaverse-xl
|
[
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] |
2023-08-17T16:50:21+00:00
|
{"language": ["en"], "license": "odc-by", "viewer": false}
|
2023-10-31T16:46:54+00:00
|
[
"2307.05663"
] |
[
"en"
] |
TAGS
#language-English #license-odc-by #arxiv-2307.05663 #region-us
|
# Objaverse-XL
<a href="//URL target="_blank">
<img src="URL
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities:
<img src="URL
## Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="URL
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the paper :)
<img src="URL
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using DreamFusion, having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="URL type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="URL type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and PixelNeRF!
<img src="URL
## Tutorial
Check out the Google Colab tutorial to download Objaverse-XL.
Polycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out this form.
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
To cite Objaverse-XL, please cite our arXiv paper with the following BibTeX entry:
Objaverse 1.0 is available on Hugging Face at @allenai/objaverse. To cite it, use:
|
[
"# Objaverse-XL\n\n<a href=\"//URL target=\"_blank\">\n <img src=\"URL\n</a>\n\nObjaverse-XL is an open dataset of over 10 million 3D objects!\n\nWith it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: \n\n<img src=\"URL",
"## Scale Comparison\n\nObjaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.\n\nObjaverse-XL is over an order of magnitude larger and much more diverse!\n\n<img src=\"URL",
"## Unlocking Generalization\n\nCompared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!\n\nA ton more examples in the paper :)\n\n<img src=\"URL",
"## Image → 3D\n\nWith the base Zero123-XL foundation model, we can perform image → 3D using DreamFusion, having the model guide a NeRF to generate novel views!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>",
"## Text → 3D\n\nText-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>",
"## Scaling Trends\n\nBeyond that, we show strong scaling trends for both Zero123-XL and PixelNeRF!\n\n<img src=\"URL",
"## Tutorial\n\nCheck out the Google Colab tutorial to download Objaverse-XL.\n\nPolycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out this form.",
"## License\n\nThe use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.\n\nTo cite Objaverse-XL, please cite our arXiv paper with the following BibTeX entry:\n\n\n\nObjaverse 1.0 is available on Hugging Face at @allenai/objaverse. To cite it, use:"
] |
[
"TAGS\n#language-English #license-odc-by #arxiv-2307.05663 #region-us \n",
"# Objaverse-XL\n\n<a href=\"//URL target=\"_blank\">\n <img src=\"URL\n</a>\n\nObjaverse-XL is an open dataset of over 10 million 3D objects!\n\nWith it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: \n\n<img src=\"URL",
"## Scale Comparison\n\nObjaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.\n\nObjaverse-XL is over an order of magnitude larger and much more diverse!\n\n<img src=\"URL",
"## Unlocking Generalization\n\nCompared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!\n\nA ton more examples in the paper :)\n\n<img src=\"URL",
"## Image → 3D\n\nWith the base Zero123-XL foundation model, we can perform image → 3D using DreamFusion, having the model guide a NeRF to generate novel views!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>",
"## Text → 3D\n\nText-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>",
"## Scaling Trends\n\nBeyond that, we show strong scaling trends for both Zero123-XL and PixelNeRF!\n\n<img src=\"URL",
"## Tutorial\n\nCheck out the Google Colab tutorial to download Objaverse-XL.\n\nPolycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out this form.",
"## License\n\nThe use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.\n\nTo cite Objaverse-XL, please cite our arXiv paper with the following BibTeX entry:\n\n\n\nObjaverse 1.0 is available on Hugging Face at @allenai/objaverse. To cite it, use:"
] |
[
27,
82,
60,
69,
66,
58,
33,
53,
90
] |
[
"passage: TAGS\n#language-English #license-odc-by #arxiv-2307.05663 #region-us \n# Objaverse-XL\n\n<a href=\"//URL target=\"_blank\">\n <img src=\"URL\n</a>\n\nObjaverse-XL is an open dataset of over 10 million 3D objects!\n\nWith it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: \n\n<img src=\"URL## Scale Comparison\n\nObjaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.\n\nObjaverse-XL is over an order of magnitude larger and much more diverse!\n\n<img src=\"URL## Unlocking Generalization\n\nCompared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!\n\nA ton more examples in the paper :)\n\n<img src=\"URL## Image → 3D\n\nWith the base Zero123-XL foundation model, we can perform image → 3D using DreamFusion, having the model guide a NeRF to generate novel views!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>## Text → 3D\n\nText-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!\n\n<video autoplay muted loop controls>\n <source src=\"URL type=\"video/mp4\">\n</video>## Scaling Trends\n\nBeyond that, we show strong scaling trends for both Zero123-XL and PixelNeRF!\n\n<img src=\"URL## Tutorial\n\nCheck out the Google Colab tutorial to download Objaverse-XL.\n\nPolycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out this form."
] |
05d7bb78680167e36f11ec1a7e8f2c789aa28edc
|
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded. A rough sketch of these steps is shown after the column list below.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
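
The steps above can be reproduced with a short pandas script. The sketch below assumes the Kaggle export layout (`Questions.csv` / `Answers.csv` with `Id`, `ParentId`, `Score`, `Title`, `Body` columns and latin-1 encoding); it is an illustration of the described pipeline, not the exact code used to build this dataset.

```python
# Hedged sketch of the described transformation (assumed Kaggle CSV layout).
import pandas as pd
from bs4 import BeautifulSoup

questions = pd.read_csv("Questions.csv", encoding="latin-1")
answers = pd.read_csv("Answers.csv", encoding="latin-1")

# Keep only the top-rated answer per question (ParentId links an answer to its question).
top_answers = (answers.sort_values("Score", ascending=False)
                      .drop_duplicates(subset="ParentId"))

# Pair each question with its best answer.
pairs = questions.merge(top_answers, left_on="Id", right_on="ParentId",
                        suffixes=("_question", "_answer"))

def strip_html(html: str) -> str:
    return BeautifulSoup(html, "html.parser").get_text()

# Merge title + body and remove HTML tags.
pairs["question"] = pairs["Title"].fillna("") + "\n" + pairs["Body_question"].fillna("").map(strip_html)
pairs["answer"] = pairs["Body_answer"].fillna("").map(strip_html)

# Filter: drop negative scores and entries without obvious Python code structures.
looks_like_python = pairs["answer"].str.contains(r"def |import |print\(", regex=True, na=False)
pairs = pairs[(pairs["Score_question"] >= 0) & (pairs["Score_answer"] >= 0) & looks_like_python]

final = pairs[["Score_question", "Score_answer", "question", "answer"]]
final.columns = ["score_question", "score_answer", "question", "answer"]
```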
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
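
As a concrete illustration, the snippet below assembles one training string in this format. The system prompt, user message, and answer are made-up placeholders, and appending the answer plus a closing `</s>` follows the common Llama2 fine-tuning convention rather than anything stated above.

```python
# Hedged sketch: building one Llama2-style training example from a Q&A pair.
def build_llama2_example(system_prompt: str, user_message: str, answer: str) -> str:
    # <s>[INST] <<SYS>> system <</SYS>> user [/INST] answer </s>
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST] {answer} </s>"
    )

example = build_llama2_example(
    system_prompt="You are a helpful Python coding assistant.",      # placeholder
    user_message="How do I reverse a list in Python?",               # placeholder
    answer="Use slicing: `my_list[::-1]` returns a reversed copy.",  # placeholder
)
print(example)
```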
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/)
|
luisroque/instruct-python-llama2-20k
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-08-17T16:59:03+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Instruct Python 500k", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34661192.7, "num_examples": 19000}, {"name": "test", "num_bytes": 1824273.3, "num_examples": 1000}], "download_size": 19060329, "dataset_size": 36485466}}
|
2023-08-18T08:44:00+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us
|
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.
### Structure
- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.
- Quality Focus: Only top-rated answers for each question are retained.
- HTML Tag Removal: All HTML tags in the content are removed.
- Combined Question Field: Each question's title and body are merged.
- Filtering: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- 'score_question'
- 'score_answer'
- 'question'
- 'answer'
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'
Where:
- 'system_prompt' gives context or instructions to the model.
- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.
Keep in touch: LinkedIn
|
[
"# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
44,
18,
6,
46,
137,
150,
39,
32
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us \n# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A## Transformed Dataset### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 20k instructions.### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
2b1aa6f1a28dcd49d8a26d92817e12d6c2659c3a
|
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
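
A minimal way to pull the pre-formatted examples for fine-tuning, assuming the standard 🤗 `datasets` library, is shown below.

```python
# Load the Llama2-formatted training text from the Hub.
from datasets import load_dataset

dataset = load_dataset("luisroque/instruct-python-llama2-500k", split="train")
print(dataset)                    # a single 'text' column with ~500k rows
print(dataset[0]["text"][:300])   # peek at the start of one formatted example
```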
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/)
|
luisroque/instruct-python-llama2-500k
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-08-17T16:59:11+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Instruct Python 500k", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1046127202, "num_examples": 501349}], "download_size": 530786217, "dataset_size": 1046127202}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-18T08:44:26+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us
|
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.
### Structure
- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.
- Quality Focus: Only top-rated answers for each question are retained.
- HTML Tag Removal: All HTML tags in the content are removed.
- Combined Question Field: Each question's title and body are merged.
- Filtering: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- 'score_question'
- 'score_answer'
- 'question'
- 'answer'
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:
'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'
Where:
- 'system_prompt' gives context or instructions to the model.
- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.
Keep in touch: LinkedIn
|
[
"# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
44,
18,
6,
46,
137,
150,
39,
32
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A## Transformed Dataset### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'### Llama2 Transformation\n\nThe dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is the following:\n\n'<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]'\n\nWhere:\n- 'system_prompt' gives context or instructions to the model.\n- 'user_message' is the user's query following the system prompt, expecting a particular response from the model.\n\nThis structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
efec96c612cac66a1c3837ef6168383b60d73c88
|
# Dataset Card for "assessment_evaluation_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reinforz/assessment_evaluation_data
|
[
"region:us"
] |
2023-08-17T17:00:42+00:00
|
{"dataset_info": {"features": [{"name": "user_input", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "relevence_score", "dtype": "int64"}, {"name": "grammar_score", "dtype": "int64"}, {"name": "coherence_score", "dtype": "int64"}, {"name": "type", "dtype": "string"}, {"name": "subject", "struct": [{"name": "subTopic", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "topic", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 13034721, "num_examples": 4523}], "download_size": 4911455, "dataset_size": 13034721}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-17T17:00:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "assessment_evaluation_data"
More Information needed
|
[
"# Dataset Card for \"assessment_evaluation_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"assessment_evaluation_data\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"assessment_evaluation_data\"\n\nMore Information needed"
] |
803057027a167025c7e31c9da13bb04b55dcdd26
|
# Dataset of himekaidou_hatate/姫海棠はたて/히메카이도하타테 (Touhou)
This is the dataset of himekaidou_hatate/姫海棠はたて/히메카이도하타테 (Touhou), containing 499 images and their tags.
The core tags of this character are `twintails, brown_hair, tokin_hat, hat, long_hair, ribbon, purple_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 499 | 580.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 499 | 381.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1107 | 735.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 499 | 540.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1107 | 960.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/himekaidou_hatate_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, pointy_ears, solo, tengu-geta |
| 1 | 7 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, solo |
| 2 | 6 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, solo, blush, pointy_ears |
| 3 | 31 |  |  |  |  |  | 1girl, solo, obi, japanese_clothes, kourindou_tengu_costume, wide_sleeves, looking_at_viewer, pointy_ears, long_sleeves, black_wings, hair_ribbon, smile, alternate_costume, katana, detached_sleeves, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cellphone | checkered_skirt | necktie | pointy_ears | solo | tengu-geta | blush | obi | japanese_clothes | kourindou_tengu_costume | wide_sleeves | looking_at_viewer | long_sleeves | black_wings | hair_ribbon | smile | alternate_costume | katana | detached_sleeves | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:------------------|:----------|:--------------|:-------|:-------------|:--------|:------|:-------------------|:--------------------------|:---------------|:--------------------|:---------------|:--------------|:--------------|:--------|:--------------------|:---------|:-------------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | X | | | | | | | | | | | | | |
| 3 | 31 |  |  |  |  |  | X | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/himekaidou_hatate_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T17:05:33+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T17:42:24+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of himekaidou\_hatate/姫海棠はたて/히메카이도하타테 (Touhou)
======================================================
This is the dataset of himekaidou\_hatate/姫海棠はたて/히메카이도하타테 (Touhou), containing 499 images and their tags.
The core tags of this character are 'twintails, brown\_hair, tokin\_hat, hat, long\_hair, ribbon, purple\_eyes, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d33803151dfe35f84e91908fadf5c57def3e109f
|
# Fine-tuning Instruct Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
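
The resulting columns can be loaded and filtered directly from the Hub; a minimal example, assuming the standard 🤗 `datasets` library, is shown below.

```python
# Load the transformed Q&A pairs and apply an extra quality filter as an example.
from datasets import load_dataset

dataset = load_dataset("luisroque/instruct-python-500k", split="train")
print(dataset.column_names)  # ['score_question', 'score_answer', 'question', 'answer', ...]

# Example: keep only pairs whose answer received at least 10 upvotes.
high_quality = dataset.filter(lambda row: row["score_answer"] >= 10)
print(len(high_quality))
```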
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/)
|
luisroque/instruct-python-500k
|
[
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-08-17T17:14:25+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Instruct Python 500k", "dataset_info": {"features": [{"name": "score_question", "dtype": "int16"}, {"name": "score_answer", "dtype": "int16"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 987469369, "num_examples": 501349}], "download_size": 550185963, "dataset_size": 987469369}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-08-18T08:44:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us
|
# Fine-tuning Instruct Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.
### Structure
- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.
- Quality Focus: Only top-rated answers for each question are retained.
- HTML Tag Removal: All HTML tags in the content are removed.
- Combined Question Field: Each question's title and body are merged.
- Filtering: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- 'score_question'
- 'score_answer'
- 'question'
- 'answer'
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.
Keep in touch: LinkedIn
|
[
"# Fine-tuning Instruct Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Fine-tuning Instruct Stack Overflow Python Q&A",
"## Transformed Dataset",
"### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.",
"### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'",
"## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.",
"## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
[
44,
15,
6,
39,
137,
39,
32
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-cc-by-sa-3.0 #region-us \n# Fine-tuning Instruct Stack Overflow Python Q&A## Transformed Dataset### Objective\n\nThe transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.### Structure\n\n- Question-Answer Pairing: Questions and answers are paired using the 'ParentId' linkage.\n- Quality Focus: Only top-rated answers for each question are retained.\n- HTML Tag Removal: All HTML tags in the content are removed.\n- Combined Question Field: Each question's title and body are merged.\n- Filtering: Entries with negative scores or those not containing Python code structures are excluded.\n\nFinal columns:\n- 'score_question'\n- 'score_answer'\n- 'question'\n- 'answer'## Original Dataset\n\nThe dataset contains questions and answers from Stack Overflow with the 'python' tag, covering the period from August 2, 2008, to October 19, 2016.## License\n\nAll contributions are under the CC-BY-SA 3.0. Attribution is required. The original dataset was posted here.\n\nKeep in touch: LinkedIn"
] |
67aad554f92226c537ee96f4c2d14dba6fec9d09
|
# Dataset of shameimaru_aya/射命丸文/샤메이마루아야 (Touhou)
This is the dataset of shameimaru_aya/射命丸文/샤메이마루아야 (Touhou), containing 500 images and their tags.
The core tags of this character are `hat, short_hair, tokin_hat, red_eyes, black_hair, breasts, wings, black_wings, red_headwear, bangs, pointy_ears`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 544.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shameimaru_aya_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 339.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shameimaru_aya_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1094 | 661.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shameimaru_aya_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 493.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shameimaru_aya_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1094 | 887.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shameimaru_aya_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shameimaru_aya_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
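As an optional follow-up (not part of the original card), the per-image tag metadata printed above can also be used to select images that match one of the tag clusters listed further below. This is only a minimal sketch: it assumes the archive was extracted to `dataset_dir` as in the loader above, that `item.meta['tags']` iterates over tag names (or is a mapping keyed by tag name), and the chosen tags are illustrative values taken from cluster #10 in the tables below.
```python
from waifuc.source import LocalSource

# Illustrative tag set taken from cluster #10 below; swap in any other cluster of interest.
wanted = {'kourindou_tengu_costume', 'japanese_clothes'}

source = LocalSource('dataset_dir')  # same directory used by the loader above
matched = []
for item in source:
    tags = set(item.meta['tags'])    # works whether tags is a list or a tag->score mapping
    if wanted.issubset(tags):
        matched.append(item.meta['filename'])

print(f'{len(matched)} images match the selected cluster tags')
```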
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, black_skirt, collared_shirt, pom_pom_(clothes), puffy_short_sleeves, solo, white_shirt, bird_wings, black_ribbon, frilled_skirt, holding, looking_at_viewer, smile, feathered_wings, open_mouth, simple_background, hair_between_eyes, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, black_skirt, collared_shirt, frilled_skirt, pom_pom_(clothes), red_footwear, solo, tengu-geta, white_shirt, black_ribbon, looking_at_viewer, simple_background, smile, white_background, closed_mouth, full_body, hauchiwa, holding_fan, bird_wings, feathered_wings, belt, black_socks, kneehighs, neck_ribbon, puffy_short_sleeves, standing, white_socks |
| 2 | 5 |  |  |  |  |  | 1girl, black_bowtie, collared_shirt, hair_between_eyes, puffy_short_sleeves, solo, white_shirt, black_skirt, blush, looking_at_viewer, simple_background, white_background, buttons, holding, open_mouth, pom_pom_(clothes), :d, belt, bird_wings, closed_mouth, cowboy_shot, feathered_wings, frilled_skirt, large_breasts, upper_body |
| 3 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, skirt, solo, smile, hand_fan, tengu-geta, feathers, hauchiwa |
| 4 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, open_shirt, solo, nipples, blush, medium_breasts, no_bra, open_mouth, panties |
| 5 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, nipples, solo, censored, medium_breasts, nude, pussy, anus, open_mouth, spread_legs, tears, cum, navel, on_back |
| 6 | 7 |  |  |  |  |  | 1girl, nipples, cum_on_breasts, large_breasts, looking_at_viewer, open_mouth, blush, hetero, 1boy, facial, penis, solo_focus, medium_breasts, open_shirt, smile |
| 7 | 5 |  |  |  |  |  | 1boy, 1girl, cowgirl_position, cum_in_pussy, girl_on_top, hetero, looking_at_viewer, navel, nipples, penis, sex, solo_focus, vaginal, blush, hair_between_eyes, large_breasts, mosaic_censoring, collarbone, completely_nude, indoors, open_mouth, overflow, pom_pom_(clothes), pov, spread_legs, closed_mouth, medium_breasts, pubic_hair, smile, stomach, sweat, thighs |
| 8 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, penis, solo_focus, large_breasts, navel, nude, open_mouth, sex, spread_legs, vaginal, bar_censor, cum_in_pussy, kneehighs, on_side, pom_pom_(clothes) |
| 9 | 7 |  |  |  |  |  | 1girl, blush, solo, looking_at_viewer, white_panties, medium_breasts, open_mouth, skirt |
| 10 | 22 |  |  |  |  |  | 1girl, solo, looking_at_viewer, kourindou_tengu_costume, smile, japanese_clothes, obi, pom_pom_(clothes), wide_sleeves, detached_sleeves, open_mouth, ribbon_trim |
| 11 | 6 |  |  |  |  |  | 1girl, big_belly, blush, fat, large_breasts, skirt, solo, bursting_breasts, undersized_clothes, collared_shirt, d:, looking_at_viewer, navel, open_mouth, plump, pom_pom_(clothes), sweat, tengu-geta, thick_thighs, v-shaped_eyebrows |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | collared_shirt | pom_pom_(clothes) | puffy_short_sleeves | solo | white_shirt | bird_wings | black_ribbon | frilled_skirt | holding | looking_at_viewer | smile | feathered_wings | open_mouth | simple_background | hair_between_eyes | white_background | red_footwear | tengu-geta | closed_mouth | full_body | hauchiwa | holding_fan | belt | black_socks | kneehighs | neck_ribbon | standing | white_socks | black_bowtie | blush | buttons | :d | cowboy_shot | large_breasts | upper_body | skirt | hand_fan | feathers | open_shirt | nipples | medium_breasts | no_bra | panties | censored | nude | pussy | anus | spread_legs | tears | cum | navel | on_back | cum_on_breasts | hetero | 1boy | facial | penis | solo_focus | cowgirl_position | cum_in_pussy | girl_on_top | sex | vaginal | mosaic_censoring | collarbone | completely_nude | indoors | overflow | pov | pubic_hair | stomach | sweat | thighs | bar_censor | on_side | white_panties | kourindou_tengu_costume | japanese_clothes | obi | wide_sleeves | detached_sleeves | ribbon_trim | big_belly | fat | bursting_breasts | undersized_clothes | d: | plump | thick_thighs | v-shaped_eyebrows |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:-----------------|:--------------------|:----------------------|:-------|:--------------|:-------------|:---------------|:----------------|:----------|:--------------------|:--------|:------------------|:-------------|:--------------------|:--------------------|:-------------------|:---------------|:-------------|:---------------|:------------|:-----------|:--------------|:-------|:--------------|:------------|:--------------|:-----------|:--------------|:---------------|:--------|:----------|:-----|:--------------|:----------------|:-------------|:--------|:-----------|:-----------|:-------------|:----------|:-----------------|:---------|:----------|:-----------|:-------|:--------|:-------|:--------------|:--------|:------|:--------|:----------|:-----------------|:---------|:-------|:---------|:--------|:-------------|:-------------------|:---------------|:--------------|:------|:----------|:-------------------|:-------------|:------------------|:----------|:-----------|:------|:-------------|:----------|:--------|:---------|:-------------|:----------|:----------------|:--------------------------|:-------------------|:------|:---------------|:-------------------|:--------------|:------------|:------|:-------------------|:---------------------|:-----|:--------|:---------------|:--------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | | X | X | X | X | X | | | X | | | | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | | | | X | | | | | | X | X | | | | | | | X | | | X | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | | | | | | | | | X | X | | X | | | | | | | | | | | | | | | | | X | | | | X | | | | | X | X | X | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | X | | | | | | | | X | X | | X | | X | | | | X | | | | | | | | | | | X | | | | X | | | | | | X | X | | | | | | | X | | | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | | | | | | | | | | X | | | | | | | | | | | | X | | | | | X | | | | X | | | | | | X | | | | | X | | | X | | | X | | | X | X | | X | X | | X | | X | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | |
| 9 | 7 |  |  |  |  |  | X | | | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | |
| 10 | 22 |  |  |  |  |  | X | | | X | | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | | X | X | | X | | | | | | X | | | X | | | | | X | | | | | | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/shameimaru_aya_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T17:21:03+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:27:58+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shameimaru\_aya/射命丸文/샤메이마루아야 (Touhou)
================================================
This is the dataset of shameimaru\_aya/射命丸文/샤메이마루아야 (Touhou), containing 500 images and their tags.
The core tags of this character are 'hat, short\_hair, tokin\_hat, red\_eyes, black\_hair, breasts, wings, black\_wings, red\_headwear, bangs, pointy\_ears', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
f5c09b4c7d4a86e9a8dd5ebecb86e892fb1be801
|
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
AI-C/rvc-models
|
[
"license:mit",
"region:us"
] |
2023-08-17T17:36:49+00:00
|
{"license": "mit", "title": "Genshin Impact RVC Models (combined)", "emoji": "\ud83c\udfa4", "colorFrom": "purple", "colorTo": "red", "sdk": "gradio", "sdk_version": "3.36.1", "app_file": "app.py", "pinned": false}
|
2023-08-27T14:56:46+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
Check out the configuration reference at URL
|
[] |
[
"TAGS\n#license-mit #region-us \n"
] |
[
11
] |
[
"passage: TAGS\n#license-mit #region-us \n"
] |
e5b757b5a01233e65a96b2421f98400489672ddb
|
# Dataset of chen/ちぇん/橙/첸 (Touhou)
This is the dataset of chen/ちぇん/橙/첸 (Touhou), containing 500 images and their tags.
The core tags of this character are `animal_ears, cat_ears, brown_hair, short_hair, hat, tail, cat_tail, multiple_tails, earrings, brown_eyes, bow, mob_cap, single_earring, two_tails, green_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 520.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chen_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 325.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chen_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1122 | 669.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chen_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 467.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chen_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1122 | 897.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/chen_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/chen_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, long_sleeves, red_dress, simple_background, solo, fang, looking_at_viewer, open_mouth, animal_ear_fluff, blush, jewelry, nekomata, :d, white_background, white_bowtie |
| 1 | 11 |  |  |  |  |  | 1girl, jewelry, long_sleeves, nail_polish, red_dress, solo, looking_at_viewer, red_nails, simple_background, nekomata, sharp_fingernails, bowtie, white_background, animal_ear_fluff, long_fingernails, open_mouth, :d, claw_pose |
| 2 | 7 |  |  |  |  |  | 1girl, jewelry, long_sleeves, looking_at_viewer, nekomata, red_dress, simple_background, solo, white_background, animal_ear_fluff, blush, bangs, bowtie, barefoot, :3, full_body, wariza |
| 3 | 8 |  |  |  |  |  | 1girl, jewelry, solo, blush |
| 4 | 7 |  |  |  |  |  | 1girl, long_fingernails, solo, jewelry, nail_polish, open_mouth, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | red_dress | simple_background | solo | fang | looking_at_viewer | open_mouth | animal_ear_fluff | blush | jewelry | nekomata | :d | white_background | white_bowtie | nail_polish | red_nails | sharp_fingernails | bowtie | long_fingernails | claw_pose | bangs | barefoot | :3 | full_body | wariza | smile |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------|:--------------------|:-------|:-------|:--------------------|:-------------|:-------------------|:--------|:----------|:-----------|:-----|:-------------------|:---------------|:--------------|:------------|:--------------------|:---------|:-------------------|:------------|:--------|:-----------|:-----|:------------|:---------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | X | X | X | X | | X | X | X | X | X | X | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | | X | | X | X | X | X | | X | | | | | X | | | X | X | X | X | X | |
| 3 | 8 |  |  |  |  |  | X | | | | X | | | | | X | X | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | | | X | | | X | | | X | | | | | X | | | | X | | | | | | | X |
|
CyberHarem/chen_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T17:49:19+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:06:31+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of chen/ちぇん/橙/첸 (Touhou)
================================
This is the dataset of chen/ちぇん/橙/첸 (Touhou), containing 500 images and their tags.
The core tags of this character are 'animal\_ears, cat\_ears, brown\_hair, short\_hair, hat, tail, cat\_tail, multiple\_tails, earrings, brown\_eyes, bow, mob\_cap, single\_earring, two\_tails, green\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
62f9f9264fa7409643d757cc611fd7129b85ddcd
|
# Dataset Card for "amazon_theme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ashhadahsan/amazon_theme
|
[
"region:us"
] |
2023-08-17T17:54:47+00:00
|
{"dataset_info": {"features": [{"name": "Transcript", "dtype": "string"}, {"name": "Review Theme", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 347105, "num_examples": 943}], "download_size": 208574, "dataset_size": 347105}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-21T15:21:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "amazon_theme"
More Information needed
|
[
"# Dataset Card for \"amazon_theme\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon_theme\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"amazon_theme\"\n\nMore Information needed"
] |
de2b6cb365413c46708d446224ef163b3501d3c2
|
An uncleaned corpus compiled from books and the newspaper "Буряад унэн".
|
SaranaAbidueva/buryat_monocorpus
|
[
"language:bxr",
"license:mit",
"region:us"
] |
2023-08-17T17:56:46+00:00
|
{"language": ["bxr"], "license": "mit"}
|
2023-08-17T17:59:34+00:00
|
[] |
[
"bxr"
] |
TAGS
#language-Russia Buriat #license-mit #region-us
|
An uncleaned corpus compiled from books and the newspaper "Буряад унэн".
|
[] |
[
"TAGS\n#language-Russia Buriat #license-mit #region-us \n"
] |
[
18
] |
[
"passage: TAGS\n#language-Russia Buriat #license-mit #region-us \n"
] |
826d702d5df7a7cc72d6b813c3bcd95d44996c50
|
## Dataset Description
- **Homepage:** [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/)
This dataset includes both VoxCeleb and VoxCeleb2.
# Multipart Zips
The multipart zips have already been joined for convenience, but the following files are *NOT* part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
# Joining Zip
```
cat vox1_dev* > vox1_dev_wav.zip
```
```
cat vox2_dev_aac* > vox2_aac.zip
```
```
cat vox2_dev_mp4* > vox2_mp4.zip
```
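A quick integrity check after joining can catch an incomplete download before a long extraction. The following is a minimal sketch (not part of the original card) that uses only the Python standard library; the archive name matches the output of the first `cat` command above.
```python
import zipfile

# Verify the joined archive produced by the cat command above.
with zipfile.ZipFile('vox1_dev_wav.zip') as zf:
    bad = zf.testzip()  # returns the name of the first corrupt member, or None
    if bad is None:
        print(f'archive OK, {len(zf.namelist())} files')
    else:
        print(f'corrupt member: {bad}')
```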
### Citation Information
```
@article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman",
title = "Voxceleb: Large-scale speaker verification in the wild",
journal = "Computer Speech and Language",
year = "2019",
publisher = "Elsevier",
}
@inProceedings{Chung18b,
author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
title = "VoxCeleb2: Deep Speaker Recognition",
booktitle = "INTERSPEECH",
year = "2018",
}
@article{DBLP:journals/corr/NagraniCZ17,
author = {Arsha Nagrani and
Joon Son Chung and
Andrew Zisserman},
title = {VoxCeleb: a large-scale speaker identification dataset},
journal = {CoRR},
volume = {abs/1706.08612},
year = {2017},
url = {http://arxiv.org/abs/1706.08612},
eprinttype = {arXiv},
eprint = {1706.08612},
timestamp = {Mon, 13 Aug 2018 16:47:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/NagraniCZ17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@ProgramComputer](https://github.com/ProgramComputer) for adding this dataset.
|
ProgramComputer/voxceleb
|
[
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:image-classification",
"task_categories:video-classification",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"arxiv:1706.08612",
"doi:10.57967/hf/0999",
"region:us"
] |
2023-08-17T17:57:37+00:00
|
{"license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["automatic-speech-recognition", "audio-classification", "image-classification", "video-classification"], "datasets": ["voxceleb", "voxceleb2"]}
|
2023-11-04T21:44:05+00:00
|
[
"1706.08612"
] |
[] |
TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-image-classification #task_categories-video-classification #size_categories-100K<n<1M #license-cc-by-4.0 #arxiv-1706.08612 #doi-10.57967/hf/0999 #region-us
|
## Dataset Description
- Homepage: VoxCeleb
This dataset includes both VoxCeleb and VoxCeleb2.
# Multipart Zips
The multipart zips have already been joined for convenience, but the following files are *NOT* part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
# Joining Zip
### Contributions
Thanks to @ProgramComputer for adding this dataset.
|
[
"## Dataset Description\n\n- Homepage: VoxCeleb\n\nThis dataset includes both VoxCeleb and VoxCeleb2",
"# Multipart Zips\n\nAlready joined zips for convenience but these specified files are *NOT* part of the original datasets\n\nvox2_mp4_1.zip - vox2_mp4_6.zip \n\nvox2_aac_1.zip - vox2_aac_2.zip",
"# Joining Zip",
"### Contributions\n\nThanks to @ProgramComputer for adding this dataset."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-image-classification #task_categories-video-classification #size_categories-100K<n<1M #license-cc-by-4.0 #arxiv-1706.08612 #doi-10.57967/hf/0999 #region-us \n",
"## Dataset Description\n\n- Homepage: VoxCeleb\n\nThis dataset includes both VoxCeleb and VoxCeleb2",
"# Multipart Zips\n\nAlready joined zips for convenience but these specified files are *NOT* part of the original datasets\n\nvox2_mp4_1.zip - vox2_mp4_6.zip \n\nvox2_aac_1.zip - vox2_aac_2.zip",
"# Joining Zip",
"### Contributions\n\nThanks to @ProgramComputer for adding this dataset."
] |
[
98,
26,
71,
4,
17
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-image-classification #task_categories-video-classification #size_categories-100K<n<1M #license-cc-by-4.0 #arxiv-1706.08612 #doi-10.57967/hf/0999 #region-us \n## Dataset Description\n\n- Homepage: VoxCeleb\n\nThis dataset includes both VoxCeleb and VoxCeleb2# Multipart Zips\n\nAlready joined zips for convenience but these specified files are *NOT* part of the original datasets\n\nvox2_mp4_1.zip - vox2_mp4_6.zip \n\nvox2_aac_1.zip - vox2_aac_2.zip# Joining Zip### Contributions\n\nThanks to @ProgramComputer for adding this dataset."
] |
f017ab90f0c9702971cae8f01cc8959f412afed0
|
# Dataset Card for "amazon_subtheme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ashhadahsan/amazon_subtheme
|
[
"region:us"
] |
2023-08-17T17:59:00+00:00
|
{"dataset_info": {"features": [{"name": "Transcript", "dtype": "string"}, {"name": "Review Issue", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 301970, "num_examples": 780}], "download_size": 0, "dataset_size": 301970}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-02T16:29:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "amazon_subtheme"
More Information needed
|
[
"# Dataset Card for \"amazon_subtheme\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon_subtheme\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"amazon_subtheme\"\n\nMore Information needed"
] |
a6d75e77e31659cd33575dd84db585aaf2433814
|
# Dataset of hinanawi_tenshi/比那名居天子/比那名居天子/히나나위텐시 (Touhou)
This is the dataset of hinanawi_tenshi/比那名居天子/比那名居天子/히나나위텐시 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, long_hair, red_eyes, hat, bow, black_headwear, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 712.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinanawi_tenshi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 432.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinanawi_tenshi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1220 | 861.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinanawi_tenshi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 642.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinanawi_tenshi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1220 | 1.14 GiB | [Download](https://huggingface.co/datasets/CyberHarem/hinanawi_tenshi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hinanawi_tenshi_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, peach, solo, sword_of_hisou, smile, dress |
| 1 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, peach, solo, smile, skirt, open_mouth, puffy_short_sleeves, shirt |
| 2 | 31 |  |  |  |  |  | 1girl, leaf, puffy_short_sleeves, solo, white_shirt, looking_at_viewer, peach, bangs, red_bowtie, blush, hair_between_eyes, simple_background, center_frills, white_background, blue_skirt, open_mouth, upper_body, :d, collared_shirt, closed_mouth |
| 3 | 18 |  |  |  |  |  | 1girl, blue_skirt, leaf, looking_at_viewer, peach, puffy_short_sleeves, solo, sword_of_hisou, white_shirt, holding_sword, red_bowtie, bangs, hair_between_eyes, simple_background, closed_mouth, smile, white_background, center_frills, rainbow_order |
| 4 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, navel, peach, solo, blush, smile, collarbone, side-tie_bikini_bottom, cleavage, open_mouth, large_breasts, strap_pull, white_bikini |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | peach | solo | sword_of_hisou | smile | dress | looking_at_viewer | skirt | open_mouth | puffy_short_sleeves | shirt | leaf | white_shirt | bangs | red_bowtie | blush | hair_between_eyes | simple_background | center_frills | white_background | blue_skirt | upper_body | :d | collared_shirt | closed_mouth | holding_sword | rainbow_order | navel | collarbone | side-tie_bikini_bottom | cleavage | large_breasts | strap_pull | white_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-----------------|:--------|:--------|:--------------------|:--------|:-------------|:----------------------|:--------|:-------|:--------------|:--------|:-------------|:--------|:--------------------|:--------------------|:----------------|:-------------------|:-------------|:-------------|:-----|:-----------------|:---------------|:----------------|:----------------|:--------|:-------------|:-------------------------|:-----------|:----------------|:-------------|:---------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 31 |  |  |  |  |  | X | X | X | | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 3 | 18 |  |  |  |  |  | X | X | X | X | X | | X | | | X | | X | X | X | X | | X | X | X | X | X | | | | X | X | X | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | X | | X | | X | | X | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/hinanawi_tenshi_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T18:01:02+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:23:32+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hinanawi\_tenshi/比那名居天子/比那名居天子/히나나위텐시 (Touhou)
=========================================================
This is the dataset of hinanawi\_tenshi/比那名居天子/比那名居天子/히나나위텐시 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blue\_hair, long\_hair, red\_eyes, hat, bow, black\_headwear, very\_long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6c7df38b04da71fb29349f2f824d5ebe2a0b5c05
|
# Dataset Card for "meddocan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
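Since the card is still a stub, a minimal loading sketch may help; it is not from the original card and only assumes the column and split names declared in the repository metadata (`tokens`, `ner_tags`; train/validation/test splits).
```python
from datasets import load_dataset

# Load the token/NER-tag splits declared in the dataset metadata.
ds = load_dataset("finiteautomata/meddocan")
example = ds["train"][0]
print(example["tokens"][:10])

# ner_tags are class-label ids; recover the label names (O, B-FECHAS, ...) from the features.
label_names = ds["train"].features["ner_tags"].feature.names
print([label_names[i] for i in example["ner_tags"][:10]])
```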
|
finiteautomata/meddocan
|
[
"region:us"
] |
2023-08-17T18:29:58+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-FECHAS", "2": "I-FECHAS", "3": "B-CENTRO_SALUD", "4": "I-CENTRO_SALUD", "5": "B-NOMBRE_SUJETO_ASISTENCIA", "6": "I-NOMBRE_SUJETO_ASISTENCIA", "7": "B-PAIS", "8": "I-PAIS", "9": "B-INSTITUCION", "10": "I-INSTITUCION", "11": "B-ID_TITULACION_PERSONAL_SANITARIO", "12": "I-ID_TITULACION_PERSONAL_SANITARIO", "13": "B-CALLE", "14": "I-CALLE", "15": "B-ID_SUJETO_ASISTENCIA", "16": "I-ID_SUJETO_ASISTENCIA", "17": "B-ID_ASEGURAMIENTO", "18": "I-ID_ASEGURAMIENTO", "19": "B-ID_EMPLEO_PERSONAL_SANITARIO", "20": "I-ID_EMPLEO_PERSONAL_SANITARIO", "21": "B-TERRITORIO", "22": "I-TERRITORIO", "23": "B-SEXO_SUJETO_ASISTENCIA", "24": "I-SEXO_SUJETO_ASISTENCIA", "25": "B-CORREO_ELECTRONICO", "26": "I-CORREO_ELECTRONICO", "27": "B-HOSPITAL", "28": "I-HOSPITAL", "29": "B-FAMILIARES_SUJETO_ASISTENCIA", "30": "I-FAMILIARES_SUJETO_ASISTENCIA", "31": "B-NUMERO_FAX", "32": "I-NUMERO_FAX", "33": "B-OTROS_SUJETO_ASISTENCIA", "34": "I-OTROS_SUJETO_ASISTENCIA", "35": "B-NUMERO_TELEFONO", "36": "I-NUMERO_TELEFONO", "37": "B-NOMBRE_PERSONAL_SANITARIO", "38": "I-NOMBRE_PERSONAL_SANITARIO", "39": "B-PROFESION", "40": "I-PROFESION", "41": "B-EDAD_SUJETO_ASISTENCIA", "42": "I-EDAD_SUJETO_ASISTENCIA", "43": "B-ID_CONTACTO_ASISTENCIAL", "44": "I-ID_CONTACTO_ASISTENCIAL"}}}}], "splits": [{"name": "train", "num_bytes": 9141826, "num_examples": 4731}, {"name": "validation", "num_bytes": 4826850, "num_examples": 2469}, {"name": "test", "num_bytes": 4586544, "num_examples": 2374}], "download_size": 1876568, "dataset_size": 18555220}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
|
2023-08-30T10:47:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "meddocan"
More Information needed
|
[
"# Dataset Card for \"meddocan\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"meddocan\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"meddocan\"\n\nMore Information needed"
] |
86a3c4f2b7b1aa5674bd8ecd3f05f5eafcf849fe
|
# Dataset Card for "FormulasInstructionPaired6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
crewdon/FormulasInstructionPaired6k
|
[
"region:us"
] |
2023-08-17T18:33:41+00:00
|
{"dataset_info": {"config_name": "crewdon", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3712791, "num_examples": 6297}], "download_size": 910840, "dataset_size": 3712791}, "configs": [{"config_name": "crewdon", "data_files": [{"split": "train", "path": "crewdon/train-*"}]}]}
|
2023-08-17T18:33:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "FormulasInstructionPaired6k"
More Information needed
|
[
"# Dataset Card for \"FormulasInstructionPaired6k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"FormulasInstructionPaired6k\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"FormulasInstructionPaired6k\"\n\nMore Information needed"
] |
d600c3e53dd02b0e50b6e44c421e0e4c9cdd72bb
|
# Dataset of onozuka_komachi/小野塚小町 (Touhou)
This is the dataset of onozuka_komachi/小野塚小町 (Touhou), containing 500 images and their tags.
The core tags of this character are `red_hair, two_side_up, hair_ornament, red_eyes, short_hair, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 609.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onozuka_komachi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 379.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onozuka_komachi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1114 | 735.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onozuka_komachi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 557.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onozuka_komachi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1114 | 984.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/onozuka_komachi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/onozuka_komachi_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, hair_bobbles, scythe, solo, smile, cleavage |
| 1 | 11 |  |  |  |  |  | 1girl, hair_bobbles, scythe, solo, spider_lily |
| 2 | 7 |  |  |  |  |  | 1girl, hair_bobbles, scythe, solo, cleavage, smile, spider_lily |
| 3 | 5 |  |  |  |  |  | 1girl, bangs, blue_dress, hair_bobbles, looking_at_viewer, obi, puffy_short_sleeves, smile, solo, coin, holding_scythe, open_mouth |
| 4 | 11 |  |  |  |  |  | 1girl, blue_dress, full_body, hair_bobbles, holding_scythe, obi, puffy_short_sleeves, solo, looking_at_viewer, tabi, bangs, coin, simple_background, smile, standing, white_socks, closed_mouth, white_background, sandals, blue_kimono, cleavage, frills |
| 5 | 15 |  |  |  |  |  | 2girls, hair_bobbles, green_hair, hat, scythe, smile, flower, cleavage, rod_of_remorse |
| 6 | 9 |  |  |  |  |  | 1boy, 1girl, blush, hair_bobbles, hetero, solo_focus, penis, nipples, smile, cum, huge_breasts, nude, paizuri, pov, looking_at_viewer, mosaic_censoring, pink_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_bobbles | scythe | solo | smile | cleavage | spider_lily | bangs | blue_dress | looking_at_viewer | obi | puffy_short_sleeves | coin | holding_scythe | open_mouth | full_body | tabi | simple_background | standing | white_socks | closed_mouth | white_background | sandals | blue_kimono | frills | 2girls | green_hair | hat | flower | rod_of_remorse | 1boy | blush | hetero | solo_focus | penis | nipples | cum | huge_breasts | nude | paizuri | pov | mosaic_censoring | pink_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:---------|:-------|:--------|:-----------|:--------------|:--------|:-------------|:--------------------|:------|:----------------------|:-------|:-----------------|:-------------|:------------|:-------|:--------------------|:-----------|:--------------|:---------------|:-------------------|:----------|:--------------|:---------|:---------|:-------------|:------|:---------|:-----------------|:-------|:--------|:---------|:-------------|:--------|:----------|:------|:---------------|:-------|:----------|:------|:-------------------|:------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 11 |  |  |  |  |  | X | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | X | | X | X | X | | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 5 | 15 |  |  |  |  |  | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | |
| 6 | 9 |  |  |  |  |  | X | X | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/onozuka_komachi_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T18:34:52+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T15:46:13+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of onozuka\_komachi/小野塚小町 (Touhou)
==========================================
This is the dataset of onozuka\_komachi/小野塚小町 (Touhou), containing 500 images and their tags.
The core tags of this character are 'red\_hair, two\_side\_up, hair\_ornament, red\_eyes, short\_hair, breasts, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
dd60d6a7be1bcff454ab2d37984fdda50c9a6169
|
# Dataset Card for "CSIC_BERT_Finetuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/CSIC_BERT_Finetuned
|
[
"region:us"
] |
2023-08-17T18:35:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, {"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, 
{"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": "106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": "160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": 
"190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": "214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": "268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": 
"286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": "322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": "376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": 
"382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": "430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": 
"478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": "484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": "538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": 
"574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": "592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": "646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": 
"670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": "700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": "754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": 
"766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115621178.4375, "num_examples": 37500}, {"name": "test", "num_bytes": 38540392.5, "num_examples": 12500}], "download_size": 211877843, "dataset_size": 154161570.9375}}
|
2023-08-23T06:31:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSIC_BERT_Finetuned"
More Information needed
|
[
"# Dataset Card for \"CSIC_BERT_Finetuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSIC_BERT_Finetuned\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSIC_BERT_Finetuned\"\n\nMore Information needed"
] |
7862bc8207ff68dc1ee66bbddcf377c70439e2e5
|
# Dataset of komeiji_satori/古明地さとり/코메이지사토리 (Touhou)
This is the dataset of komeiji_satori/古明地さとり/코메이지사토리 (Touhou), containing 500 images and their tags.
The core tags of this character are `short_hair, hairband, third_eye, pink_hair, pink_eyes, black_hairband, bangs, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 729.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_satori_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 430.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_satori_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1233 | 895.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_satori_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 652.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_satori_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1233 | 1.19 GiB | [Download](https://huggingface.co/datasets/CyberHarem/komeiji_satori_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
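Beyond the raw archive, the IMG+TXT packages in the table above can be fetched directly without waifuc. The following is a non-authoritative sketch: it assumes `dataset-800.zip` unpacks into flat pairs of images and same-named `.txt` tag files, which is the typical layout of IMG+TXT packages.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/komeiji_satori_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# unpack it into a local working directory
target_dir = 'komeiji_satori_800'
os.makedirs(target_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(target_dir)

# each image is assumed to ship with a same-named .txt file holding its tags
for name in sorted(os.listdir(target_dir)):
    if name.endswith('.txt'):
        with open(os.path.join(target_dir, name), encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```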
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/komeiji_satori_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, eyeball, heart, solo, skirt, red_eyes |
| 1 | 9 |  |  |  |  |  | 1girl, heart, long_sleeves, shirt, solo, wide_sleeves, looking_at_viewer, eyeball, purple_eyes, purple_hair, pink_skirt |
| 2 | 20 |  |  |  |  |  | 1girl, blue_shirt, long_sleeves, looking_at_viewer, solo, wide_sleeves, frilled_sleeves, pink_skirt, simple_background, white_background, frilled_shirt_collar, closed_mouth, blouse, eyeball, heart_hair_ornament, blush, buttons, cowboy_shot, hair_between_eyes, ribbon_trim, smile, rose_print |
| 3 | 11 |  |  |  |  |  | 1girl, heart, long_sleeves, shirt, solo, looking_at_viewer, wide_sleeves, blush, upper_body, open_mouth, eyeball |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | eyeball | heart | solo | skirt | red_eyes | long_sleeves | shirt | wide_sleeves | looking_at_viewer | purple_eyes | purple_hair | pink_skirt | blue_shirt | frilled_sleeves | simple_background | white_background | frilled_shirt_collar | closed_mouth | blouse | heart_hair_ornament | blush | buttons | cowboy_shot | hair_between_eyes | ribbon_trim | smile | rose_print | upper_body | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:--------|:-------|:--------|:-----------|:---------------|:--------|:---------------|:--------------------|:--------------|:--------------|:-------------|:-------------|:------------------|:--------------------|:-------------------|:-----------------------|:---------------|:---------|:----------------------|:--------|:----------|:--------------|:--------------------|:--------------|:--------|:-------------|:-------------|:-------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 2 | 20 |  |  |  |  |  | X | X | | X | | | X | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | |
| 3 | 11 |  |  |  |  |  | X | X | X | X | | | X | X | X | X | | | | | | | | | | | | X | | | | | | | X | X |
|
CyberHarem/komeiji_satori_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T18:41:06+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T08:12:52+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of komeiji\_satori/古明地さとり/코메이지사토리 (Touhou)
==================================================
This is the dataset of komeiji\_satori/古明地さとり/코메이지사토리 (Touhou), containing 500 images and their tags.
The core tags of this character are 'short\_hair, hairband, third\_eye, pink\_hair, pink\_eyes, black\_hairband, bangs, hair\_ornament', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
8f7ae3a27620c3c628d61da84c72a8e7ad18bd07
|
# Dataset Card for "CSIC_RoBERTa_Finetuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/CSIC_RoBERTa_Finetuned
|
[
"region:us"
] |
2023-08-17T18:45:44+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, {"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, 
{"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": "106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": "160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": 
"190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": "214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": "268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": 
"286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": "322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": "376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": 
"382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": "430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": 
"478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": "484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": "538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": 
"574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": "592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": "646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": 
"670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": "700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": "754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": 
"766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115621178.4375, "num_examples": 37500}, {"name": "test", "num_bytes": 38540392.5, "num_examples": 12500}], "download_size": 211878304, "dataset_size": 154161570.9375}}
|
2023-08-23T06:40:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSIC_RoBERTa_Finetuned"
More Information needed
|
[
"# Dataset Card for \"CSIC_RoBERTa_Finetuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSIC_RoBERTa_Finetuned\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSIC_RoBERTa_Finetuned\"\n\nMore Information needed"
] |
f74da723b68789871237c7ce5a140a674e8420cf
|
# JamendoLyrics MultiLang dataset for lyrics research
A dataset containing 80 songs with different genres and languages along with lyrics that
are time-aligned on a word-by-word level (with start and end times) to the music.
To cite this dataset and for more information, please refer to the following paper, where this
dataset was first used:
[Similarity-based Audio-Lyrics Alignment of Multiple Languages
](https://arxiv.org/abs/2306.07744)
\
[ICASSP 2023](https://ieeexplore.ieee.org/document/10096725)
\
Simon Durand, Daniel Stoller, Sebastian Ewert
## Installation
The dataset can be used without installation by cloning it from this GitHub repository.
For running any of the included scripts, we require Python 3.10 with packages installed as
listed in ``requirements.txt.``
## Metadata CSV
All songs are listed in `JamendoLyrics.csv` together with their metadata.
To load annotations you are interested in, you can iterate over this CSV and use the `Filepath`
column to build file paths to files containing the data for each song (audio file, lyrics
annotations). Among the metadata, "LyricOverlap" indicates whether the lyrics in the song overlap,
"Polyphonic" indicates whether multiple singers sing the same lyrics but with different melodies,
and "NonLexical" indicates whether there is non-lexical singing (e.g. scatting).
## Lyrics files
In the `lyrics` subfolder, we provide the lyrics to each song as `SONG_NAME.txt` (normalized, e.g. special characters and characters not supported in `vocab/international.characters` are removed).
Furthermore, `SONG_NAME.words.txt` contains all the words, separated by
lines, ignoring the paragraph structure of the original lyrics. This is used for the word-level timestamp annotations.
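For example, the word list of a song can be read line by line (a trivial sketch; `SONG_NAME` is a placeholder for the actual base name taken from the metadata CSV):
```python
from pathlib import Path

# One word per line, in singing order; the paragraph structure is not preserved.
words = Path("lyrics/SONG_NAME.words.txt").read_text(encoding="utf-8").splitlines()
print(f"{len(words)} words, first: {words[0] if words else '(empty)'}")
```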
## Time-aligned lyrics annotations
We have aligned the lyrics on a word-by-word and line-by-line basis to the music.
Word-by-word start and end timestamps are stored in the "annotations/words" subfolder, and they
also indicate whether the word represents the end of a line as well (it will have the word end
timestamp set instead of NaN).
A line-by-line version of the lyrics is stored in the subfolder
"annotations/lines" as CSV files, denoting the start and end time of each lyrical line in the audio.
These contain one row per line in the form of `(start_time, end_time, lyrics_line)` and can be
used to train or evaluate models only on a line-by-line level.
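As a minimal sketch (not an official loader), the line-level CSVs could be parsed like this, assuming each row is `start_time,end_time,lyrics_line` as described above and allowing for an optional header row:
```python
import csv

def load_line_annotations(path):
    """Read (start_time, end_time, lyrics_line) rows from a line-level CSV."""
    lines = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue  # skip empty or malformed rows
            try:
                start, end = float(row[0]), float(row[1])
            except ValueError:
                continue  # most likely a header row
            lines.append((start, end, ",".join(row[2:])))
    return lines

# Hypothetical usage:
# for start, end, text in load_line_annotations("annotations/lines/SONG_NAME.csv"):
#     print(f"{start:8.2f} {end:8.2f}  {text}")
```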
### Modifying word-by-word timestamps
In case the word timestamps are modified, one needs to run `generate_lines.py` to
update the line-level timestamp files in "annotations/lines" accordingly.
This is because the line-level annotation in "annotations/lines" is auto-generated based on the manual
word-by-word annotations: The start timestamp for each line is set to be the start timestamp of the
word after an end-of-line word.
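The rule can be illustrated with a small sketch (this is not the actual `generate_lines.py`; the keys "start" and "line_end" are made-up names for illustration, and the lyrics text of each line is omitted):
```python
import math

def words_to_line_spans(words):
    """Illustrative only: derive (line_start, line_end) spans from word rows.

    Each item in `words` is assumed to be a dict with the word's "start" time
    and a "line_end" value that is NaN except on the last word of a line.
    """
    spans = []
    line_start = None
    for word in words:
        if line_start is None:
            # The first word after an end-of-line word opens the next line.
            line_start = word["start"]
        if not math.isnan(word["line_end"]):
            spans.append((line_start, word["line_end"]))
            line_start = None
    return spans
```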
In case you find errors in the timestamp annotations, we encourage you to submit a pull request
to this repository so we can correct the errors.
## Acknowledgements
We want to acknowledge our 2022 Research intern, [Emir Demirel](https://emirdemirel.github.io/),
and Torr Yatco for their help in assembling this dataset.
## Original dataset
This dataset is an extended version of the original JamendoLyrics dataset presented in the paper
[End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model](https://arxiv.org/abs/1902.06797)
It originally contained only 20 English songs and is now deprecated, as the annotations in this extended version are slightly improved,
so we discourage its use in the future.
You can find it archived [here](https://github.com/f90/jamendolyrics/releases/tag/original).
|
morgangautho/jamendolyrics
|
[
"arxiv:2306.07744",
"arxiv:1902.06797",
"region:us"
] |
2023-08-17T18:48:18+00:00
|
{}
|
2023-08-17T18:51:56+00:00
|
[
"2306.07744",
"1902.06797"
] |
[] |
TAGS
#arxiv-2306.07744 #arxiv-1902.06797 #region-us
|
# JamendoLyrics MultiLang dataset for lyrics research
A dataset containing 80 songs with different genres and languages along with lyrics that
are time-aligned on a word-by-word level (with start and end times) to the music.
To cite this dataset and for more information, please refer to the following paper, where this
dataset was first used:
Similarity-based Audio-Lyrics Alignment of Multiple Languages
\
ICASSP 2023
\
Simon Durand, Daniel Stoller, Sebastian Ewert
## Installation
The dataset can be used without installation by cloning it from this Github repository.
For running any of the included scripts, we require Python 3.10 with packages installed as
listed in ''URL.''
## Metadata CSV
All songs are listed in 'URL' together with their metadata.
To load annotations you are interested in, you can iterate over this CSV and use the 'Filepath'
column to build file paths to files containing the data for each song (audio file, lyrics
annotations). Among the metadata, "LyricOverlap" refers to whether or not the lyrics in the song overlap,
“Polyphonic” refers to whether or not there are multiple singers singing the same lyrics, but with different melodies,
and "NonLexical" refers to whether or not there is non-lexical singing (eg: scatting).
## Lyrics files
In the 'lyrics' subfolder, we provide the lyrics to each song as 'SONG_NAME.txt' (normalized, e.
g. special characters and characters not supported in 'vocab/international.characters' are removed)
Furthermore, 'SONG_NAME.URL' contains all the words, separated by
lines, ignoring the paragraph structure of the original lyrics. This is used for the word-level timestamp annotations.
## Time-aligned lyrics annotations
We have aligned the lyrics on a word-by-word and line-by-line basis to the music.
Word-by-word start and end timestamps are stored in the "annotations/words" subfolder, and they
also indicate whether the word represents the end of a line as well (it will have the word end
timestamp set instead of NaN).
A line-by-line version of the lyrics is stored in the subfolder
"annotations/lines" as CSV files, denoting the start and end time of each lyrical line in the audio.
These contain one row per line in the form of '(start_time, end_time, lyrics_line)' and can be
used to train or evaluate models only on a line-by-line level.
### Modifying word-by-word timestamps
In case the word timestamps are modified, one needs to run 'generate_lines.py' to
update the line-level timestamp files in "annotations/lines" accordingly.
This is because the line-level annotation in "annotations/lines" is auto-generated based on the manual
word-by-word annotations: The start timestamp for each line is set to be the start timestamp of the
word after an end-of-line word.
In case you find errors in the timestamp annotations, we encourage you to submit a pull request
to this repository so we can correct the errors.
## Acknowledgements
We want to acknowledge our 2022 Research intern, Emir Demirel,
and Torr Yatco for their help in assembling this dataset.
## Original dataset
This dataset is an extended version of the original JamendoLyrics dataset presented in the paper
End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model
It originally contained only 20 English songs and is now deprecated as annotations are slightly improved,
so we discourage its use in the future.
You can find it archived here.
|
[
"# JamendoLyrics MultiLang dataset for lyrics research\n\nA dataset containing 80 songs with different genres and languages along with lyrics that \nare time-aligned on a word-by-word level (with start and end times) to the music.\n\nTo cite this dataset and for more information, please refer to the following paper, where this \ndataset was first used:\n\nSimilarity-based Audio-Lyrics Alignment of Multiple Languages\n\n\\\nICASSP 2023\n\\\nSimon Durand, Daniel Stoller, Sebastian Ewert",
"## Installation\n\nThe dataset can be used without installation by cloning it from this Github repository. \n\nFor running any of the included scripts, we require Python 3.10 with packages installed as \nlisted in ''URL.''",
"## Metadata CSV\n\nAll songs are listed in 'URL' together with their metadata.\nTo load annotations you are interested in, you can iterate over this CSV and use the 'Filepath' \ncolumn to build file paths to files containing the data for each song (audio file, lyrics \nannotations). Among the metadata, \"LyricOverlap\" refers to whether or not the lyrics in the song overlap,\n“Polyphonic” refers to whether or not there are multiple singers singing the same lyrics, but with different melodies,\nand \"NonLexical\" refers to whether or not there is non-lexical singing (eg: scatting).",
"## Lyrics files\n\nIn the 'lyrics' subfolder, we provide the lyrics to each song as 'SONG_NAME.txt' (normalized, e.\ng. special characters and characters not supported in 'vocab/international.characters' are removed)\n\nFurthermore, 'SONG_NAME.URL' contains all the words, separated by \nlines, ignoring the paragraph structure of the original lyrics. This is used for the word-level timestamp annotations.",
"## Time-aligned lyrics annotations\n\nWe have aligned the lyrics on a word-by-word and line-by-line basis to the music.\n\nWord-by-word start and end timestamps are stored in the \"annotations/words\" subfolder, and they \nalso indicate whether the word represents the end of a line as well (it will have the word end \ntimestamp set instead of NaN).\n\nA line-by-line version of the lyrics is stored in the subfolder\n\"annotations/lines\" as CSV files, denoting the start and end time of each lyrical line in the audio.\nThese contain one row per line in the form of '(start_time, end_time, lyrics_line)' and can be\nused to train or evaluate models only on a line-by-line level.",
"### Modifying word-by-word timestamps\n\nIn case the word timestamps are modified, one needs to run 'generate_lines.py' to \nupdate the line-level timestamp files in \"annotations/lines\" accordingly. \n\nThis is because the line-level annotation in \"annotations/lines\" is auto-generated based on the manual\nword-by-word annotations: The start timestamp for each line is set to be the start timestamp of the \nword after an end-of-line word.\n\nIn case you find errors in the timestamp annotations, we encourage you to submit a pull request \nto this repository so we can correct the errors.",
"## Acknowledgements\n\nWe want to acknowledge our 2022 Research intern, Emir Demirel, \nand Torr Yatco for their help in assembling this dataset.",
"## Original dataset\n\nThis dataset is an extended version of the original JamendoLyrics dataset presented in the paper\n\nEnd-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model\n\nIt originally contained only 20 English songs and is now deprecated as annotations are slightly improved, \nso we discourage its use in the future.\nYou can find it archived here."
] |
[
"TAGS\n#arxiv-2306.07744 #arxiv-1902.06797 #region-us \n",
"# JamendoLyrics MultiLang dataset for lyrics research\n\nA dataset containing 80 songs with different genres and languages along with lyrics that \nare time-aligned on a word-by-word level (with start and end times) to the music.\n\nTo cite this dataset and for more information, please refer to the following paper, where this \ndataset was first used:\n\nSimilarity-based Audio-Lyrics Alignment of Multiple Languages\n\n\\\nICASSP 2023\n\\\nSimon Durand, Daniel Stoller, Sebastian Ewert",
"## Installation\n\nThe dataset can be used without installation by cloning it from this Github repository. \n\nFor running any of the included scripts, we require Python 3.10 with packages installed as \nlisted in ''URL.''",
"## Metadata CSV\n\nAll songs are listed in 'URL' together with their metadata.\nTo load annotations you are interested in, you can iterate over this CSV and use the 'Filepath' \ncolumn to build file paths to files containing the data for each song (audio file, lyrics \nannotations). Among the metadata, \"LyricOverlap\" refers to whether or not the lyrics in the song overlap,\n“Polyphonic” refers to whether or not there are multiple singers singing the same lyrics, but with different melodies,\nand \"NonLexical\" refers to whether or not there is non-lexical singing (eg: scatting).",
"## Lyrics files\n\nIn the 'lyrics' subfolder, we provide the lyrics to each song as 'SONG_NAME.txt' (normalized, e.\ng. special characters and characters not supported in 'vocab/international.characters' are removed)\n\nFurthermore, 'SONG_NAME.URL' contains all the words, separated by \nlines, ignoring the paragraph structure of the original lyrics. This is used for the word-level timestamp annotations.",
"## Time-aligned lyrics annotations\n\nWe have aligned the lyrics on a word-by-word and line-by-line basis to the music.\n\nWord-by-word start and end timestamps are stored in the \"annotations/words\" subfolder, and they \nalso indicate whether the word represents the end of a line as well (it will have the word end \ntimestamp set instead of NaN).\n\nA line-by-line version of the lyrics is stored in the subfolder\n\"annotations/lines\" as CSV files, denoting the start and end time of each lyrical line in the audio.\nThese contain one row per line in the form of '(start_time, end_time, lyrics_line)' and can be\nused to train or evaluate models only on a line-by-line level.",
"### Modifying word-by-word timestamps\n\nIn case the word timestamps are modified, one needs to run 'generate_lines.py' to \nupdate the line-level timestamp files in \"annotations/lines\" accordingly. \n\nThis is because the line-level annotation in \"annotations/lines\" is auto-generated based on the manual\nword-by-word annotations: The start timestamp for each line is set to be the start timestamp of the \nword after an end-of-line word.\n\nIn case you find errors in the timestamp annotations, we encourage you to submit a pull request \nto this repository so we can correct the errors.",
"## Acknowledgements\n\nWe want to acknowledge our 2022 Research intern, Emir Demirel, \nand Torr Yatco for their help in assembling this dataset.",
"## Original dataset\n\nThis dataset is an extended version of the original JamendoLyrics dataset presented in the paper\n\nEnd-to-end Lyrics Alignment for Polyphonic Music Using an Audio-to-Character Recognition Model\n\nIt originally contained only 20 English songs and is now deprecated as annotations are slightly improved, \nso we discourage its use in the future.\nYou can find it archived here."
] |
[
23,
118,
48,
155,
107,
187,
156,
35,
100
] |
[
"passage: TAGS\n#arxiv-2306.07744 #arxiv-1902.06797 #region-us \n# JamendoLyrics MultiLang dataset for lyrics research\n\nA dataset containing 80 songs with different genres and languages along with lyrics that \nare time-aligned on a word-by-word level (with start and end times) to the music.\n\nTo cite this dataset and for more information, please refer to the following paper, where this \ndataset was first used:\n\nSimilarity-based Audio-Lyrics Alignment of Multiple Languages\n\n\\\nICASSP 2023\n\\\nSimon Durand, Daniel Stoller, Sebastian Ewert## Installation\n\nThe dataset can be used without installation by cloning it from this Github repository. \n\nFor running any of the included scripts, we require Python 3.10 with packages installed as \nlisted in ''URL.''## Metadata CSV\n\nAll songs are listed in 'URL' together with their metadata.\nTo load annotations you are interested in, you can iterate over this CSV and use the 'Filepath' \ncolumn to build file paths to files containing the data for each song (audio file, lyrics \nannotations). Among the metadata, \"LyricOverlap\" refers to whether or not the lyrics in the song overlap,\n“Polyphonic” refers to whether or not there are multiple singers singing the same lyrics, but with different melodies,\nand \"NonLexical\" refers to whether or not there is non-lexical singing (eg: scatting).## Lyrics files\n\nIn the 'lyrics' subfolder, we provide the lyrics to each song as 'SONG_NAME.txt' (normalized, e.\ng. special characters and characters not supported in 'vocab/international.characters' are removed)\n\nFurthermore, 'SONG_NAME.URL' contains all the words, separated by \nlines, ignoring the paragraph structure of the original lyrics. This is used for the word-level timestamp annotations."
] |
b925861c278f1a052eca96bd0090a76c8696adb3
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
msong/github-issues
|
[
"region:us"
] |
2023-08-17T18:50:31+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": 
"string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "int64"}, {"name": "updated_at", "dtype": "int64"}, {"name": "due_on", "dtype": "int64"}, {"name": "closed_at", "dtype": "int64"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "int64"}, {"name": "updated_at", "dtype": "int64"}, {"name": "closed_at", "dtype": "int64"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}]}, {"name": "body", "dtype": "string"}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 10233851, "num_examples": 3019}], "download_size": 0, "dataset_size": 10233851}}
|
2023-08-17T19:02:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "github-issues"
More Information needed
|
[
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
9996a168465c3c47ffc081b8bc95649241b89fc0
|
# Dataset Card for "CSIC_DistilRoBERTa_Finetuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/CSIC_DistilRoBERTa_Finetuned
|
[
"region:us"
] |
2023-08-17T18:54:04+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, {"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, 
{"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": "106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": "160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": 
"190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": "214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": "268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": 
"286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": "322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": "376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": 
"382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": "430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": 
"478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": "484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": "538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": 
"574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": "592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": "646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": 
"670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": "700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": "754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": 
"766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115621178.4375, "num_examples": 37500}, {"name": "test", "num_bytes": 38540392.5, "num_examples": 12500}], "download_size": 211877871, "dataset_size": 154161570.9375}}
|
2023-08-23T06:48:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSIC_DistilRoBERTa_Finetuned"
More Information needed
|
[
"# Dataset Card for \"CSIC_DistilRoBERTa_Finetuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSIC_DistilRoBERTa_Finetuned\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSIC_DistilRoBERTa_Finetuned\"\n\nMore Information needed"
] |
8fce0bf256964850146e275dcbf3a9f12df50372
|
# Dataset Card for "CSIC_GPT2_Finetuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/CSIC_GPT2_Finetuned
|
[
"region:us"
] |
2023-08-17T19:04:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, {"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, 
{"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": "106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": "160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": 
"190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": "214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": "268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": 
"286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": "322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": "376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": 
"382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": "430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": 
"478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": "484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": "538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": 
"574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": "592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": "646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": 
"670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": "700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": "754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": 
"766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115621178.4375, "num_examples": 37500}, {"name": "test", "num_bytes": 38540392.5, "num_examples": 12500}], "download_size": 211864778, "dataset_size": 154161570.9375}}
|
2023-08-23T06:58:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSIC_GPT2_Finetuned"
More Information needed
|
[
"# Dataset Card for \"CSIC_GPT2_Finetuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSIC_GPT2_Finetuned\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSIC_GPT2_Finetuned\"\n\nMore Information needed"
] |
96e591119cf405ed22d5d383bbc51805720fba13
|
# Dataset of ibaraki_kasen/茨華仙 (Touhou)
This is the dataset of ibaraki_kasen/茨華仙 (Touhou), containing 500 images and their tags.
The core tags of this character are `pink_hair, hair_bun, double_bun, short_hair, pink_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 609.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibaraki_kasen_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 388.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibaraki_kasen_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1139 | 776.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibaraki_kasen_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 559.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ibaraki_kasen_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1139 | 1.01 GiB | [Download](https://huggingface.co/datasets/CyberHarem/ibaraki_kasen_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ibaraki_kasen_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
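The `IMG+TXT` packages listed above (e.g. `dataset-800.zip`) do not require waifuc at all. The sketch below downloads that archive and pairs each image with its tags; it assumes the conventional layout in which every image ships with a same-named `.txt` file of comma-separated tags, which is an assumption about the package rather than something stated on this card.
```python
import os
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package from the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/ibaraki_kasen_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract it into a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its same-named tag file (assumed layout)
for image_path in sorted(Path(dataset_dir).rglob('*')):
    if image_path.suffix.lower() not in {'.png', '.jpg', '.jpeg', '.webp'}:
        continue
    tag_file = image_path.with_suffix('.txt')
    if tag_file.exists():
        tags = [t.strip() for t in tag_file.read_text(encoding='utf-8').split(',') if t.strip()]
        print(image_path.name, tags[:5])
```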
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, bandaged_arm, bun_cover, green_skirt, pink_rose, puffy_short_sleeves, smile, solo, tabard, white_shirt, bangs, blush, closed_mouth, hair_between_eyes, looking_at_viewer, upper_body, white_background, simple_background, chinese_clothes, petals |
| 1 | 5 |  |  |  |  |  | 1girl, ahoge, bandaged_arm, bangs, bun_cover, chain, green_skirt, hair_between_eyes, looking_at_viewer, pink_rose, puffy_short_sleeves, shackles, solo, tabard, white_shirt, chinese_clothes, closed_mouth, simple_background, white_background, smile, upper_body |
| 2 | 24 |  |  |  |  |  | 1girl, bandages, bun_cover, rose, shackles, solo, skirt, tabard, chain, smile, chinese_clothes, red_eyes |
| 3 | 16 |  |  |  |  |  | 1girl, bandages, bun_cover, chinese_clothes, rose, solo, skirt, red_eyes, tabard, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bandaged_arm | bun_cover | green_skirt | pink_rose | puffy_short_sleeves | smile | solo | tabard | white_shirt | bangs | blush | closed_mouth | hair_between_eyes | looking_at_viewer | upper_body | white_background | simple_background | chinese_clothes | petals | ahoge | chain | shackles | bandages | rose | skirt | red_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------|:--------------|:------------|:----------------------|:--------|:-------|:---------|:--------------|:--------|:--------|:---------------|:--------------------|:--------------------|:-------------|:-------------------|:--------------------|:------------------|:---------|:--------|:--------|:-----------|:-----------|:-------|:--------|:-----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | | X | X | X | | | | |
| 2 | 24 |  |  |  |  |  | X | | X | | | | X | X | X | | | | | | | | | | X | | | X | X | X | X | X | X |
| 3 | 16 |  |  |  |  |  | X | | X | | | | X | X | X | | | | | | | | | | X | | | | | X | X | X | X |
|
CyberHarem/ibaraki_kasen_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T19:20:24+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T17:27:09+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of ibaraki\_kasen/茨華仙 (Touhou)
======================================
This is the dataset of ibaraki\_kasen/茨華仙 (Touhou), containing 500 images and their tags.
The core tags of this character are 'pink\_hair, hair\_bun, double\_bun, short\_hair, pink\_eyes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
fb49bcaa81cec1d46811a46d257eb80f1904ea09
|
# Dataset of inubashiri_momiji/犬走椛/이누바시리모미지 (Touhou)
This is the dataset of inubashiri_momiji/犬走椛/이누바시리모미지 (Touhou), containing 500 images and their tags.
The core tags of this character are `animal_ears, wolf_ears, short_hair, red_eyes, hat, tokin_hat, white_hair, tail, wolf_tail, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 602.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inubashiri_momiji_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 378.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inubashiri_momiji_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1193 | 772.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inubashiri_momiji_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 549.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inubashiri_momiji_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1193 | 1.00 GiB | [Download](https://huggingface.co/datasets/CyberHarem/inubashiri_momiji_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/inubashiri_momiji_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
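For a rough local look at which tags dominate these clusters, a frequency count over the extracted tag files can help. The sketch below assumes an IMG+TXT package from the table above has already been extracted and that each image ships with a same-named `.txt` file of comma-separated tags (both are assumptions about the package layout, not guarantees made by this card).
```python
from collections import Counter
from pathlib import Path

# directory where an IMG+TXT package (e.g. dataset-800.zip) was extracted
dataset_dir = Path('dataset_dir')

counter = Counter()
for tag_file in dataset_dir.glob('*.txt'):
    tags = [t.strip() for t in tag_file.read_text(encoding='utf-8').split(',') if t.strip()]
    counter.update(tags)

# the most frequent tags roughly mirror the cluster tables below
for tag, count in counter.most_common(20):
    print(f'{tag}: {count}')
```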
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blush, detached_sleeves, grey_hair, looking_at_viewer, obi, solo, bridal_gauntlets, japanese_clothes, kourindou_tengu_costume, smile, wide_sleeves, long_sleeves, sitting, skirt |
| 1 | 9 |  |  |  |  |  | 1girl, maple_leaf, solo, sword, detached_sleeves, pom_pom_(clothes), skirt, looking_at_viewer, autumn_leaves, shield, wide_sleeves, sarashi |
| 2 | 7 |  |  |  |  |  | 1girl, bangs, bare_shoulders, detached_sleeves, looking_at_viewer, pom_pom_(clothes), red_headwear, solo, white_shirt, animal_ear_fluff, autumn_leaves, black_skirt, blush, closed_mouth, maple_leaf, wide_sleeves, large_breasts, ribbon-trimmed_sleeves, sleeveless_shirt, hair_between_eyes, navel, smile, turtleneck |
| 3 | 24 |  |  |  |  |  | 1girl, solo, bare_shoulders, detached_sleeves, looking_at_viewer, blush, pom_pom_(clothes), smile, large_breasts, open_mouth, skirt |
| 4 | 8 |  |  |  |  |  | 1girl, detached_sleeves, solo, sword, water, maple_leaf, red_scarf, skirt |
| 5 | 7 |  |  |  |  |  | 1girl, detached_sleeves, solo, skirt, midriff, navel, sword, bare_shoulders, looking_at_viewer, maple_leaf, medium_breasts, scarf, shield |
| 6 | 5 |  |  |  |  |  | 1girl, blush, cleavage, large_breasts, looking_at_viewer, solo, navel, smile, collarbone, covered_nipples, day, lens_flare, micro_bikini, open_mouth, red_bikini, side-tie_bikini_bottom, sky |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | detached_sleeves | grey_hair | looking_at_viewer | obi | solo | bridal_gauntlets | japanese_clothes | kourindou_tengu_costume | smile | wide_sleeves | long_sleeves | sitting | skirt | maple_leaf | sword | pom_pom_(clothes) | autumn_leaves | shield | sarashi | bangs | bare_shoulders | red_headwear | white_shirt | animal_ear_fluff | black_skirt | closed_mouth | large_breasts | ribbon-trimmed_sleeves | sleeveless_shirt | hair_between_eyes | navel | turtleneck | open_mouth | water | red_scarf | midriff | medium_breasts | scarf | cleavage | collarbone | covered_nipples | day | lens_flare | micro_bikini | red_bikini | side-tie_bikini_bottom | sky |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------------|:------------|:--------------------|:------|:-------|:-------------------|:-------------------|:--------------------------|:--------|:---------------|:---------------|:----------|:--------|:-------------|:--------|:--------------------|:----------------|:---------|:----------|:--------|:-----------------|:---------------|:--------------|:-------------------|:--------------|:---------------|:----------------|:-------------------------|:-------------------|:--------------------|:--------|:-------------|:-------------|:--------|:------------|:----------|:-----------------|:--------|:-----------|:-------------|:------------------|:------|:-------------|:---------------|:-------------|:-------------------------|:------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | | X | | X | | X | | | | | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | | X | | X | | | | X | X | | | | X | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | X | X | | X | | X | | | | X | | | | X | | | X | | | | | X | | | | | | X | | | | | | X | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | X | | X | | X | | | | | | | | X | X | X | | | X | | | X | | | | | | | | | | X | | | | | X | X | X | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | X | | X | | | | | | X | X | X | X | X | X | X | X | X |
|
CyberHarem/inubashiri_momiji_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T19:22:04+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T10:08:10+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of inubashiri\_momiji/犬走椛/이누바시리모미지 (Touhou)
===================================================
This is the dataset of inubashiri\_momiji/犬走椛/이누바시리모미지 (Touhou), containing 500 images and their tags.
The core tags of this character are 'animal\_ears, wolf\_ears, short\_hair, red\_eyes, hat, tokin\_hat, white\_hair, tail, wolf\_tail, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code.
List of Clusters
----------------
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
e5996ba0ce4d7a512ecd696416d3cdab6bfe0d39
|
# Dataset Card for AIVision360-8k
## Dataset Description
AIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\
The AIVision360-8k dataset is a curated collection sourced from "ainewshub.ie", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: "question", "response", and "context". These columns offer a structured view of AI news interactions, where the "question" and "response" provide insights on AI subjects, and the "context" column gives additional background information.
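A minimal loading sketch with the `datasets` library is shown below; the split name `train` is an assumption and should be adjusted to whatever splits the repository actually exposes.
```python
from datasets import load_dataset

# load the dataset from the Hugging Face Hub (split name assumed to be "train")
dataset = load_dataset("ceadar-ie/AIVision360-8k", split="train")

# each record carries the three columns described above
example = dataset[0]
print(example["question"])
print(example["response"])
print(example["context"])
```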
### Key Features
• Domain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\
• Source Reliability: Data is sourced from established publishers featured on "ainewshub.ie", ensuring content reliability.\
• Licensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\
• Accessibility: Intended for public use to support collaboration and analysis in the AI community.\
• Volume: Contains over 8,000 entries, making it a significant resource for AI news analysis.
### Intended Use Cases
• Model Training: Suitable for training language models, enhancing their capacity in AI news discussions.\
• Research: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.
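For the model-training use case above, records usually need to be rendered into instruction-style prompts first. The template below is only an illustrative assumption; match the prompt format to whichever instruction-tuned model is being fine-tuned.
```python
def to_instruction_prompt(example: dict) -> dict:
    """Render one AIVision360-8k record into a single training string (assumed template)."""
    prompt = (
        "### Context:\n" + example["context"] + "\n\n"
        "### Question:\n" + example["question"] + "\n\n"
        "### Response:\n" + example["response"]
    )
    return {"text": prompt}

# with the `datasets` library this can be applied to a whole split, e.g.:
# dataset = dataset.map(to_instruction_prompt)
```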
### Limitations
• Despite careful curation, potential biases from AI news sources may persist in the dataset.\
• Its focus is on AI news, which may reflect specific perspectives of this niche.
## Language
English
### Data Privacy
The dataset comprises publicly available news articles and does not include private identifiers or sensitive information.
### License/Attribution
Copyright © 2023 CeADAR Connect Group. Developed by CeADAR (ceadar.ie); its use is governed by the Apache 2.0 license.
### Sources
Curated exclusively from ainewshub.ie, a recognized platform for AI news.
## Annotator Guidelines
• Question: Represents a query derived from the news article.\
• Response: Provides an answer based on the article's content.\
• Context: Offers background information for the query-answer pair.
### Feedback
For any questions or feedback related to the dataset, please direct your communications to [email protected]
### Disclaimer
This dataset is provided "as is" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use.
|
ceadar-ie/AIVision360-8k
|
[
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"LLM",
"Generative AI",
"Finetune",
"Domain Specific Data",
"doi:10.57967/hf/0998",
"region:us"
] |
2023-08-17T19:27:23+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "conversational", "text-generation"], "tags": ["LLM", "Generative AI", "Finetune", "Domain Specific Data"]}
|
2023-08-17T21:04:53+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #LLM #Generative AI #Finetune #Domain Specific Data #doi-10.57967/hf/0998 #region-us
|
# Dataset Card for AIVision360-8k
## Dataset Description
AIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\
The AIVision360-8k dataset is a curated collection sourced from "URL", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: "question", "response", and "context". These columns offer a structured view of AI news interactions, where the "question" and "response" provide insights on AI subjects, and the "context" column gives additional background information.
### Key Features
• Domain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\
• Source Reliability: Data is sourced from established publishers featured on "URL", ensuring content reliability.\
• Licensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\
• Accessibility: Intended for public use to support collaboration and analysis in the AI community.\
• Volume: Contains over 8,000 entries, making it a significant resource for AI news analysis.
### Intended Use Cases
• Model Training: Suitable for training language models, enhancing their capacity in AI news discussions.\
• Research: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.
### Limitations
• Despite careful curation, potential biases from AI news sources may persist in the dataset.\
• Its focus is on AI news, which may reflect specific perspectives of this niche.
## Language
English
### Data Privacy
The dataset comprises publicly available news articles and does not include private identifiers or sensitive information.
### License/Attribution
Copyright © 2023 CeADAR Connect Group. Developed by CeADAR (URL); its use is governed by the Apache 2.0 license.
### Sources
Curated exclusively from URL, a recognized platform for AI news.
## Annotator Guidelines
• Question: Represents a query derived from the news article.\
• Response: Provides an answer based on the article's content.\
• Context: Offers background information for the query-answer pair.
### Feedback
For any questions or feedback related to the dataset, please direct your communications to URL@URL
### Disclaimer
This dataset is provided "as is" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use.
|
[
"# Dataset Card for AIVision360-8k",
"## Dataset Description\n\nAIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\\\nThe AIVision360-8k dataset is a curated collection sourced from \"URL\", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: \"question\", \"response\", and \"context\". These columns offer a structured view of AI news interactions, where the \"question\" and \"response\" provide insights on AI subjects, and the \"context\" column gives additional background information.",
"### Key Features\n\n•\tDomain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\\\n•\tSource Reliability: Data is sourced from established publishers featured on \"URL\", ensuring content reliability.\\\n•\tLicensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\\\n•\tAccessibility: Intended for public use to support collaboration and analysis in the AI community.\\\n•\tVolume: Contains over 8,000 entries, making it a significant resource for AI news analysis.",
"### Intended Use Cases\n\n•\tModel Training: Suitable for training language models, enhancing their capacity in AI news discussions.\\\n•\tResearch: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.",
"### Limitations\n\n•\tDespite careful curation, potential biases from AI news sources may persist in the dataset.\\\n•\tIts focus is on AI news, which may reflect specific perspectives of this niche.",
"## Language\n\nEnglish",
"### Data Privacy\n\nThe dataset comprises publicly available news articles and does not include private identifiers or sensitive information.",
"### License/Attribution\n\nCopyright © 2023 CeADAR Connect Group. Developed by CeADAR (URL), its use is governed by the Apache 2.0 license.",
"### Sources\n\nCurated exclusively from URL, a recognized platform for AI news.",
"## Annotator Guidelines\n\n•\tQuestion: Represents a query derived from the news article.\\\n•\tResponse: Provides an answer based on the article's content.\\\n•\tContext: Offers background information for the query-answer pair.",
"### Feedback\n\nFor any questions or feedback related to the dataset, please direct your communications to URL@URL",
"### Disclaimer\n\nThis dataset is provided \"as is\" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use."
] |
[
"TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #LLM #Generative AI #Finetune #Domain Specific Data #doi-10.57967/hf/0998 #region-us \n",
"# Dataset Card for AIVision360-8k",
"## Dataset Description\n\nAIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\\\nThe AIVision360-8k dataset is a curated collection sourced from \"URL\", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: \"question\", \"response\", and \"context\". These columns offer a structured view of AI news interactions, where the \"question\" and \"response\" provide insights on AI subjects, and the \"context\" column gives additional background information.",
"### Key Features\n\n•\tDomain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\\\n•\tSource Reliability: Data is sourced from established publishers featured on \"URL\", ensuring content reliability.\\\n•\tLicensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\\\n•\tAccessibility: Intended for public use to support collaboration and analysis in the AI community.\\\n•\tVolume: Contains over 8,000 entries, making it a significant resource for AI news analysis.",
"### Intended Use Cases\n\n•\tModel Training: Suitable for training language models, enhancing their capacity in AI news discussions.\\\n•\tResearch: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study.",
"### Limitations\n\n•\tDespite careful curation, potential biases from AI news sources may persist in the dataset.\\\n•\tIts focus is on AI news, which may reflect specific perspectives of this niche.",
"## Language\n\nEnglish",
"### Data Privacy\n\nThe dataset comprises publicly available news articles and does not include private identifiers or sensitive information.",
"### License/Attribution\n\nCopyright © 2023 CeADAR Connect Group. Developed by CeADAR (URL), its use is governed by the Apache 2.0 license.",
"### Sources\n\nCurated exclusively from URL, a recognized platform for AI news.",
"## Annotator Guidelines\n\n•\tQuestion: Represents a query derived from the news article.\\\n•\tResponse: Provides an answer based on the article's content.\\\n•\tContext: Offers background information for the query-answer pair.",
"### Feedback\n\nFor any questions or feedback related to the dataset, please direct your communications to URL@URL",
"### Disclaimer\n\nThis dataset is provided \"as is\" without any guarantees or warranty. Although the data has been processed with care, CeADAR Connect Group is not responsible for any errors, omissions, or discrepancies within the data. Users are advised to use this dataset at their discretion and assume any risks associated with its use."
] |
[
92,
10,
185,
133,
51,
48,
3,
26,
34,
19,
56,
23,
78
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #LLM #Generative AI #Finetune #Domain Specific Data #doi-10.57967/hf/0998 #region-us \n# Dataset Card for AIVision360-8k## Dataset Description\n\nAIVision360 is the pioneering domain-specific dataset tailor-made for media and journalism, designed expressly for the instruction fine-tuning of Large Language Models (LLMs).\\\nThe AIVision360-8k dataset is a curated collection sourced from \"URL\", a platform dedicated to Artificial Intelligence news from quality-controlled publishers. It is designed to provide a comprehensive representation of AI-related discussions, highlighting current developments and trends in the field. Each entry in the dataset contains three columns: \"question\", \"response\", and \"context\". These columns offer a structured view of AI news interactions, where the \"question\" and \"response\" provide insights on AI subjects, and the \"context\" column gives additional background information.### Key Features\n\n•\tDomain Specificity: The dataset is focused on AI news, catering to researchers, developers, and specialists in the domain.\\\n•\tSource Reliability: Data is sourced from established publishers featured on \"URL\", ensuring content reliability.\\\n•\tLicensing: It is distributed under the Apache 2.0 open-source license, facilitating its use and modification.\\\n•\tAccessibility: Intended for public use to support collaboration and analysis in the AI community.\\\n•\tVolume: Contains over 8,000 entries, making it a significant resource for AI news analysis.### Intended Use Cases\n\n•\tModel Training: Suitable for training language models, enhancing their capacity in AI news discussions.\\\n•\tResearch: Useful for AI trend analysis, sentiment analysis, and linguistic pattern study."
] |
783519b0fc58c79048510a69ee45d4639859df77
|
# Dataset Card for "cross_en_laws"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nc33/cross_en_laws
|
[
"region:us"
] |
2023-08-17T19:49:48+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "float64"}, {"name": "is_answer", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 310300510, "num_examples": 189507}], "download_size": 80495498, "dataset_size": 310300510}}
|
2023-08-17T20:07:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cross_en_laws"
More Information needed
|
[
"# Dataset Card for \"cross_en_laws\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cross_en_laws\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cross_en_laws\"\n\nMore Information needed"
] |
5e3d8d2d92918462e32790ac33553182039f150f
|
# Dataset Card for "CSIC_GPTNEO_Finetuned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
EgilKarlsen/CSIC_GPTNEO_Finetuned
|
[
"region:us"
] |
2023-08-17T19:55:27+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "0", "dtype": "float32"}, {"name": "1", "dtype": "float32"}, {"name": "2", "dtype": "float32"}, {"name": "3", "dtype": "float32"}, {"name": "4", "dtype": "float32"}, {"name": "5", "dtype": "float32"}, {"name": "6", "dtype": "float32"}, {"name": "7", "dtype": "float32"}, {"name": "8", "dtype": "float32"}, {"name": "9", "dtype": "float32"}, {"name": "10", "dtype": "float32"}, {"name": "11", "dtype": "float32"}, {"name": "12", "dtype": "float32"}, {"name": "13", "dtype": "float32"}, {"name": "14", "dtype": "float32"}, {"name": "15", "dtype": "float32"}, {"name": "16", "dtype": "float32"}, {"name": "17", "dtype": "float32"}, {"name": "18", "dtype": "float32"}, {"name": "19", "dtype": "float32"}, {"name": "20", "dtype": "float32"}, {"name": "21", "dtype": "float32"}, {"name": "22", "dtype": "float32"}, {"name": "23", "dtype": "float32"}, {"name": "24", "dtype": "float32"}, {"name": "25", "dtype": "float32"}, {"name": "26", "dtype": "float32"}, {"name": "27", "dtype": "float32"}, {"name": "28", "dtype": "float32"}, {"name": "29", "dtype": "float32"}, {"name": "30", "dtype": "float32"}, {"name": "31", "dtype": "float32"}, {"name": "32", "dtype": "float32"}, {"name": "33", "dtype": "float32"}, {"name": "34", "dtype": "float32"}, {"name": "35", "dtype": "float32"}, {"name": "36", "dtype": "float32"}, {"name": "37", "dtype": "float32"}, {"name": "38", "dtype": "float32"}, {"name": "39", "dtype": "float32"}, {"name": "40", "dtype": "float32"}, {"name": "41", "dtype": "float32"}, {"name": "42", "dtype": "float32"}, {"name": "43", "dtype": "float32"}, {"name": "44", "dtype": "float32"}, {"name": "45", "dtype": "float32"}, {"name": "46", "dtype": "float32"}, {"name": "47", "dtype": "float32"}, {"name": "48", "dtype": "float32"}, {"name": "49", "dtype": "float32"}, {"name": "50", "dtype": "float32"}, {"name": "51", "dtype": "float32"}, {"name": "52", "dtype": "float32"}, {"name": "53", "dtype": "float32"}, {"name": "54", "dtype": "float32"}, {"name": "55", "dtype": "float32"}, {"name": "56", "dtype": "float32"}, {"name": "57", "dtype": "float32"}, {"name": "58", "dtype": "float32"}, {"name": "59", "dtype": "float32"}, {"name": "60", "dtype": "float32"}, {"name": "61", "dtype": "float32"}, {"name": "62", "dtype": "float32"}, {"name": "63", "dtype": "float32"}, {"name": "64", "dtype": "float32"}, {"name": "65", "dtype": "float32"}, {"name": "66", "dtype": "float32"}, {"name": "67", "dtype": "float32"}, {"name": "68", "dtype": "float32"}, {"name": "69", "dtype": "float32"}, {"name": "70", "dtype": "float32"}, {"name": "71", "dtype": "float32"}, {"name": "72", "dtype": "float32"}, {"name": "73", "dtype": "float32"}, {"name": "74", "dtype": "float32"}, {"name": "75", "dtype": "float32"}, {"name": "76", "dtype": "float32"}, {"name": "77", "dtype": "float32"}, {"name": "78", "dtype": "float32"}, {"name": "79", "dtype": "float32"}, {"name": "80", "dtype": "float32"}, {"name": "81", "dtype": "float32"}, {"name": "82", "dtype": "float32"}, {"name": "83", "dtype": "float32"}, {"name": "84", "dtype": "float32"}, {"name": "85", "dtype": "float32"}, {"name": "86", "dtype": "float32"}, {"name": "87", "dtype": "float32"}, {"name": "88", "dtype": "float32"}, {"name": "89", "dtype": "float32"}, {"name": "90", "dtype": "float32"}, {"name": "91", "dtype": "float32"}, {"name": "92", "dtype": "float32"}, {"name": "93", "dtype": "float32"}, 
{"name": "94", "dtype": "float32"}, {"name": "95", "dtype": "float32"}, {"name": "96", "dtype": "float32"}, {"name": "97", "dtype": "float32"}, {"name": "98", "dtype": "float32"}, {"name": "99", "dtype": "float32"}, {"name": "100", "dtype": "float32"}, {"name": "101", "dtype": "float32"}, {"name": "102", "dtype": "float32"}, {"name": "103", "dtype": "float32"}, {"name": "104", "dtype": "float32"}, {"name": "105", "dtype": "float32"}, {"name": "106", "dtype": "float32"}, {"name": "107", "dtype": "float32"}, {"name": "108", "dtype": "float32"}, {"name": "109", "dtype": "float32"}, {"name": "110", "dtype": "float32"}, {"name": "111", "dtype": "float32"}, {"name": "112", "dtype": "float32"}, {"name": "113", "dtype": "float32"}, {"name": "114", "dtype": "float32"}, {"name": "115", "dtype": "float32"}, {"name": "116", "dtype": "float32"}, {"name": "117", "dtype": "float32"}, {"name": "118", "dtype": "float32"}, {"name": "119", "dtype": "float32"}, {"name": "120", "dtype": "float32"}, {"name": "121", "dtype": "float32"}, {"name": "122", "dtype": "float32"}, {"name": "123", "dtype": "float32"}, {"name": "124", "dtype": "float32"}, {"name": "125", "dtype": "float32"}, {"name": "126", "dtype": "float32"}, {"name": "127", "dtype": "float32"}, {"name": "128", "dtype": "float32"}, {"name": "129", "dtype": "float32"}, {"name": "130", "dtype": "float32"}, {"name": "131", "dtype": "float32"}, {"name": "132", "dtype": "float32"}, {"name": "133", "dtype": "float32"}, {"name": "134", "dtype": "float32"}, {"name": "135", "dtype": "float32"}, {"name": "136", "dtype": "float32"}, {"name": "137", "dtype": "float32"}, {"name": "138", "dtype": "float32"}, {"name": "139", "dtype": "float32"}, {"name": "140", "dtype": "float32"}, {"name": "141", "dtype": "float32"}, {"name": "142", "dtype": "float32"}, {"name": "143", "dtype": "float32"}, {"name": "144", "dtype": "float32"}, {"name": "145", "dtype": "float32"}, {"name": "146", "dtype": "float32"}, {"name": "147", "dtype": "float32"}, {"name": "148", "dtype": "float32"}, {"name": "149", "dtype": "float32"}, {"name": "150", "dtype": "float32"}, {"name": "151", "dtype": "float32"}, {"name": "152", "dtype": "float32"}, {"name": "153", "dtype": "float32"}, {"name": "154", "dtype": "float32"}, {"name": "155", "dtype": "float32"}, {"name": "156", "dtype": "float32"}, {"name": "157", "dtype": "float32"}, {"name": "158", "dtype": "float32"}, {"name": "159", "dtype": "float32"}, {"name": "160", "dtype": "float32"}, {"name": "161", "dtype": "float32"}, {"name": "162", "dtype": "float32"}, {"name": "163", "dtype": "float32"}, {"name": "164", "dtype": "float32"}, {"name": "165", "dtype": "float32"}, {"name": "166", "dtype": "float32"}, {"name": "167", "dtype": "float32"}, {"name": "168", "dtype": "float32"}, {"name": "169", "dtype": "float32"}, {"name": "170", "dtype": "float32"}, {"name": "171", "dtype": "float32"}, {"name": "172", "dtype": "float32"}, {"name": "173", "dtype": "float32"}, {"name": "174", "dtype": "float32"}, {"name": "175", "dtype": "float32"}, {"name": "176", "dtype": "float32"}, {"name": "177", "dtype": "float32"}, {"name": "178", "dtype": "float32"}, {"name": "179", "dtype": "float32"}, {"name": "180", "dtype": "float32"}, {"name": "181", "dtype": "float32"}, {"name": "182", "dtype": "float32"}, {"name": "183", "dtype": "float32"}, {"name": "184", "dtype": "float32"}, {"name": "185", "dtype": "float32"}, {"name": "186", "dtype": "float32"}, {"name": "187", "dtype": "float32"}, {"name": "188", "dtype": "float32"}, {"name": "189", "dtype": "float32"}, {"name": 
"190", "dtype": "float32"}, {"name": "191", "dtype": "float32"}, {"name": "192", "dtype": "float32"}, {"name": "193", "dtype": "float32"}, {"name": "194", "dtype": "float32"}, {"name": "195", "dtype": "float32"}, {"name": "196", "dtype": "float32"}, {"name": "197", "dtype": "float32"}, {"name": "198", "dtype": "float32"}, {"name": "199", "dtype": "float32"}, {"name": "200", "dtype": "float32"}, {"name": "201", "dtype": "float32"}, {"name": "202", "dtype": "float32"}, {"name": "203", "dtype": "float32"}, {"name": "204", "dtype": "float32"}, {"name": "205", "dtype": "float32"}, {"name": "206", "dtype": "float32"}, {"name": "207", "dtype": "float32"}, {"name": "208", "dtype": "float32"}, {"name": "209", "dtype": "float32"}, {"name": "210", "dtype": "float32"}, {"name": "211", "dtype": "float32"}, {"name": "212", "dtype": "float32"}, {"name": "213", "dtype": "float32"}, {"name": "214", "dtype": "float32"}, {"name": "215", "dtype": "float32"}, {"name": "216", "dtype": "float32"}, {"name": "217", "dtype": "float32"}, {"name": "218", "dtype": "float32"}, {"name": "219", "dtype": "float32"}, {"name": "220", "dtype": "float32"}, {"name": "221", "dtype": "float32"}, {"name": "222", "dtype": "float32"}, {"name": "223", "dtype": "float32"}, {"name": "224", "dtype": "float32"}, {"name": "225", "dtype": "float32"}, {"name": "226", "dtype": "float32"}, {"name": "227", "dtype": "float32"}, {"name": "228", "dtype": "float32"}, {"name": "229", "dtype": "float32"}, {"name": "230", "dtype": "float32"}, {"name": "231", "dtype": "float32"}, {"name": "232", "dtype": "float32"}, {"name": "233", "dtype": "float32"}, {"name": "234", "dtype": "float32"}, {"name": "235", "dtype": "float32"}, {"name": "236", "dtype": "float32"}, {"name": "237", "dtype": "float32"}, {"name": "238", "dtype": "float32"}, {"name": "239", "dtype": "float32"}, {"name": "240", "dtype": "float32"}, {"name": "241", "dtype": "float32"}, {"name": "242", "dtype": "float32"}, {"name": "243", "dtype": "float32"}, {"name": "244", "dtype": "float32"}, {"name": "245", "dtype": "float32"}, {"name": "246", "dtype": "float32"}, {"name": "247", "dtype": "float32"}, {"name": "248", "dtype": "float32"}, {"name": "249", "dtype": "float32"}, {"name": "250", "dtype": "float32"}, {"name": "251", "dtype": "float32"}, {"name": "252", "dtype": "float32"}, {"name": "253", "dtype": "float32"}, {"name": "254", "dtype": "float32"}, {"name": "255", "dtype": "float32"}, {"name": "256", "dtype": "float32"}, {"name": "257", "dtype": "float32"}, {"name": "258", "dtype": "float32"}, {"name": "259", "dtype": "float32"}, {"name": "260", "dtype": "float32"}, {"name": "261", "dtype": "float32"}, {"name": "262", "dtype": "float32"}, {"name": "263", "dtype": "float32"}, {"name": "264", "dtype": "float32"}, {"name": "265", "dtype": "float32"}, {"name": "266", "dtype": "float32"}, {"name": "267", "dtype": "float32"}, {"name": "268", "dtype": "float32"}, {"name": "269", "dtype": "float32"}, {"name": "270", "dtype": "float32"}, {"name": "271", "dtype": "float32"}, {"name": "272", "dtype": "float32"}, {"name": "273", "dtype": "float32"}, {"name": "274", "dtype": "float32"}, {"name": "275", "dtype": "float32"}, {"name": "276", "dtype": "float32"}, {"name": "277", "dtype": "float32"}, {"name": "278", "dtype": "float32"}, {"name": "279", "dtype": "float32"}, {"name": "280", "dtype": "float32"}, {"name": "281", "dtype": "float32"}, {"name": "282", "dtype": "float32"}, {"name": "283", "dtype": "float32"}, {"name": "284", "dtype": "float32"}, {"name": "285", "dtype": "float32"}, {"name": 
"286", "dtype": "float32"}, {"name": "287", "dtype": "float32"}, {"name": "288", "dtype": "float32"}, {"name": "289", "dtype": "float32"}, {"name": "290", "dtype": "float32"}, {"name": "291", "dtype": "float32"}, {"name": "292", "dtype": "float32"}, {"name": "293", "dtype": "float32"}, {"name": "294", "dtype": "float32"}, {"name": "295", "dtype": "float32"}, {"name": "296", "dtype": "float32"}, {"name": "297", "dtype": "float32"}, {"name": "298", "dtype": "float32"}, {"name": "299", "dtype": "float32"}, {"name": "300", "dtype": "float32"}, {"name": "301", "dtype": "float32"}, {"name": "302", "dtype": "float32"}, {"name": "303", "dtype": "float32"}, {"name": "304", "dtype": "float32"}, {"name": "305", "dtype": "float32"}, {"name": "306", "dtype": "float32"}, {"name": "307", "dtype": "float32"}, {"name": "308", "dtype": "float32"}, {"name": "309", "dtype": "float32"}, {"name": "310", "dtype": "float32"}, {"name": "311", "dtype": "float32"}, {"name": "312", "dtype": "float32"}, {"name": "313", "dtype": "float32"}, {"name": "314", "dtype": "float32"}, {"name": "315", "dtype": "float32"}, {"name": "316", "dtype": "float32"}, {"name": "317", "dtype": "float32"}, {"name": "318", "dtype": "float32"}, {"name": "319", "dtype": "float32"}, {"name": "320", "dtype": "float32"}, {"name": "321", "dtype": "float32"}, {"name": "322", "dtype": "float32"}, {"name": "323", "dtype": "float32"}, {"name": "324", "dtype": "float32"}, {"name": "325", "dtype": "float32"}, {"name": "326", "dtype": "float32"}, {"name": "327", "dtype": "float32"}, {"name": "328", "dtype": "float32"}, {"name": "329", "dtype": "float32"}, {"name": "330", "dtype": "float32"}, {"name": "331", "dtype": "float32"}, {"name": "332", "dtype": "float32"}, {"name": "333", "dtype": "float32"}, {"name": "334", "dtype": "float32"}, {"name": "335", "dtype": "float32"}, {"name": "336", "dtype": "float32"}, {"name": "337", "dtype": "float32"}, {"name": "338", "dtype": "float32"}, {"name": "339", "dtype": "float32"}, {"name": "340", "dtype": "float32"}, {"name": "341", "dtype": "float32"}, {"name": "342", "dtype": "float32"}, {"name": "343", "dtype": "float32"}, {"name": "344", "dtype": "float32"}, {"name": "345", "dtype": "float32"}, {"name": "346", "dtype": "float32"}, {"name": "347", "dtype": "float32"}, {"name": "348", "dtype": "float32"}, {"name": "349", "dtype": "float32"}, {"name": "350", "dtype": "float32"}, {"name": "351", "dtype": "float32"}, {"name": "352", "dtype": "float32"}, {"name": "353", "dtype": "float32"}, {"name": "354", "dtype": "float32"}, {"name": "355", "dtype": "float32"}, {"name": "356", "dtype": "float32"}, {"name": "357", "dtype": "float32"}, {"name": "358", "dtype": "float32"}, {"name": "359", "dtype": "float32"}, {"name": "360", "dtype": "float32"}, {"name": "361", "dtype": "float32"}, {"name": "362", "dtype": "float32"}, {"name": "363", "dtype": "float32"}, {"name": "364", "dtype": "float32"}, {"name": "365", "dtype": "float32"}, {"name": "366", "dtype": "float32"}, {"name": "367", "dtype": "float32"}, {"name": "368", "dtype": "float32"}, {"name": "369", "dtype": "float32"}, {"name": "370", "dtype": "float32"}, {"name": "371", "dtype": "float32"}, {"name": "372", "dtype": "float32"}, {"name": "373", "dtype": "float32"}, {"name": "374", "dtype": "float32"}, {"name": "375", "dtype": "float32"}, {"name": "376", "dtype": "float32"}, {"name": "377", "dtype": "float32"}, {"name": "378", "dtype": "float32"}, {"name": "379", "dtype": "float32"}, {"name": "380", "dtype": "float32"}, {"name": "381", "dtype": "float32"}, {"name": 
"382", "dtype": "float32"}, {"name": "383", "dtype": "float32"}, {"name": "384", "dtype": "float32"}, {"name": "385", "dtype": "float32"}, {"name": "386", "dtype": "float32"}, {"name": "387", "dtype": "float32"}, {"name": "388", "dtype": "float32"}, {"name": "389", "dtype": "float32"}, {"name": "390", "dtype": "float32"}, {"name": "391", "dtype": "float32"}, {"name": "392", "dtype": "float32"}, {"name": "393", "dtype": "float32"}, {"name": "394", "dtype": "float32"}, {"name": "395", "dtype": "float32"}, {"name": "396", "dtype": "float32"}, {"name": "397", "dtype": "float32"}, {"name": "398", "dtype": "float32"}, {"name": "399", "dtype": "float32"}, {"name": "400", "dtype": "float32"}, {"name": "401", "dtype": "float32"}, {"name": "402", "dtype": "float32"}, {"name": "403", "dtype": "float32"}, {"name": "404", "dtype": "float32"}, {"name": "405", "dtype": "float32"}, {"name": "406", "dtype": "float32"}, {"name": "407", "dtype": "float32"}, {"name": "408", "dtype": "float32"}, {"name": "409", "dtype": "float32"}, {"name": "410", "dtype": "float32"}, {"name": "411", "dtype": "float32"}, {"name": "412", "dtype": "float32"}, {"name": "413", "dtype": "float32"}, {"name": "414", "dtype": "float32"}, {"name": "415", "dtype": "float32"}, {"name": "416", "dtype": "float32"}, {"name": "417", "dtype": "float32"}, {"name": "418", "dtype": "float32"}, {"name": "419", "dtype": "float32"}, {"name": "420", "dtype": "float32"}, {"name": "421", "dtype": "float32"}, {"name": "422", "dtype": "float32"}, {"name": "423", "dtype": "float32"}, {"name": "424", "dtype": "float32"}, {"name": "425", "dtype": "float32"}, {"name": "426", "dtype": "float32"}, {"name": "427", "dtype": "float32"}, {"name": "428", "dtype": "float32"}, {"name": "429", "dtype": "float32"}, {"name": "430", "dtype": "float32"}, {"name": "431", "dtype": "float32"}, {"name": "432", "dtype": "float32"}, {"name": "433", "dtype": "float32"}, {"name": "434", "dtype": "float32"}, {"name": "435", "dtype": "float32"}, {"name": "436", "dtype": "float32"}, {"name": "437", "dtype": "float32"}, {"name": "438", "dtype": "float32"}, {"name": "439", "dtype": "float32"}, {"name": "440", "dtype": "float32"}, {"name": "441", "dtype": "float32"}, {"name": "442", "dtype": "float32"}, {"name": "443", "dtype": "float32"}, {"name": "444", "dtype": "float32"}, {"name": "445", "dtype": "float32"}, {"name": "446", "dtype": "float32"}, {"name": "447", "dtype": "float32"}, {"name": "448", "dtype": "float32"}, {"name": "449", "dtype": "float32"}, {"name": "450", "dtype": "float32"}, {"name": "451", "dtype": "float32"}, {"name": "452", "dtype": "float32"}, {"name": "453", "dtype": "float32"}, {"name": "454", "dtype": "float32"}, {"name": "455", "dtype": "float32"}, {"name": "456", "dtype": "float32"}, {"name": "457", "dtype": "float32"}, {"name": "458", "dtype": "float32"}, {"name": "459", "dtype": "float32"}, {"name": "460", "dtype": "float32"}, {"name": "461", "dtype": "float32"}, {"name": "462", "dtype": "float32"}, {"name": "463", "dtype": "float32"}, {"name": "464", "dtype": "float32"}, {"name": "465", "dtype": "float32"}, {"name": "466", "dtype": "float32"}, {"name": "467", "dtype": "float32"}, {"name": "468", "dtype": "float32"}, {"name": "469", "dtype": "float32"}, {"name": "470", "dtype": "float32"}, {"name": "471", "dtype": "float32"}, {"name": "472", "dtype": "float32"}, {"name": "473", "dtype": "float32"}, {"name": "474", "dtype": "float32"}, {"name": "475", "dtype": "float32"}, {"name": "476", "dtype": "float32"}, {"name": "477", "dtype": "float32"}, {"name": 
"478", "dtype": "float32"}, {"name": "479", "dtype": "float32"}, {"name": "480", "dtype": "float32"}, {"name": "481", "dtype": "float32"}, {"name": "482", "dtype": "float32"}, {"name": "483", "dtype": "float32"}, {"name": "484", "dtype": "float32"}, {"name": "485", "dtype": "float32"}, {"name": "486", "dtype": "float32"}, {"name": "487", "dtype": "float32"}, {"name": "488", "dtype": "float32"}, {"name": "489", "dtype": "float32"}, {"name": "490", "dtype": "float32"}, {"name": "491", "dtype": "float32"}, {"name": "492", "dtype": "float32"}, {"name": "493", "dtype": "float32"}, {"name": "494", "dtype": "float32"}, {"name": "495", "dtype": "float32"}, {"name": "496", "dtype": "float32"}, {"name": "497", "dtype": "float32"}, {"name": "498", "dtype": "float32"}, {"name": "499", "dtype": "float32"}, {"name": "500", "dtype": "float32"}, {"name": "501", "dtype": "float32"}, {"name": "502", "dtype": "float32"}, {"name": "503", "dtype": "float32"}, {"name": "504", "dtype": "float32"}, {"name": "505", "dtype": "float32"}, {"name": "506", "dtype": "float32"}, {"name": "507", "dtype": "float32"}, {"name": "508", "dtype": "float32"}, {"name": "509", "dtype": "float32"}, {"name": "510", "dtype": "float32"}, {"name": "511", "dtype": "float32"}, {"name": "512", "dtype": "float32"}, {"name": "513", "dtype": "float32"}, {"name": "514", "dtype": "float32"}, {"name": "515", "dtype": "float32"}, {"name": "516", "dtype": "float32"}, {"name": "517", "dtype": "float32"}, {"name": "518", "dtype": "float32"}, {"name": "519", "dtype": "float32"}, {"name": "520", "dtype": "float32"}, {"name": "521", "dtype": "float32"}, {"name": "522", "dtype": "float32"}, {"name": "523", "dtype": "float32"}, {"name": "524", "dtype": "float32"}, {"name": "525", "dtype": "float32"}, {"name": "526", "dtype": "float32"}, {"name": "527", "dtype": "float32"}, {"name": "528", "dtype": "float32"}, {"name": "529", "dtype": "float32"}, {"name": "530", "dtype": "float32"}, {"name": "531", "dtype": "float32"}, {"name": "532", "dtype": "float32"}, {"name": "533", "dtype": "float32"}, {"name": "534", "dtype": "float32"}, {"name": "535", "dtype": "float32"}, {"name": "536", "dtype": "float32"}, {"name": "537", "dtype": "float32"}, {"name": "538", "dtype": "float32"}, {"name": "539", "dtype": "float32"}, {"name": "540", "dtype": "float32"}, {"name": "541", "dtype": "float32"}, {"name": "542", "dtype": "float32"}, {"name": "543", "dtype": "float32"}, {"name": "544", "dtype": "float32"}, {"name": "545", "dtype": "float32"}, {"name": "546", "dtype": "float32"}, {"name": "547", "dtype": "float32"}, {"name": "548", "dtype": "float32"}, {"name": "549", "dtype": "float32"}, {"name": "550", "dtype": "float32"}, {"name": "551", "dtype": "float32"}, {"name": "552", "dtype": "float32"}, {"name": "553", "dtype": "float32"}, {"name": "554", "dtype": "float32"}, {"name": "555", "dtype": "float32"}, {"name": "556", "dtype": "float32"}, {"name": "557", "dtype": "float32"}, {"name": "558", "dtype": "float32"}, {"name": "559", "dtype": "float32"}, {"name": "560", "dtype": "float32"}, {"name": "561", "dtype": "float32"}, {"name": "562", "dtype": "float32"}, {"name": "563", "dtype": "float32"}, {"name": "564", "dtype": "float32"}, {"name": "565", "dtype": "float32"}, {"name": "566", "dtype": "float32"}, {"name": "567", "dtype": "float32"}, {"name": "568", "dtype": "float32"}, {"name": "569", "dtype": "float32"}, {"name": "570", "dtype": "float32"}, {"name": "571", "dtype": "float32"}, {"name": "572", "dtype": "float32"}, {"name": "573", "dtype": "float32"}, {"name": 
"574", "dtype": "float32"}, {"name": "575", "dtype": "float32"}, {"name": "576", "dtype": "float32"}, {"name": "577", "dtype": "float32"}, {"name": "578", "dtype": "float32"}, {"name": "579", "dtype": "float32"}, {"name": "580", "dtype": "float32"}, {"name": "581", "dtype": "float32"}, {"name": "582", "dtype": "float32"}, {"name": "583", "dtype": "float32"}, {"name": "584", "dtype": "float32"}, {"name": "585", "dtype": "float32"}, {"name": "586", "dtype": "float32"}, {"name": "587", "dtype": "float32"}, {"name": "588", "dtype": "float32"}, {"name": "589", "dtype": "float32"}, {"name": "590", "dtype": "float32"}, {"name": "591", "dtype": "float32"}, {"name": "592", "dtype": "float32"}, {"name": "593", "dtype": "float32"}, {"name": "594", "dtype": "float32"}, {"name": "595", "dtype": "float32"}, {"name": "596", "dtype": "float32"}, {"name": "597", "dtype": "float32"}, {"name": "598", "dtype": "float32"}, {"name": "599", "dtype": "float32"}, {"name": "600", "dtype": "float32"}, {"name": "601", "dtype": "float32"}, {"name": "602", "dtype": "float32"}, {"name": "603", "dtype": "float32"}, {"name": "604", "dtype": "float32"}, {"name": "605", "dtype": "float32"}, {"name": "606", "dtype": "float32"}, {"name": "607", "dtype": "float32"}, {"name": "608", "dtype": "float32"}, {"name": "609", "dtype": "float32"}, {"name": "610", "dtype": "float32"}, {"name": "611", "dtype": "float32"}, {"name": "612", "dtype": "float32"}, {"name": "613", "dtype": "float32"}, {"name": "614", "dtype": "float32"}, {"name": "615", "dtype": "float32"}, {"name": "616", "dtype": "float32"}, {"name": "617", "dtype": "float32"}, {"name": "618", "dtype": "float32"}, {"name": "619", "dtype": "float32"}, {"name": "620", "dtype": "float32"}, {"name": "621", "dtype": "float32"}, {"name": "622", "dtype": "float32"}, {"name": "623", "dtype": "float32"}, {"name": "624", "dtype": "float32"}, {"name": "625", "dtype": "float32"}, {"name": "626", "dtype": "float32"}, {"name": "627", "dtype": "float32"}, {"name": "628", "dtype": "float32"}, {"name": "629", "dtype": "float32"}, {"name": "630", "dtype": "float32"}, {"name": "631", "dtype": "float32"}, {"name": "632", "dtype": "float32"}, {"name": "633", "dtype": "float32"}, {"name": "634", "dtype": "float32"}, {"name": "635", "dtype": "float32"}, {"name": "636", "dtype": "float32"}, {"name": "637", "dtype": "float32"}, {"name": "638", "dtype": "float32"}, {"name": "639", "dtype": "float32"}, {"name": "640", "dtype": "float32"}, {"name": "641", "dtype": "float32"}, {"name": "642", "dtype": "float32"}, {"name": "643", "dtype": "float32"}, {"name": "644", "dtype": "float32"}, {"name": "645", "dtype": "float32"}, {"name": "646", "dtype": "float32"}, {"name": "647", "dtype": "float32"}, {"name": "648", "dtype": "float32"}, {"name": "649", "dtype": "float32"}, {"name": "650", "dtype": "float32"}, {"name": "651", "dtype": "float32"}, {"name": "652", "dtype": "float32"}, {"name": "653", "dtype": "float32"}, {"name": "654", "dtype": "float32"}, {"name": "655", "dtype": "float32"}, {"name": "656", "dtype": "float32"}, {"name": "657", "dtype": "float32"}, {"name": "658", "dtype": "float32"}, {"name": "659", "dtype": "float32"}, {"name": "660", "dtype": "float32"}, {"name": "661", "dtype": "float32"}, {"name": "662", "dtype": "float32"}, {"name": "663", "dtype": "float32"}, {"name": "664", "dtype": "float32"}, {"name": "665", "dtype": "float32"}, {"name": "666", "dtype": "float32"}, {"name": "667", "dtype": "float32"}, {"name": "668", "dtype": "float32"}, {"name": "669", "dtype": "float32"}, {"name": 
"670", "dtype": "float32"}, {"name": "671", "dtype": "float32"}, {"name": "672", "dtype": "float32"}, {"name": "673", "dtype": "float32"}, {"name": "674", "dtype": "float32"}, {"name": "675", "dtype": "float32"}, {"name": "676", "dtype": "float32"}, {"name": "677", "dtype": "float32"}, {"name": "678", "dtype": "float32"}, {"name": "679", "dtype": "float32"}, {"name": "680", "dtype": "float32"}, {"name": "681", "dtype": "float32"}, {"name": "682", "dtype": "float32"}, {"name": "683", "dtype": "float32"}, {"name": "684", "dtype": "float32"}, {"name": "685", "dtype": "float32"}, {"name": "686", "dtype": "float32"}, {"name": "687", "dtype": "float32"}, {"name": "688", "dtype": "float32"}, {"name": "689", "dtype": "float32"}, {"name": "690", "dtype": "float32"}, {"name": "691", "dtype": "float32"}, {"name": "692", "dtype": "float32"}, {"name": "693", "dtype": "float32"}, {"name": "694", "dtype": "float32"}, {"name": "695", "dtype": "float32"}, {"name": "696", "dtype": "float32"}, {"name": "697", "dtype": "float32"}, {"name": "698", "dtype": "float32"}, {"name": "699", "dtype": "float32"}, {"name": "700", "dtype": "float32"}, {"name": "701", "dtype": "float32"}, {"name": "702", "dtype": "float32"}, {"name": "703", "dtype": "float32"}, {"name": "704", "dtype": "float32"}, {"name": "705", "dtype": "float32"}, {"name": "706", "dtype": "float32"}, {"name": "707", "dtype": "float32"}, {"name": "708", "dtype": "float32"}, {"name": "709", "dtype": "float32"}, {"name": "710", "dtype": "float32"}, {"name": "711", "dtype": "float32"}, {"name": "712", "dtype": "float32"}, {"name": "713", "dtype": "float32"}, {"name": "714", "dtype": "float32"}, {"name": "715", "dtype": "float32"}, {"name": "716", "dtype": "float32"}, {"name": "717", "dtype": "float32"}, {"name": "718", "dtype": "float32"}, {"name": "719", "dtype": "float32"}, {"name": "720", "dtype": "float32"}, {"name": "721", "dtype": "float32"}, {"name": "722", "dtype": "float32"}, {"name": "723", "dtype": "float32"}, {"name": "724", "dtype": "float32"}, {"name": "725", "dtype": "float32"}, {"name": "726", "dtype": "float32"}, {"name": "727", "dtype": "float32"}, {"name": "728", "dtype": "float32"}, {"name": "729", "dtype": "float32"}, {"name": "730", "dtype": "float32"}, {"name": "731", "dtype": "float32"}, {"name": "732", "dtype": "float32"}, {"name": "733", "dtype": "float32"}, {"name": "734", "dtype": "float32"}, {"name": "735", "dtype": "float32"}, {"name": "736", "dtype": "float32"}, {"name": "737", "dtype": "float32"}, {"name": "738", "dtype": "float32"}, {"name": "739", "dtype": "float32"}, {"name": "740", "dtype": "float32"}, {"name": "741", "dtype": "float32"}, {"name": "742", "dtype": "float32"}, {"name": "743", "dtype": "float32"}, {"name": "744", "dtype": "float32"}, {"name": "745", "dtype": "float32"}, {"name": "746", "dtype": "float32"}, {"name": "747", "dtype": "float32"}, {"name": "748", "dtype": "float32"}, {"name": "749", "dtype": "float32"}, {"name": "750", "dtype": "float32"}, {"name": "751", "dtype": "float32"}, {"name": "752", "dtype": "float32"}, {"name": "753", "dtype": "float32"}, {"name": "754", "dtype": "float32"}, {"name": "755", "dtype": "float32"}, {"name": "756", "dtype": "float32"}, {"name": "757", "dtype": "float32"}, {"name": "758", "dtype": "float32"}, {"name": "759", "dtype": "float32"}, {"name": "760", "dtype": "float32"}, {"name": "761", "dtype": "float32"}, {"name": "762", "dtype": "float32"}, {"name": "763", "dtype": "float32"}, {"name": "764", "dtype": "float32"}, {"name": "765", "dtype": "float32"}, {"name": 
"766", "dtype": "float32"}, {"name": "767", "dtype": "float32"}, {"name": "768", "dtype": "float32"}, {"name": "769", "dtype": "float32"}, {"name": "770", "dtype": "float32"}, {"name": "771", "dtype": "float32"}, {"name": "772", "dtype": "float32"}, {"name": "773", "dtype": "float32"}, {"name": "774", "dtype": "float32"}, {"name": "775", "dtype": "float32"}, {"name": "776", "dtype": "float32"}, {"name": "777", "dtype": "float32"}, {"name": "778", "dtype": "float32"}, {"name": "779", "dtype": "float32"}, {"name": "780", "dtype": "float32"}, {"name": "781", "dtype": "float32"}, {"name": "782", "dtype": "float32"}, {"name": "783", "dtype": "float32"}, {"name": "784", "dtype": "float32"}, {"name": "785", "dtype": "float32"}, {"name": "786", "dtype": "float32"}, {"name": "787", "dtype": "float32"}, {"name": "788", "dtype": "float32"}, {"name": "789", "dtype": "float32"}, {"name": "790", "dtype": "float32"}, {"name": "791", "dtype": "float32"}, {"name": "792", "dtype": "float32"}, {"name": "793", "dtype": "float32"}, {"name": "794", "dtype": "float32"}, {"name": "795", "dtype": "float32"}, {"name": "796", "dtype": "float32"}, {"name": "797", "dtype": "float32"}, {"name": "798", "dtype": "float32"}, {"name": "799", "dtype": "float32"}, {"name": "800", "dtype": "float32"}, {"name": "801", "dtype": "float32"}, {"name": "802", "dtype": "float32"}, {"name": "803", "dtype": "float32"}, {"name": "804", "dtype": "float32"}, {"name": "805", "dtype": "float32"}, {"name": "806", "dtype": "float32"}, {"name": "807", "dtype": "float32"}, {"name": "808", "dtype": "float32"}, {"name": "809", "dtype": "float32"}, {"name": "810", "dtype": "float32"}, {"name": "811", "dtype": "float32"}, {"name": "812", "dtype": "float32"}, {"name": "813", "dtype": "float32"}, {"name": "814", "dtype": "float32"}, {"name": "815", "dtype": "float32"}, {"name": "816", "dtype": "float32"}, {"name": "817", "dtype": "float32"}, {"name": "818", "dtype": "float32"}, {"name": "819", "dtype": "float32"}, {"name": "820", "dtype": "float32"}, {"name": "821", "dtype": "float32"}, {"name": "822", "dtype": "float32"}, {"name": "823", "dtype": "float32"}, {"name": "824", "dtype": "float32"}, {"name": "825", "dtype": "float32"}, {"name": "826", "dtype": "float32"}, {"name": "827", "dtype": "float32"}, {"name": "828", "dtype": "float32"}, {"name": "829", "dtype": "float32"}, {"name": "830", "dtype": "float32"}, {"name": "831", "dtype": "float32"}, {"name": "832", "dtype": "float32"}, {"name": "833", "dtype": "float32"}, {"name": "834", "dtype": "float32"}, {"name": "835", "dtype": "float32"}, {"name": "836", "dtype": "float32"}, {"name": "837", "dtype": "float32"}, {"name": "838", "dtype": "float32"}, {"name": "839", "dtype": "float32"}, {"name": "840", "dtype": "float32"}, {"name": "841", "dtype": "float32"}, {"name": "842", "dtype": "float32"}, {"name": "843", "dtype": "float32"}, {"name": "844", "dtype": "float32"}, {"name": "845", "dtype": "float32"}, {"name": "846", "dtype": "float32"}, {"name": "847", "dtype": "float32"}, {"name": "848", "dtype": "float32"}, {"name": "849", "dtype": "float32"}, {"name": "850", "dtype": "float32"}, {"name": "851", "dtype": "float32"}, {"name": "852", "dtype": "float32"}, {"name": "853", "dtype": "float32"}, {"name": "854", "dtype": "float32"}, {"name": "855", "dtype": "float32"}, {"name": "856", "dtype": "float32"}, {"name": "857", "dtype": "float32"}, {"name": "858", "dtype": "float32"}, {"name": "859", "dtype": "float32"}, {"name": "860", "dtype": "float32"}, {"name": "861", "dtype": "float32"}, {"name": 
"862", "dtype": "float32"}, {"name": "863", "dtype": "float32"}, {"name": "864", "dtype": "float32"}, {"name": "865", "dtype": "float32"}, {"name": "866", "dtype": "float32"}, {"name": "867", "dtype": "float32"}, {"name": "868", "dtype": "float32"}, {"name": "869", "dtype": "float32"}, {"name": "870", "dtype": "float32"}, {"name": "871", "dtype": "float32"}, {"name": "872", "dtype": "float32"}, {"name": "873", "dtype": "float32"}, {"name": "874", "dtype": "float32"}, {"name": "875", "dtype": "float32"}, {"name": "876", "dtype": "float32"}, {"name": "877", "dtype": "float32"}, {"name": "878", "dtype": "float32"}, {"name": "879", "dtype": "float32"}, {"name": "880", "dtype": "float32"}, {"name": "881", "dtype": "float32"}, {"name": "882", "dtype": "float32"}, {"name": "883", "dtype": "float32"}, {"name": "884", "dtype": "float32"}, {"name": "885", "dtype": "float32"}, {"name": "886", "dtype": "float32"}, {"name": "887", "dtype": "float32"}, {"name": "888", "dtype": "float32"}, {"name": "889", "dtype": "float32"}, {"name": "890", "dtype": "float32"}, {"name": "891", "dtype": "float32"}, {"name": "892", "dtype": "float32"}, {"name": "893", "dtype": "float32"}, {"name": "894", "dtype": "float32"}, {"name": "895", "dtype": "float32"}, {"name": "896", "dtype": "float32"}, {"name": "897", "dtype": "float32"}, {"name": "898", "dtype": "float32"}, {"name": "899", "dtype": "float32"}, {"name": "900", "dtype": "float32"}, {"name": "901", "dtype": "float32"}, {"name": "902", "dtype": "float32"}, {"name": "903", "dtype": "float32"}, {"name": "904", "dtype": "float32"}, {"name": "905", "dtype": "float32"}, {"name": "906", "dtype": "float32"}, {"name": "907", "dtype": "float32"}, {"name": "908", "dtype": "float32"}, {"name": "909", "dtype": "float32"}, {"name": "910", "dtype": "float32"}, {"name": "911", "dtype": "float32"}, {"name": "912", "dtype": "float32"}, {"name": "913", "dtype": "float32"}, {"name": "914", "dtype": "float32"}, {"name": "915", "dtype": "float32"}, {"name": "916", "dtype": "float32"}, {"name": "917", "dtype": "float32"}, {"name": "918", "dtype": "float32"}, {"name": "919", "dtype": "float32"}, {"name": "920", "dtype": "float32"}, {"name": "921", "dtype": "float32"}, {"name": "922", "dtype": "float32"}, {"name": "923", "dtype": "float32"}, {"name": "924", "dtype": "float32"}, {"name": "925", "dtype": "float32"}, {"name": "926", "dtype": "float32"}, {"name": "927", "dtype": "float32"}, {"name": "928", "dtype": "float32"}, {"name": "929", "dtype": "float32"}, {"name": "930", "dtype": "float32"}, {"name": "931", "dtype": "float32"}, {"name": "932", "dtype": "float32"}, {"name": "933", "dtype": "float32"}, {"name": "934", "dtype": "float32"}, {"name": "935", "dtype": "float32"}, {"name": "936", "dtype": "float32"}, {"name": "937", "dtype": "float32"}, {"name": "938", "dtype": "float32"}, {"name": "939", "dtype": "float32"}, {"name": "940", "dtype": "float32"}, {"name": "941", "dtype": "float32"}, {"name": "942", "dtype": "float32"}, {"name": "943", "dtype": "float32"}, {"name": "944", "dtype": "float32"}, {"name": "945", "dtype": "float32"}, {"name": "946", "dtype": "float32"}, {"name": "947", "dtype": "float32"}, {"name": "948", "dtype": "float32"}, {"name": "949", "dtype": "float32"}, {"name": "950", "dtype": "float32"}, {"name": "951", "dtype": "float32"}, {"name": "952", "dtype": "float32"}, {"name": "953", "dtype": "float32"}, {"name": "954", "dtype": "float32"}, {"name": "955", "dtype": "float32"}, {"name": "956", "dtype": "float32"}, {"name": "957", "dtype": "float32"}, {"name": 
"958", "dtype": "float32"}, {"name": "959", "dtype": "float32"}, {"name": "960", "dtype": "float32"}, {"name": "961", "dtype": "float32"}, {"name": "962", "dtype": "float32"}, {"name": "963", "dtype": "float32"}, {"name": "964", "dtype": "float32"}, {"name": "965", "dtype": "float32"}, {"name": "966", "dtype": "float32"}, {"name": "967", "dtype": "float32"}, {"name": "968", "dtype": "float32"}, {"name": "969", "dtype": "float32"}, {"name": "970", "dtype": "float32"}, {"name": "971", "dtype": "float32"}, {"name": "972", "dtype": "float32"}, {"name": "973", "dtype": "float32"}, {"name": "974", "dtype": "float32"}, {"name": "975", "dtype": "float32"}, {"name": "976", "dtype": "float32"}, {"name": "977", "dtype": "float32"}, {"name": "978", "dtype": "float32"}, {"name": "979", "dtype": "float32"}, {"name": "980", "dtype": "float32"}, {"name": "981", "dtype": "float32"}, {"name": "982", "dtype": "float32"}, {"name": "983", "dtype": "float32"}, {"name": "984", "dtype": "float32"}, {"name": "985", "dtype": "float32"}, {"name": "986", "dtype": "float32"}, {"name": "987", "dtype": "float32"}, {"name": "988", "dtype": "float32"}, {"name": "989", "dtype": "float32"}, {"name": "990", "dtype": "float32"}, {"name": "991", "dtype": "float32"}, {"name": "992", "dtype": "float32"}, {"name": "993", "dtype": "float32"}, {"name": "994", "dtype": "float32"}, {"name": "995", "dtype": "float32"}, {"name": "996", "dtype": "float32"}, {"name": "997", "dtype": "float32"}, {"name": "998", "dtype": "float32"}, {"name": "999", "dtype": "float32"}, {"name": "1000", "dtype": "float32"}, {"name": "1001", "dtype": "float32"}, {"name": "1002", "dtype": "float32"}, {"name": "1003", "dtype": "float32"}, {"name": "1004", "dtype": "float32"}, {"name": "1005", "dtype": "float32"}, {"name": "1006", "dtype": "float32"}, {"name": "1007", "dtype": "float32"}, {"name": "1008", "dtype": "float32"}, {"name": "1009", "dtype": "float32"}, {"name": "1010", "dtype": "float32"}, {"name": "1011", "dtype": "float32"}, {"name": "1012", "dtype": "float32"}, {"name": "1013", "dtype": "float32"}, {"name": "1014", "dtype": "float32"}, {"name": "1015", "dtype": "float32"}, {"name": "1016", "dtype": "float32"}, {"name": "1017", "dtype": "float32"}, {"name": "1018", "dtype": "float32"}, {"name": "1019", "dtype": "float32"}, {"name": "1020", "dtype": "float32"}, {"name": "1021", "dtype": "float32"}, {"name": "1022", "dtype": "float32"}, {"name": "1023", "dtype": "float32"}, {"name": "1024", "dtype": "float32"}, {"name": "1025", "dtype": "float32"}, {"name": "1026", "dtype": "float32"}, {"name": "1027", "dtype": "float32"}, {"name": "1028", "dtype": "float32"}, {"name": "1029", "dtype": "float32"}, {"name": "1030", "dtype": "float32"}, {"name": "1031", "dtype": "float32"}, {"name": "1032", "dtype": "float32"}, {"name": "1033", "dtype": "float32"}, {"name": "1034", "dtype": "float32"}, {"name": "1035", "dtype": "float32"}, {"name": "1036", "dtype": "float32"}, {"name": "1037", "dtype": "float32"}, {"name": "1038", "dtype": "float32"}, {"name": "1039", "dtype": "float32"}, {"name": "1040", "dtype": "float32"}, {"name": "1041", "dtype": "float32"}, {"name": "1042", "dtype": "float32"}, {"name": "1043", "dtype": "float32"}, {"name": "1044", "dtype": "float32"}, {"name": "1045", "dtype": "float32"}, {"name": "1046", "dtype": "float32"}, {"name": "1047", "dtype": "float32"}, {"name": "1048", "dtype": "float32"}, {"name": "1049", "dtype": "float32"}, {"name": "1050", "dtype": "float32"}, {"name": "1051", "dtype": "float32"}, {"name": "1052", "dtype": 
"float32"}, {"name": "1053", "dtype": "float32"}, {"name": "1054", "dtype": "float32"}, {"name": "1055", "dtype": "float32"}, {"name": "1056", "dtype": "float32"}, {"name": "1057", "dtype": "float32"}, {"name": "1058", "dtype": "float32"}, {"name": "1059", "dtype": "float32"}, {"name": "1060", "dtype": "float32"}, {"name": "1061", "dtype": "float32"}, {"name": "1062", "dtype": "float32"}, {"name": "1063", "dtype": "float32"}, {"name": "1064", "dtype": "float32"}, {"name": "1065", "dtype": "float32"}, {"name": "1066", "dtype": "float32"}, {"name": "1067", "dtype": "float32"}, {"name": "1068", "dtype": "float32"}, {"name": "1069", "dtype": "float32"}, {"name": "1070", "dtype": "float32"}, {"name": "1071", "dtype": "float32"}, {"name": "1072", "dtype": "float32"}, {"name": "1073", "dtype": "float32"}, {"name": "1074", "dtype": "float32"}, {"name": "1075", "dtype": "float32"}, {"name": "1076", "dtype": "float32"}, {"name": "1077", "dtype": "float32"}, {"name": "1078", "dtype": "float32"}, {"name": "1079", "dtype": "float32"}, {"name": "1080", "dtype": "float32"}, {"name": "1081", "dtype": "float32"}, {"name": "1082", "dtype": "float32"}, {"name": "1083", "dtype": "float32"}, {"name": "1084", "dtype": "float32"}, {"name": "1085", "dtype": "float32"}, {"name": "1086", "dtype": "float32"}, {"name": "1087", "dtype": "float32"}, {"name": "1088", "dtype": "float32"}, {"name": "1089", "dtype": "float32"}, {"name": "1090", "dtype": "float32"}, {"name": "1091", "dtype": "float32"}, {"name": "1092", "dtype": "float32"}, {"name": "1093", "dtype": "float32"}, {"name": "1094", "dtype": "float32"}, {"name": "1095", "dtype": "float32"}, {"name": "1096", "dtype": "float32"}, {"name": "1097", "dtype": "float32"}, {"name": "1098", "dtype": "float32"}, {"name": "1099", "dtype": "float32"}, {"name": "1100", "dtype": "float32"}, {"name": "1101", "dtype": "float32"}, {"name": "1102", "dtype": "float32"}, {"name": "1103", "dtype": "float32"}, {"name": "1104", "dtype": "float32"}, {"name": "1105", "dtype": "float32"}, {"name": "1106", "dtype": "float32"}, {"name": "1107", "dtype": "float32"}, {"name": "1108", "dtype": "float32"}, {"name": "1109", "dtype": "float32"}, {"name": "1110", "dtype": "float32"}, {"name": "1111", "dtype": "float32"}, {"name": "1112", "dtype": "float32"}, {"name": "1113", "dtype": "float32"}, {"name": "1114", "dtype": "float32"}, {"name": "1115", "dtype": "float32"}, {"name": "1116", "dtype": "float32"}, {"name": "1117", "dtype": "float32"}, {"name": "1118", "dtype": "float32"}, {"name": "1119", "dtype": "float32"}, {"name": "1120", "dtype": "float32"}, {"name": "1121", "dtype": "float32"}, {"name": "1122", "dtype": "float32"}, {"name": "1123", "dtype": "float32"}, {"name": "1124", "dtype": "float32"}, {"name": "1125", "dtype": "float32"}, {"name": "1126", "dtype": "float32"}, {"name": "1127", "dtype": "float32"}, {"name": "1128", "dtype": "float32"}, {"name": "1129", "dtype": "float32"}, {"name": "1130", "dtype": "float32"}, {"name": "1131", "dtype": "float32"}, {"name": "1132", "dtype": "float32"}, {"name": "1133", "dtype": "float32"}, {"name": "1134", "dtype": "float32"}, {"name": "1135", "dtype": "float32"}, {"name": "1136", "dtype": "float32"}, {"name": "1137", "dtype": "float32"}, {"name": "1138", "dtype": "float32"}, {"name": "1139", "dtype": "float32"}, {"name": "1140", "dtype": "float32"}, {"name": "1141", "dtype": "float32"}, {"name": "1142", "dtype": "float32"}, {"name": "1143", "dtype": "float32"}, {"name": "1144", "dtype": "float32"}, {"name": "1145", "dtype": "float32"}, {"name": 
"1146", "dtype": "float32"}, {"name": "1147", "dtype": "float32"}, {"name": "1148", "dtype": "float32"}, {"name": "1149", "dtype": "float32"}, {"name": "1150", "dtype": "float32"}, {"name": "1151", "dtype": "float32"}, {"name": "1152", "dtype": "float32"}, {"name": "1153", "dtype": "float32"}, {"name": "1154", "dtype": "float32"}, {"name": "1155", "dtype": "float32"}, {"name": "1156", "dtype": "float32"}, {"name": "1157", "dtype": "float32"}, {"name": "1158", "dtype": "float32"}, {"name": "1159", "dtype": "float32"}, {"name": "1160", "dtype": "float32"}, {"name": "1161", "dtype": "float32"}, {"name": "1162", "dtype": "float32"}, {"name": "1163", "dtype": "float32"}, {"name": "1164", "dtype": "float32"}, {"name": "1165", "dtype": "float32"}, {"name": "1166", "dtype": "float32"}, {"name": "1167", "dtype": "float32"}, {"name": "1168", "dtype": "float32"}, {"name": "1169", "dtype": "float32"}, {"name": "1170", "dtype": "float32"}, {"name": "1171", "dtype": "float32"}, {"name": "1172", "dtype": "float32"}, {"name": "1173", "dtype": "float32"}, {"name": "1174", "dtype": "float32"}, {"name": "1175", "dtype": "float32"}, {"name": "1176", "dtype": "float32"}, {"name": "1177", "dtype": "float32"}, {"name": "1178", "dtype": "float32"}, {"name": "1179", "dtype": "float32"}, {"name": "1180", "dtype": "float32"}, {"name": "1181", "dtype": "float32"}, {"name": "1182", "dtype": "float32"}, {"name": "1183", "dtype": "float32"}, {"name": "1184", "dtype": "float32"}, {"name": "1185", "dtype": "float32"}, {"name": "1186", "dtype": "float32"}, {"name": "1187", "dtype": "float32"}, {"name": "1188", "dtype": "float32"}, {"name": "1189", "dtype": "float32"}, {"name": "1190", "dtype": "float32"}, {"name": "1191", "dtype": "float32"}, {"name": "1192", "dtype": "float32"}, {"name": "1193", "dtype": "float32"}, {"name": "1194", "dtype": "float32"}, {"name": "1195", "dtype": "float32"}, {"name": "1196", "dtype": "float32"}, {"name": "1197", "dtype": "float32"}, {"name": "1198", "dtype": "float32"}, {"name": "1199", "dtype": "float32"}, {"name": "1200", "dtype": "float32"}, {"name": "1201", "dtype": "float32"}, {"name": "1202", "dtype": "float32"}, {"name": "1203", "dtype": "float32"}, {"name": "1204", "dtype": "float32"}, {"name": "1205", "dtype": "float32"}, {"name": "1206", "dtype": "float32"}, {"name": "1207", "dtype": "float32"}, {"name": "1208", "dtype": "float32"}, {"name": "1209", "dtype": "float32"}, {"name": "1210", "dtype": "float32"}, {"name": "1211", "dtype": "float32"}, {"name": "1212", "dtype": "float32"}, {"name": "1213", "dtype": "float32"}, {"name": "1214", "dtype": "float32"}, {"name": "1215", "dtype": "float32"}, {"name": "1216", "dtype": "float32"}, {"name": "1217", "dtype": "float32"}, {"name": "1218", "dtype": "float32"}, {"name": "1219", "dtype": "float32"}, {"name": "1220", "dtype": "float32"}, {"name": "1221", "dtype": "float32"}, {"name": "1222", "dtype": "float32"}, {"name": "1223", "dtype": "float32"}, {"name": "1224", "dtype": "float32"}, {"name": "1225", "dtype": "float32"}, {"name": "1226", "dtype": "float32"}, {"name": "1227", "dtype": "float32"}, {"name": "1228", "dtype": "float32"}, {"name": "1229", "dtype": "float32"}, {"name": "1230", "dtype": "float32"}, {"name": "1231", "dtype": "float32"}, {"name": "1232", "dtype": "float32"}, {"name": "1233", "dtype": "float32"}, {"name": "1234", "dtype": "float32"}, {"name": "1235", "dtype": "float32"}, {"name": "1236", "dtype": "float32"}, {"name": "1237", "dtype": "float32"}, {"name": "1238", "dtype": "float32"}, {"name": "1239", "dtype": 
"float32"}, {"name": "1240", "dtype": "float32"}, {"name": "1241", "dtype": "float32"}, {"name": "1242", "dtype": "float32"}, {"name": "1243", "dtype": "float32"}, {"name": "1244", "dtype": "float32"}, {"name": "1245", "dtype": "float32"}, {"name": "1246", "dtype": "float32"}, {"name": "1247", "dtype": "float32"}, {"name": "1248", "dtype": "float32"}, {"name": "1249", "dtype": "float32"}, {"name": "1250", "dtype": "float32"}, {"name": "1251", "dtype": "float32"}, {"name": "1252", "dtype": "float32"}, {"name": "1253", "dtype": "float32"}, {"name": "1254", "dtype": "float32"}, {"name": "1255", "dtype": "float32"}, {"name": "1256", "dtype": "float32"}, {"name": "1257", "dtype": "float32"}, {"name": "1258", "dtype": "float32"}, {"name": "1259", "dtype": "float32"}, {"name": "1260", "dtype": "float32"}, {"name": "1261", "dtype": "float32"}, {"name": "1262", "dtype": "float32"}, {"name": "1263", "dtype": "float32"}, {"name": "1264", "dtype": "float32"}, {"name": "1265", "dtype": "float32"}, {"name": "1266", "dtype": "float32"}, {"name": "1267", "dtype": "float32"}, {"name": "1268", "dtype": "float32"}, {"name": "1269", "dtype": "float32"}, {"name": "1270", "dtype": "float32"}, {"name": "1271", "dtype": "float32"}, {"name": "1272", "dtype": "float32"}, {"name": "1273", "dtype": "float32"}, {"name": "1274", "dtype": "float32"}, {"name": "1275", "dtype": "float32"}, {"name": "1276", "dtype": "float32"}, {"name": "1277", "dtype": "float32"}, {"name": "1278", "dtype": "float32"}, {"name": "1279", "dtype": "float32"}, {"name": "1280", "dtype": "float32"}, {"name": "1281", "dtype": "float32"}, {"name": "1282", "dtype": "float32"}, {"name": "1283", "dtype": "float32"}, {"name": "1284", "dtype": "float32"}, {"name": "1285", "dtype": "float32"}, {"name": "1286", "dtype": "float32"}, {"name": "1287", "dtype": "float32"}, {"name": "1288", "dtype": "float32"}, {"name": "1289", "dtype": "float32"}, {"name": "1290", "dtype": "float32"}, {"name": "1291", "dtype": "float32"}, {"name": "1292", "dtype": "float32"}, {"name": "1293", "dtype": "float32"}, {"name": "1294", "dtype": "float32"}, {"name": "1295", "dtype": "float32"}, {"name": "1296", "dtype": "float32"}, {"name": "1297", "dtype": "float32"}, {"name": "1298", "dtype": "float32"}, {"name": "1299", "dtype": "float32"}, {"name": "1300", "dtype": "float32"}, {"name": "1301", "dtype": "float32"}, {"name": "1302", "dtype": "float32"}, {"name": "1303", "dtype": "float32"}, {"name": "1304", "dtype": "float32"}, {"name": "1305", "dtype": "float32"}, {"name": "1306", "dtype": "float32"}, {"name": "1307", "dtype": "float32"}, {"name": "1308", "dtype": "float32"}, {"name": "1309", "dtype": "float32"}, {"name": "1310", "dtype": "float32"}, {"name": "1311", "dtype": "float32"}, {"name": "1312", "dtype": "float32"}, {"name": "1313", "dtype": "float32"}, {"name": "1314", "dtype": "float32"}, {"name": "1315", "dtype": "float32"}, {"name": "1316", "dtype": "float32"}, {"name": "1317", "dtype": "float32"}, {"name": "1318", "dtype": "float32"}, {"name": "1319", "dtype": "float32"}, {"name": "1320", "dtype": "float32"}, {"name": "1321", "dtype": "float32"}, {"name": "1322", "dtype": "float32"}, {"name": "1323", "dtype": "float32"}, {"name": "1324", "dtype": "float32"}, {"name": "1325", "dtype": "float32"}, {"name": "1326", "dtype": "float32"}, {"name": "1327", "dtype": "float32"}, {"name": "1328", "dtype": "float32"}, {"name": "1329", "dtype": "float32"}, {"name": "1330", "dtype": "float32"}, {"name": "1331", "dtype": "float32"}, {"name": "1332", "dtype": "float32"}, {"name": 
"1333", "dtype": "float32"}, {"name": "1334", "dtype": "float32"}, {"name": "1335", "dtype": "float32"}, {"name": "1336", "dtype": "float32"}, {"name": "1337", "dtype": "float32"}, {"name": "1338", "dtype": "float32"}, {"name": "1339", "dtype": "float32"}, {"name": "1340", "dtype": "float32"}, {"name": "1341", "dtype": "float32"}, {"name": "1342", "dtype": "float32"}, {"name": "1343", "dtype": "float32"}, {"name": "1344", "dtype": "float32"}, {"name": "1345", "dtype": "float32"}, {"name": "1346", "dtype": "float32"}, {"name": "1347", "dtype": "float32"}, {"name": "1348", "dtype": "float32"}, {"name": "1349", "dtype": "float32"}, {"name": "1350", "dtype": "float32"}, {"name": "1351", "dtype": "float32"}, {"name": "1352", "dtype": "float32"}, {"name": "1353", "dtype": "float32"}, {"name": "1354", "dtype": "float32"}, {"name": "1355", "dtype": "float32"}, {"name": "1356", "dtype": "float32"}, {"name": "1357", "dtype": "float32"}, {"name": "1358", "dtype": "float32"}, {"name": "1359", "dtype": "float32"}, {"name": "1360", "dtype": "float32"}, {"name": "1361", "dtype": "float32"}, {"name": "1362", "dtype": "float32"}, {"name": "1363", "dtype": "float32"}, {"name": "1364", "dtype": "float32"}, {"name": "1365", "dtype": "float32"}, {"name": "1366", "dtype": "float32"}, {"name": "1367", "dtype": "float32"}, {"name": "1368", "dtype": "float32"}, {"name": "1369", "dtype": "float32"}, {"name": "1370", "dtype": "float32"}, {"name": "1371", "dtype": "float32"}, {"name": "1372", "dtype": "float32"}, {"name": "1373", "dtype": "float32"}, {"name": "1374", "dtype": "float32"}, {"name": "1375", "dtype": "float32"}, {"name": "1376", "dtype": "float32"}, {"name": "1377", "dtype": "float32"}, {"name": "1378", "dtype": "float32"}, {"name": "1379", "dtype": "float32"}, {"name": "1380", "dtype": "float32"}, {"name": "1381", "dtype": "float32"}, {"name": "1382", "dtype": "float32"}, {"name": "1383", "dtype": "float32"}, {"name": "1384", "dtype": "float32"}, {"name": "1385", "dtype": "float32"}, {"name": "1386", "dtype": "float32"}, {"name": "1387", "dtype": "float32"}, {"name": "1388", "dtype": "float32"}, {"name": "1389", "dtype": "float32"}, {"name": "1390", "dtype": "float32"}, {"name": "1391", "dtype": "float32"}, {"name": "1392", "dtype": "float32"}, {"name": "1393", "dtype": "float32"}, {"name": "1394", "dtype": "float32"}, {"name": "1395", "dtype": "float32"}, {"name": "1396", "dtype": "float32"}, {"name": "1397", "dtype": "float32"}, {"name": "1398", "dtype": "float32"}, {"name": "1399", "dtype": "float32"}, {"name": "1400", "dtype": "float32"}, {"name": "1401", "dtype": "float32"}, {"name": "1402", "dtype": "float32"}, {"name": "1403", "dtype": "float32"}, {"name": "1404", "dtype": "float32"}, {"name": "1405", "dtype": "float32"}, {"name": "1406", "dtype": "float32"}, {"name": "1407", "dtype": "float32"}, {"name": "1408", "dtype": "float32"}, {"name": "1409", "dtype": "float32"}, {"name": "1410", "dtype": "float32"}, {"name": "1411", "dtype": "float32"}, {"name": "1412", "dtype": "float32"}, {"name": "1413", "dtype": "float32"}, {"name": "1414", "dtype": "float32"}, {"name": "1415", "dtype": "float32"}, {"name": "1416", "dtype": "float32"}, {"name": "1417", "dtype": "float32"}, {"name": "1418", "dtype": "float32"}, {"name": "1419", "dtype": "float32"}, {"name": "1420", "dtype": "float32"}, {"name": "1421", "dtype": "float32"}, {"name": "1422", "dtype": "float32"}, {"name": "1423", "dtype": "float32"}, {"name": "1424", "dtype": "float32"}, {"name": "1425", "dtype": "float32"}, {"name": "1426", "dtype": 
"float32"}, {"name": "1427", "dtype": "float32"}, {"name": "1428", "dtype": "float32"}, {"name": "1429", "dtype": "float32"}, {"name": "1430", "dtype": "float32"}, {"name": "1431", "dtype": "float32"}, {"name": "1432", "dtype": "float32"}, {"name": "1433", "dtype": "float32"}, {"name": "1434", "dtype": "float32"}, {"name": "1435", "dtype": "float32"}, {"name": "1436", "dtype": "float32"}, {"name": "1437", "dtype": "float32"}, {"name": "1438", "dtype": "float32"}, {"name": "1439", "dtype": "float32"}, {"name": "1440", "dtype": "float32"}, {"name": "1441", "dtype": "float32"}, {"name": "1442", "dtype": "float32"}, {"name": "1443", "dtype": "float32"}, {"name": "1444", "dtype": "float32"}, {"name": "1445", "dtype": "float32"}, {"name": "1446", "dtype": "float32"}, {"name": "1447", "dtype": "float32"}, {"name": "1448", "dtype": "float32"}, {"name": "1449", "dtype": "float32"}, {"name": "1450", "dtype": "float32"}, {"name": "1451", "dtype": "float32"}, {"name": "1452", "dtype": "float32"}, {"name": "1453", "dtype": "float32"}, {"name": "1454", "dtype": "float32"}, {"name": "1455", "dtype": "float32"}, {"name": "1456", "dtype": "float32"}, {"name": "1457", "dtype": "float32"}, {"name": "1458", "dtype": "float32"}, {"name": "1459", "dtype": "float32"}, {"name": "1460", "dtype": "float32"}, {"name": "1461", "dtype": "float32"}, {"name": "1462", "dtype": "float32"}, {"name": "1463", "dtype": "float32"}, {"name": "1464", "dtype": "float32"}, {"name": "1465", "dtype": "float32"}, {"name": "1466", "dtype": "float32"}, {"name": "1467", "dtype": "float32"}, {"name": "1468", "dtype": "float32"}, {"name": "1469", "dtype": "float32"}, {"name": "1470", "dtype": "float32"}, {"name": "1471", "dtype": "float32"}, {"name": "1472", "dtype": "float32"}, {"name": "1473", "dtype": "float32"}, {"name": "1474", "dtype": "float32"}, {"name": "1475", "dtype": "float32"}, {"name": "1476", "dtype": "float32"}, {"name": "1477", "dtype": "float32"}, {"name": "1478", "dtype": "float32"}, {"name": "1479", "dtype": "float32"}, {"name": "1480", "dtype": "float32"}, {"name": "1481", "dtype": "float32"}, {"name": "1482", "dtype": "float32"}, {"name": "1483", "dtype": "float32"}, {"name": "1484", "dtype": "float32"}, {"name": "1485", "dtype": "float32"}, {"name": "1486", "dtype": "float32"}, {"name": "1487", "dtype": "float32"}, {"name": "1488", "dtype": "float32"}, {"name": "1489", "dtype": "float32"}, {"name": "1490", "dtype": "float32"}, {"name": "1491", "dtype": "float32"}, {"name": "1492", "dtype": "float32"}, {"name": "1493", "dtype": "float32"}, {"name": "1494", "dtype": "float32"}, {"name": "1495", "dtype": "float32"}, {"name": "1496", "dtype": "float32"}, {"name": "1497", "dtype": "float32"}, {"name": "1498", "dtype": "float32"}, {"name": "1499", "dtype": "float32"}, {"name": "1500", "dtype": "float32"}, {"name": "1501", "dtype": "float32"}, {"name": "1502", "dtype": "float32"}, {"name": "1503", "dtype": "float32"}, {"name": "1504", "dtype": "float32"}, {"name": "1505", "dtype": "float32"}, {"name": "1506", "dtype": "float32"}, {"name": "1507", "dtype": "float32"}, {"name": "1508", "dtype": "float32"}, {"name": "1509", "dtype": "float32"}, {"name": "1510", "dtype": "float32"}, {"name": "1511", "dtype": "float32"}, {"name": "1512", "dtype": "float32"}, {"name": "1513", "dtype": "float32"}, {"name": "1514", "dtype": "float32"}, {"name": "1515", "dtype": "float32"}, {"name": "1516", "dtype": "float32"}, {"name": "1517", "dtype": "float32"}, {"name": "1518", "dtype": "float32"}, {"name": "1519", "dtype": "float32"}, {"name": 
"1520", "dtype": "float32"}, {"name": "1521", "dtype": "float32"}, {"name": "1522", "dtype": "float32"}, {"name": "1523", "dtype": "float32"}, {"name": "1524", "dtype": "float32"}, {"name": "1525", "dtype": "float32"}, {"name": "1526", "dtype": "float32"}, {"name": "1527", "dtype": "float32"}, {"name": "1528", "dtype": "float32"}, {"name": "1529", "dtype": "float32"}, {"name": "1530", "dtype": "float32"}, {"name": "1531", "dtype": "float32"}, {"name": "1532", "dtype": "float32"}, {"name": "1533", "dtype": "float32"}, {"name": "1534", "dtype": "float32"}, {"name": "1535", "dtype": "float32"}, {"name": "1536", "dtype": "float32"}, {"name": "1537", "dtype": "float32"}, {"name": "1538", "dtype": "float32"}, {"name": "1539", "dtype": "float32"}, {"name": "1540", "dtype": "float32"}, {"name": "1541", "dtype": "float32"}, {"name": "1542", "dtype": "float32"}, {"name": "1543", "dtype": "float32"}, {"name": "1544", "dtype": "float32"}, {"name": "1545", "dtype": "float32"}, {"name": "1546", "dtype": "float32"}, {"name": "1547", "dtype": "float32"}, {"name": "1548", "dtype": "float32"}, {"name": "1549", "dtype": "float32"}, {"name": "1550", "dtype": "float32"}, {"name": "1551", "dtype": "float32"}, {"name": "1552", "dtype": "float32"}, {"name": "1553", "dtype": "float32"}, {"name": "1554", "dtype": "float32"}, {"name": "1555", "dtype": "float32"}, {"name": "1556", "dtype": "float32"}, {"name": "1557", "dtype": "float32"}, {"name": "1558", "dtype": "float32"}, {"name": "1559", "dtype": "float32"}, {"name": "1560", "dtype": "float32"}, {"name": "1561", "dtype": "float32"}, {"name": "1562", "dtype": "float32"}, {"name": "1563", "dtype": "float32"}, {"name": "1564", "dtype": "float32"}, {"name": "1565", "dtype": "float32"}, {"name": "1566", "dtype": "float32"}, {"name": "1567", "dtype": "float32"}, {"name": "1568", "dtype": "float32"}, {"name": "1569", "dtype": "float32"}, {"name": "1570", "dtype": "float32"}, {"name": "1571", "dtype": "float32"}, {"name": "1572", "dtype": "float32"}, {"name": "1573", "dtype": "float32"}, {"name": "1574", "dtype": "float32"}, {"name": "1575", "dtype": "float32"}, {"name": "1576", "dtype": "float32"}, {"name": "1577", "dtype": "float32"}, {"name": "1578", "dtype": "float32"}, {"name": "1579", "dtype": "float32"}, {"name": "1580", "dtype": "float32"}, {"name": "1581", "dtype": "float32"}, {"name": "1582", "dtype": "float32"}, {"name": "1583", "dtype": "float32"}, {"name": "1584", "dtype": "float32"}, {"name": "1585", "dtype": "float32"}, {"name": "1586", "dtype": "float32"}, {"name": "1587", "dtype": "float32"}, {"name": "1588", "dtype": "float32"}, {"name": "1589", "dtype": "float32"}, {"name": "1590", "dtype": "float32"}, {"name": "1591", "dtype": "float32"}, {"name": "1592", "dtype": "float32"}, {"name": "1593", "dtype": "float32"}, {"name": "1594", "dtype": "float32"}, {"name": "1595", "dtype": "float32"}, {"name": "1596", "dtype": "float32"}, {"name": "1597", "dtype": "float32"}, {"name": "1598", "dtype": "float32"}, {"name": "1599", "dtype": "float32"}, {"name": "1600", "dtype": "float32"}, {"name": "1601", "dtype": "float32"}, {"name": "1602", "dtype": "float32"}, {"name": "1603", "dtype": "float32"}, {"name": "1604", "dtype": "float32"}, {"name": "1605", "dtype": "float32"}, {"name": "1606", "dtype": "float32"}, {"name": "1607", "dtype": "float32"}, {"name": "1608", "dtype": "float32"}, {"name": "1609", "dtype": "float32"}, {"name": "1610", "dtype": "float32"}, {"name": "1611", "dtype": "float32"}, {"name": "1612", "dtype": "float32"}, {"name": "1613", "dtype": 
"float32"}, {"name": "1614", "dtype": "float32"}, {"name": "1615", "dtype": "float32"}, {"name": "1616", "dtype": "float32"}, {"name": "1617", "dtype": "float32"}, {"name": "1618", "dtype": "float32"}, {"name": "1619", "dtype": "float32"}, {"name": "1620", "dtype": "float32"}, {"name": "1621", "dtype": "float32"}, {"name": "1622", "dtype": "float32"}, {"name": "1623", "dtype": "float32"}, {"name": "1624", "dtype": "float32"}, {"name": "1625", "dtype": "float32"}, {"name": "1626", "dtype": "float32"}, {"name": "1627", "dtype": "float32"}, {"name": "1628", "dtype": "float32"}, {"name": "1629", "dtype": "float32"}, {"name": "1630", "dtype": "float32"}, {"name": "1631", "dtype": "float32"}, {"name": "1632", "dtype": "float32"}, {"name": "1633", "dtype": "float32"}, {"name": "1634", "dtype": "float32"}, {"name": "1635", "dtype": "float32"}, {"name": "1636", "dtype": "float32"}, {"name": "1637", "dtype": "float32"}, {"name": "1638", "dtype": "float32"}, {"name": "1639", "dtype": "float32"}, {"name": "1640", "dtype": "float32"}, {"name": "1641", "dtype": "float32"}, {"name": "1642", "dtype": "float32"}, {"name": "1643", "dtype": "float32"}, {"name": "1644", "dtype": "float32"}, {"name": "1645", "dtype": "float32"}, {"name": "1646", "dtype": "float32"}, {"name": "1647", "dtype": "float32"}, {"name": "1648", "dtype": "float32"}, {"name": "1649", "dtype": "float32"}, {"name": "1650", "dtype": "float32"}, {"name": "1651", "dtype": "float32"}, {"name": "1652", "dtype": "float32"}, {"name": "1653", "dtype": "float32"}, {"name": "1654", "dtype": "float32"}, {"name": "1655", "dtype": "float32"}, {"name": "1656", "dtype": "float32"}, {"name": "1657", "dtype": "float32"}, {"name": "1658", "dtype": "float32"}, {"name": "1659", "dtype": "float32"}, {"name": "1660", "dtype": "float32"}, {"name": "1661", "dtype": "float32"}, {"name": "1662", "dtype": "float32"}, {"name": "1663", "dtype": "float32"}, {"name": "1664", "dtype": "float32"}, {"name": "1665", "dtype": "float32"}, {"name": "1666", "dtype": "float32"}, {"name": "1667", "dtype": "float32"}, {"name": "1668", "dtype": "float32"}, {"name": "1669", "dtype": "float32"}, {"name": "1670", "dtype": "float32"}, {"name": "1671", "dtype": "float32"}, {"name": "1672", "dtype": "float32"}, {"name": "1673", "dtype": "float32"}, {"name": "1674", "dtype": "float32"}, {"name": "1675", "dtype": "float32"}, {"name": "1676", "dtype": "float32"}, {"name": "1677", "dtype": "float32"}, {"name": "1678", "dtype": "float32"}, {"name": "1679", "dtype": "float32"}, {"name": "1680", "dtype": "float32"}, {"name": "1681", "dtype": "float32"}, {"name": "1682", "dtype": "float32"}, {"name": "1683", "dtype": "float32"}, {"name": "1684", "dtype": "float32"}, {"name": "1685", "dtype": "float32"}, {"name": "1686", "dtype": "float32"}, {"name": "1687", "dtype": "float32"}, {"name": "1688", "dtype": "float32"}, {"name": "1689", "dtype": "float32"}, {"name": "1690", "dtype": "float32"}, {"name": "1691", "dtype": "float32"}, {"name": "1692", "dtype": "float32"}, {"name": "1693", "dtype": "float32"}, {"name": "1694", "dtype": "float32"}, {"name": "1695", "dtype": "float32"}, {"name": "1696", "dtype": "float32"}, {"name": "1697", "dtype": "float32"}, {"name": "1698", "dtype": "float32"}, {"name": "1699", "dtype": "float32"}, {"name": "1700", "dtype": "float32"}, {"name": "1701", "dtype": "float32"}, {"name": "1702", "dtype": "float32"}, {"name": "1703", "dtype": "float32"}, {"name": "1704", "dtype": "float32"}, {"name": "1705", "dtype": "float32"}, {"name": "1706", "dtype": "float32"}, {"name": 
"1707", "dtype": "float32"}, {"name": "1708", "dtype": "float32"}, {"name": "1709", "dtype": "float32"}, {"name": "1710", "dtype": "float32"}, {"name": "1711", "dtype": "float32"}, {"name": "1712", "dtype": "float32"}, {"name": "1713", "dtype": "float32"}, {"name": "1714", "dtype": "float32"}, {"name": "1715", "dtype": "float32"}, {"name": "1716", "dtype": "float32"}, {"name": "1717", "dtype": "float32"}, {"name": "1718", "dtype": "float32"}, {"name": "1719", "dtype": "float32"}, {"name": "1720", "dtype": "float32"}, {"name": "1721", "dtype": "float32"}, {"name": "1722", "dtype": "float32"}, {"name": "1723", "dtype": "float32"}, {"name": "1724", "dtype": "float32"}, {"name": "1725", "dtype": "float32"}, {"name": "1726", "dtype": "float32"}, {"name": "1727", "dtype": "float32"}, {"name": "1728", "dtype": "float32"}, {"name": "1729", "dtype": "float32"}, {"name": "1730", "dtype": "float32"}, {"name": "1731", "dtype": "float32"}, {"name": "1732", "dtype": "float32"}, {"name": "1733", "dtype": "float32"}, {"name": "1734", "dtype": "float32"}, {"name": "1735", "dtype": "float32"}, {"name": "1736", "dtype": "float32"}, {"name": "1737", "dtype": "float32"}, {"name": "1738", "dtype": "float32"}, {"name": "1739", "dtype": "float32"}, {"name": "1740", "dtype": "float32"}, {"name": "1741", "dtype": "float32"}, {"name": "1742", "dtype": "float32"}, {"name": "1743", "dtype": "float32"}, {"name": "1744", "dtype": "float32"}, {"name": "1745", "dtype": "float32"}, {"name": "1746", "dtype": "float32"}, {"name": "1747", "dtype": "float32"}, {"name": "1748", "dtype": "float32"}, {"name": "1749", "dtype": "float32"}, {"name": "1750", "dtype": "float32"}, {"name": "1751", "dtype": "float32"}, {"name": "1752", "dtype": "float32"}, {"name": "1753", "dtype": "float32"}, {"name": "1754", "dtype": "float32"}, {"name": "1755", "dtype": "float32"}, {"name": "1756", "dtype": "float32"}, {"name": "1757", "dtype": "float32"}, {"name": "1758", "dtype": "float32"}, {"name": "1759", "dtype": "float32"}, {"name": "1760", "dtype": "float32"}, {"name": "1761", "dtype": "float32"}, {"name": "1762", "dtype": "float32"}, {"name": "1763", "dtype": "float32"}, {"name": "1764", "dtype": "float32"}, {"name": "1765", "dtype": "float32"}, {"name": "1766", "dtype": "float32"}, {"name": "1767", "dtype": "float32"}, {"name": "1768", "dtype": "float32"}, {"name": "1769", "dtype": "float32"}, {"name": "1770", "dtype": "float32"}, {"name": "1771", "dtype": "float32"}, {"name": "1772", "dtype": "float32"}, {"name": "1773", "dtype": "float32"}, {"name": "1774", "dtype": "float32"}, {"name": "1775", "dtype": "float32"}, {"name": "1776", "dtype": "float32"}, {"name": "1777", "dtype": "float32"}, {"name": "1778", "dtype": "float32"}, {"name": "1779", "dtype": "float32"}, {"name": "1780", "dtype": "float32"}, {"name": "1781", "dtype": "float32"}, {"name": "1782", "dtype": "float32"}, {"name": "1783", "dtype": "float32"}, {"name": "1784", "dtype": "float32"}, {"name": "1785", "dtype": "float32"}, {"name": "1786", "dtype": "float32"}, {"name": "1787", "dtype": "float32"}, {"name": "1788", "dtype": "float32"}, {"name": "1789", "dtype": "float32"}, {"name": "1790", "dtype": "float32"}, {"name": "1791", "dtype": "float32"}, {"name": "1792", "dtype": "float32"}, {"name": "1793", "dtype": "float32"}, {"name": "1794", "dtype": "float32"}, {"name": "1795", "dtype": "float32"}, {"name": "1796", "dtype": "float32"}, {"name": "1797", "dtype": "float32"}, {"name": "1798", "dtype": "float32"}, {"name": "1799", "dtype": "float32"}, {"name": "1800", "dtype": 
"float32"}, {"name": "1801", "dtype": "float32"}, {"name": "1802", "dtype": "float32"}, {"name": "1803", "dtype": "float32"}, {"name": "1804", "dtype": "float32"}, {"name": "1805", "dtype": "float32"}, {"name": "1806", "dtype": "float32"}, {"name": "1807", "dtype": "float32"}, {"name": "1808", "dtype": "float32"}, {"name": "1809", "dtype": "float32"}, {"name": "1810", "dtype": "float32"}, {"name": "1811", "dtype": "float32"}, {"name": "1812", "dtype": "float32"}, {"name": "1813", "dtype": "float32"}, {"name": "1814", "dtype": "float32"}, {"name": "1815", "dtype": "float32"}, {"name": "1816", "dtype": "float32"}, {"name": "1817", "dtype": "float32"}, {"name": "1818", "dtype": "float32"}, {"name": "1819", "dtype": "float32"}, {"name": "1820", "dtype": "float32"}, {"name": "1821", "dtype": "float32"}, {"name": "1822", "dtype": "float32"}, {"name": "1823", "dtype": "float32"}, {"name": "1824", "dtype": "float32"}, {"name": "1825", "dtype": "float32"}, {"name": "1826", "dtype": "float32"}, {"name": "1827", "dtype": "float32"}, {"name": "1828", "dtype": "float32"}, {"name": "1829", "dtype": "float32"}, {"name": "1830", "dtype": "float32"}, {"name": "1831", "dtype": "float32"}, {"name": "1832", "dtype": "float32"}, {"name": "1833", "dtype": "float32"}, {"name": "1834", "dtype": "float32"}, {"name": "1835", "dtype": "float32"}, {"name": "1836", "dtype": "float32"}, {"name": "1837", "dtype": "float32"}, {"name": "1838", "dtype": "float32"}, {"name": "1839", "dtype": "float32"}, {"name": "1840", "dtype": "float32"}, {"name": "1841", "dtype": "float32"}, {"name": "1842", "dtype": "float32"}, {"name": "1843", "dtype": "float32"}, {"name": "1844", "dtype": "float32"}, {"name": "1845", "dtype": "float32"}, {"name": "1846", "dtype": "float32"}, {"name": "1847", "dtype": "float32"}, {"name": "1848", "dtype": "float32"}, {"name": "1849", "dtype": "float32"}, {"name": "1850", "dtype": "float32"}, {"name": "1851", "dtype": "float32"}, {"name": "1852", "dtype": "float32"}, {"name": "1853", "dtype": "float32"}, {"name": "1854", "dtype": "float32"}, {"name": "1855", "dtype": "float32"}, {"name": "1856", "dtype": "float32"}, {"name": "1857", "dtype": "float32"}, {"name": "1858", "dtype": "float32"}, {"name": "1859", "dtype": "float32"}, {"name": "1860", "dtype": "float32"}, {"name": "1861", "dtype": "float32"}, {"name": "1862", "dtype": "float32"}, {"name": "1863", "dtype": "float32"}, {"name": "1864", "dtype": "float32"}, {"name": "1865", "dtype": "float32"}, {"name": "1866", "dtype": "float32"}, {"name": "1867", "dtype": "float32"}, {"name": "1868", "dtype": "float32"}, {"name": "1869", "dtype": "float32"}, {"name": "1870", "dtype": "float32"}, {"name": "1871", "dtype": "float32"}, {"name": "1872", "dtype": "float32"}, {"name": "1873", "dtype": "float32"}, {"name": "1874", "dtype": "float32"}, {"name": "1875", "dtype": "float32"}, {"name": "1876", "dtype": "float32"}, {"name": "1877", "dtype": "float32"}, {"name": "1878", "dtype": "float32"}, {"name": "1879", "dtype": "float32"}, {"name": "1880", "dtype": "float32"}, {"name": "1881", "dtype": "float32"}, {"name": "1882", "dtype": "float32"}, {"name": "1883", "dtype": "float32"}, {"name": "1884", "dtype": "float32"}, {"name": "1885", "dtype": "float32"}, {"name": "1886", "dtype": "float32"}, {"name": "1887", "dtype": "float32"}, {"name": "1888", "dtype": "float32"}, {"name": "1889", "dtype": "float32"}, {"name": "1890", "dtype": "float32"}, {"name": "1891", "dtype": "float32"}, {"name": "1892", "dtype": "float32"}, {"name": "1893", "dtype": "float32"}, {"name": 
"1894", "dtype": "float32"}, {"name": "1895", "dtype": "float32"}, {"name": "1896", "dtype": "float32"}, {"name": "1897", "dtype": "float32"}, {"name": "1898", "dtype": "float32"}, {"name": "1899", "dtype": "float32"}, {"name": "1900", "dtype": "float32"}, {"name": "1901", "dtype": "float32"}, {"name": "1902", "dtype": "float32"}, {"name": "1903", "dtype": "float32"}, {"name": "1904", "dtype": "float32"}, {"name": "1905", "dtype": "float32"}, {"name": "1906", "dtype": "float32"}, {"name": "1907", "dtype": "float32"}, {"name": "1908", "dtype": "float32"}, {"name": "1909", "dtype": "float32"}, {"name": "1910", "dtype": "float32"}, {"name": "1911", "dtype": "float32"}, {"name": "1912", "dtype": "float32"}, {"name": "1913", "dtype": "float32"}, {"name": "1914", "dtype": "float32"}, {"name": "1915", "dtype": "float32"}, {"name": "1916", "dtype": "float32"}, {"name": "1917", "dtype": "float32"}, {"name": "1918", "dtype": "float32"}, {"name": "1919", "dtype": "float32"}, {"name": "1920", "dtype": "float32"}, {"name": "1921", "dtype": "float32"}, {"name": "1922", "dtype": "float32"}, {"name": "1923", "dtype": "float32"}, {"name": "1924", "dtype": "float32"}, {"name": "1925", "dtype": "float32"}, {"name": "1926", "dtype": "float32"}, {"name": "1927", "dtype": "float32"}, {"name": "1928", "dtype": "float32"}, {"name": "1929", "dtype": "float32"}, {"name": "1930", "dtype": "float32"}, {"name": "1931", "dtype": "float32"}, {"name": "1932", "dtype": "float32"}, {"name": "1933", "dtype": "float32"}, {"name": "1934", "dtype": "float32"}, {"name": "1935", "dtype": "float32"}, {"name": "1936", "dtype": "float32"}, {"name": "1937", "dtype": "float32"}, {"name": "1938", "dtype": "float32"}, {"name": "1939", "dtype": "float32"}, {"name": "1940", "dtype": "float32"}, {"name": "1941", "dtype": "float32"}, {"name": "1942", "dtype": "float32"}, {"name": "1943", "dtype": "float32"}, {"name": "1944", "dtype": "float32"}, {"name": "1945", "dtype": "float32"}, {"name": "1946", "dtype": "float32"}, {"name": "1947", "dtype": "float32"}, {"name": "1948", "dtype": "float32"}, {"name": "1949", "dtype": "float32"}, {"name": "1950", "dtype": "float32"}, {"name": "1951", "dtype": "float32"}, {"name": "1952", "dtype": "float32"}, {"name": "1953", "dtype": "float32"}, {"name": "1954", "dtype": "float32"}, {"name": "1955", "dtype": "float32"}, {"name": "1956", "dtype": "float32"}, {"name": "1957", "dtype": "float32"}, {"name": "1958", "dtype": "float32"}, {"name": "1959", "dtype": "float32"}, {"name": "1960", "dtype": "float32"}, {"name": "1961", "dtype": "float32"}, {"name": "1962", "dtype": "float32"}, {"name": "1963", "dtype": "float32"}, {"name": "1964", "dtype": "float32"}, {"name": "1965", "dtype": "float32"}, {"name": "1966", "dtype": "float32"}, {"name": "1967", "dtype": "float32"}, {"name": "1968", "dtype": "float32"}, {"name": "1969", "dtype": "float32"}, {"name": "1970", "dtype": "float32"}, {"name": "1971", "dtype": "float32"}, {"name": "1972", "dtype": "float32"}, {"name": "1973", "dtype": "float32"}, {"name": "1974", "dtype": "float32"}, {"name": "1975", "dtype": "float32"}, {"name": "1976", "dtype": "float32"}, {"name": "1977", "dtype": "float32"}, {"name": "1978", "dtype": "float32"}, {"name": "1979", "dtype": "float32"}, {"name": "1980", "dtype": "float32"}, {"name": "1981", "dtype": "float32"}, {"name": "1982", "dtype": "float32"}, {"name": "1983", "dtype": "float32"}, {"name": "1984", "dtype": "float32"}, {"name": "1985", "dtype": "float32"}, {"name": "1986", "dtype": "float32"}, {"name": "1987", "dtype": 
"float32"}, {"name": "1988", "dtype": "float32"}, {"name": "1989", "dtype": "float32"}, {"name": "1990", "dtype": "float32"}, {"name": "1991", "dtype": "float32"}, {"name": "1992", "dtype": "float32"}, {"name": "1993", "dtype": "float32"}, {"name": "1994", "dtype": "float32"}, {"name": "1995", "dtype": "float32"}, {"name": "1996", "dtype": "float32"}, {"name": "1997", "dtype": "float32"}, {"name": "1998", "dtype": "float32"}, {"name": "1999", "dtype": "float32"}, {"name": "2000", "dtype": "float32"}, {"name": "2001", "dtype": "float32"}, {"name": "2002", "dtype": "float32"}, {"name": "2003", "dtype": "float32"}, {"name": "2004", "dtype": "float32"}, {"name": "2005", "dtype": "float32"}, {"name": "2006", "dtype": "float32"}, {"name": "2007", "dtype": "float32"}, {"name": "2008", "dtype": "float32"}, {"name": "2009", "dtype": "float32"}, {"name": "2010", "dtype": "float32"}, {"name": "2011", "dtype": "float32"}, {"name": "2012", "dtype": "float32"}, {"name": "2013", "dtype": "float32"}, {"name": "2014", "dtype": "float32"}, {"name": "2015", "dtype": "float32"}, {"name": "2016", "dtype": "float32"}, {"name": "2017", "dtype": "float32"}, {"name": "2018", "dtype": "float32"}, {"name": "2019", "dtype": "float32"}, {"name": "2020", "dtype": "float32"}, {"name": "2021", "dtype": "float32"}, {"name": "2022", "dtype": "float32"}, {"name": "2023", "dtype": "float32"}, {"name": "2024", "dtype": "float32"}, {"name": "2025", "dtype": "float32"}, {"name": "2026", "dtype": "float32"}, {"name": "2027", "dtype": "float32"}, {"name": "2028", "dtype": "float32"}, {"name": "2029", "dtype": "float32"}, {"name": "2030", "dtype": "float32"}, {"name": "2031", "dtype": "float32"}, {"name": "2032", "dtype": "float32"}, {"name": "2033", "dtype": "float32"}, {"name": "2034", "dtype": "float32"}, {"name": "2035", "dtype": "float32"}, {"name": "2036", "dtype": "float32"}, {"name": "2037", "dtype": "float32"}, {"name": "2038", "dtype": "float32"}, {"name": "2039", "dtype": "float32"}, {"name": "2040", "dtype": "float32"}, {"name": "2041", "dtype": "float32"}, {"name": "2042", "dtype": "float32"}, {"name": "2043", "dtype": "float32"}, {"name": "2044", "dtype": "float32"}, {"name": "2045", "dtype": "float32"}, {"name": "2046", "dtype": "float32"}, {"name": "2047", "dtype": "float32"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 307621178.4375, "num_examples": 37500}, {"name": "test", "num_bytes": 102540392.5, "num_examples": 12500}], "download_size": 565382346, "dataset_size": 410161570.9375}}
|
2023-08-23T07:52:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CSIC_GPTNEO_Finetuned"
More Information needed
|
[
"# Dataset Card for \"CSIC_GPTNEO_Finetuned\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CSIC_GPTNEO_Finetuned\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CSIC_GPTNEO_Finetuned\"\n\nMore Information needed"
] |
e09bc3746f2019cfc00eb41f4bc38612981d2b6e
|
# Dataset of moriya_suwako/洩矢諏訪子/모리야스와코 (Touhou)
This is the dataset of moriya_suwako/洩矢諏訪子/모리야스와코 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hat, ribbon, hair_ribbon, short_hair, yellow_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 651.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/moriya_suwako_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 401.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/moriya_suwako_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1141 | 794.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/moriya_suwako_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 586.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/moriya_suwako_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1141 | 1.04 GiB | [Download](https://huggingface.co/datasets/CyberHarem/moriya_suwako_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
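If you only need one of the packaged archives above rather than the waifuc pipeline, a minimal sketch of fetching and walking the 800-pixel IMG+TXT package is given below. The archive filename comes from the download link above; pairing each image with a same-named `.txt` tag file is an assumption about the IMG+TXT layout, not something this card guarantees.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Download the 800-pixel IMG+TXT package listed in the table above.
zip_file = hf_hub_download(
    repo_id='CyberHarem/moriya_suwako_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# Extract it into a local directory.
extract_dir = 'dataset_800'
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(extract_dir)

# Walk the extracted files, pairing each image with a same-named .txt tag
# file (assumed IMG+TXT convention; adjust if the layout differs).
for name in sorted(os.listdir(extract_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in {'.png', '.jpg', '.jpeg', '.webp'}:
        txt_path = os.path.join(extract_dir, stem + '.txt')
        tags = ''
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                tags = f.read().strip()
        print(name, '->', tags)
```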
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/moriya_suwako_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters. A rough local reproduction is sketched after this paragraph.
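To reproduce a coarse version of this clustering locally, the sketch below counts tag frequencies over the extracted raw dataset. It assumes the raw archive was unpacked to `dataset_dir` as in the loading snippet above, and that `item.meta['tags']` holds the tag names for each image (its exact type is not documented here, so the code handles both a mapping and a plain list).

```python
from collections import Counter

from waifuc.source import LocalSource

# Tally how often each tag appears across the locally extracted raw dataset.
source = LocalSource('dataset_dir')
counter = Counter()
for item in source:
    tags = item.meta['tags']
    # tags is assumed to be either a mapping of tag -> score or a plain list.
    counter.update(tags.keys() if isinstance(tags, dict) else tags)

# The most frequent tags give a quick, coarse view of the clusters above.
for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```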
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blue_eyes, solo, white_thighhighs, smile, zettai_ryouiki, open_mouth |
| 1 | 10 |  |  |  |  |  | 1girl, solo, white_thighhighs, wide_sleeves, skirt, open_mouth, smile, zettai_ryouiki |
| 2 | 5 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, skirt, solo, white_thighhighs, wide_sleeves |
| 3 | 7 |  |  |  |  |  | 1girl, long_sleeves, solo, vest, wide_sleeves, looking_at_viewer, shirt, simple_background, skirt_set, smile, white_background, blush |
| 4 | 8 |  |  |  |  |  | 1girl, frog_print, long_sleeves, purple_skirt, purple_vest, red_ribbon, solo, white_shirt, wide_sleeves, brown_headwear, white_thighhighs, looking_at_viewer, open_mouth, parted_bangs, shoes, black_footwear, full_body, simple_background, turtleneck, white_background, medium_hair, :d, blush, long_hair, skirt_set |
| 5 | 9 |  |  |  |  |  | 1girl, bangs, long_sleeves, solo, turtleneck, upper_body, white_shirt, purple_vest, red_ribbon, simple_background, white_background, looking_at_viewer, sidelocks, brown_headwear, closed_mouth, smile, long_hair, wide_sleeves |
| 6 | 7 |  |  |  |  |  | 1girl, solo, looking_at_viewer, open_mouth, blush, upper_body, :d, heart, purple_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_eyes | solo | white_thighhighs | smile | zettai_ryouiki | open_mouth | wide_sleeves | skirt | long_sleeves | looking_at_viewer | vest | shirt | simple_background | skirt_set | white_background | blush | frog_print | purple_skirt | purple_vest | red_ribbon | white_shirt | brown_headwear | parted_bangs | shoes | black_footwear | full_body | turtleneck | medium_hair | :d | long_hair | bangs | upper_body | sidelocks | closed_mouth | heart | purple_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------|:-------------------|:--------|:-----------------|:-------------|:---------------|:--------|:---------------|:--------------------|:-------|:--------|:--------------------|:------------|:-------------------|:--------|:-------------|:---------------|:--------------|:-------------|:--------------|:-----------------|:---------------|:--------|:-----------------|:------------|:-------------|:--------------|:-----|:------------|:--------|:-------------|:------------|:---------------|:--------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | X | | | X | X | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | X | | X | | | X | | X | X | | | X | | X | | | | X | X | X | X | | | | | X | | | X | X | X | X | X | | |
| 6 | 7 |  |  |  |  |  | X | | X | | | | X | | | | X | | | | | | X | | | | | | | | | | | | | X | | | X | | | X | X |
|
CyberHarem/moriya_suwako_touhou
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-08-17T19:58:25+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-14T12:02:10+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of moriya\_suwako/洩矢諏訪子/모리야스와코 (Touhou)
===============================================
This is the dataset of moriya\_suwako/洩矢諏訪子/모리야스와코 (Touhou), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, hat, ribbon, hair\_ribbon, short\_hair, yellow\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
da5d4b487fb9d7e66f376194f36888fce2cf2485
|
# CVE To Metasploit Module Prompt
This dataset is a submodule of the overall project to create an LLM that can read newly published CVE write-ups and create Metasploit modules. The main repo for the project can be found [here](https://github.com/roostercoopllc/metAIsploit-assistant).
## Usage
*TO-DO*
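Until this section is written, a minimal sketch of loading the data with the `datasets` library is shown below. The column names come from the card metadata; the `train` split name is an assumption.

```python
from datasets import load_dataset

# Load the repository with the `datasets` library.  The columns (prompt,
# response, source, cve, script_type) are taken from the card metadata;
# the "train" split name is an assumption.
ds = load_dataset('icantiemyshoe/cve-to-metasploit-module', split='train')

# Peek at a few rows.
for row in ds.select(range(3)):
    print(row['cve'], '-', row['script_type'])
    print(row['prompt'][:200])
    print('---')
```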
## References
*TO-DO*
|
icantiemyshoe/cve-to-metasploit-module
|
[
"size_categories:1K<n<10K",
"language:en",
"license:bsd-2-clause",
"region:us"
] |
2023-08-17T19:59:08+00:00
|
{"language": ["en"], "license": "bsd-2-clause", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "cve", "dtype": "string"}, {"name": "script_type", "dtype": "string"}]}}
|
2023-08-27T21:27:41+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-1K<n<10K #language-English #license-bsd-2-clause #region-us
|
# CVE To Metasploit Module Prompt
This dataset is a submodule of the overall project to create an LLM that can read newly published CVE write-ups and create Metasploit modules. The main repo for the project can be found here.
## Usage
*TO-DO*
## References
*TO-DO*
|
[
"# CVE To Metasploit Module Prompt\n\nThis dataset is a submodule to the overall project to create an LLM that can look at newly published CVE writeups and create metasploit modules. The main repo for the project can be found here.",
"## Usage\n*TO-DO*",
"## References\n*TO-DO*"
] |
[
"TAGS\n#size_categories-1K<n<10K #language-English #license-bsd-2-clause #region-us \n",
"# CVE To Metasploit Module Prompt\n\nThis dataset is a submodule to the overall project to create an LLM that can look at newly published CVE writeups and create metasploit modules. The main repo for the project can be found here.",
"## Usage\n*TO-DO*",
"## References\n*TO-DO*"
] |
[
32,
60,
8,
8
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-English #license-bsd-2-clause #region-us \n# CVE To Metasploit Module Prompt\n\nThis dataset is a submodule to the overall project to create an LLM that can look at newly published CVE writeups and create metasploit modules. The main repo for the project can be found here.## Usage\n*TO-DO*## References\n*TO-DO*"
] |