| Column | Type | Min length | Max length |
|:----------------|:-------|-----------:|-----------:|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
b077f867d53935f1e9719e4b217dd0f6bd6549ec
|
# Dataset of Misaana Farrengram
This is the dataset of Misaana Farrengram, containing 135 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 135 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 282 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 135 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 135 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 135 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 135 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 135 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 282 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 282 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 282 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
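The packages above can also be fetched programmatically. Below is a minimal sketch using `huggingface_hub`, assuming this repository id (`CyberHarem/misaana_farrengram_kumakumakumabear`) and the `dataset-raw.zip` filename from the Download column; any other package can be fetched by swapping the `filename` argument.
```python
import zipfile

from huggingface_hub import hf_hub_download

# Download one of the package archives listed above (dataset-raw.zip here).
zip_file = hf_hub_download(
    repo_id='CyberHarem/misaana_farrengram_kumakumakumabear',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# Extract the archive into a local working directory.
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall('dataset_dir')
```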
|
CyberHarem/misaana_farrengram_kumakumakumabear
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T18:41:32+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:43:19+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Misaana Farrengram
=============================
This is the dataset of Misaana Farrengram, containing 135 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
51cbdbd49feb64c99f6ec7865f8e4c202aa0e910
|
# Dataset Card for "Large_training_set_40kclaims"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/Large_training_set_40kclaims
|
[
"region:us"
] |
2023-09-16T18:45:33+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "evidence_wiki_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3252366, "num_examples": 39752}], "download_size": 1954676, "dataset_size": 3252366}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T18:45:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Large_training_set_40kclaims"
More Information needed
|
[
"# Dataset Card for \"Large_training_set_40kclaims\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Large_training_set_40kclaims\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Large_training_set_40kclaims\"\n\nMore Information needed"
] |
70b358cb6544c3f241e94e151b0c744258ab0310
|
# Dataset Card for "Large_training_set_55kdocs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/Large_training_set_55kdocs
|
[
"region:us"
] |
2023-09-16T18:45:34+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37559617, "num_examples": 56816}], "download_size": 23914506, "dataset_size": 37559617}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T18:45:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Large_training_set_55kdocs"
More Information needed
|
[
"# Dataset Card for \"Large_training_set_55kdocs\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Large_training_set_55kdocs\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Large_training_set_55kdocs\"\n\nMore Information needed"
] |
5182a10a10e1d46daeecbe7f60a0d23175d22c62
|
# Dataset of honda_roko (THE iDOLM@STER: Million Live!)
This is the dataset of honda_roko (THE iDOLM@STER: Million Live!), containing 40 images and their tags.
The core tags of this character are `long_hair, bow, yellow_eyes, hair_bow, breasts, grey_hair, twintails, bangs, green_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 40 | 42.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honda_roko_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 40 | 28.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honda_roko_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 84 | 54.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honda_roko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 40 | 38.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honda_roko_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 84 | 69.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honda_roko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/honda_roko_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, open_mouth, navel, :d, nipples, nude, small_breasts, hair_ornament, hat, jewelry, pussy |
| 1 | 5 |  |  |  |  |  | 1girl, 1boy, blush, hetero, penis, solo_focus, sweat, looking_at_viewer, mosaic_censoring, nipples, open_clothes, open_mouth, spread_legs, thighhighs, after_sex, bra, clothed_sex, cum_in_pussy, large_breasts, lying, m_legs, panties, polka_dot, smile, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | looking_at_viewer | open_mouth | navel | :d | nipples | nude | small_breasts | hair_ornament | hat | jewelry | pussy | 1boy | hetero | penis | solo_focus | sweat | mosaic_censoring | open_clothes | spread_legs | thighhighs | after_sex | bra | clothed_sex | cum_in_pussy | large_breasts | lying | m_legs | panties | polka_dot | smile | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:-------------|:--------|:-----|:----------|:-------|:----------------|:----------------|:------|:----------|:--------|:-------|:---------|:--------|:-------------|:--------|:-------------------|:---------------|:--------------|:-------------|:------------|:------|:--------------|:---------------|:----------------|:--------|:---------|:----------|:------------|:--------|:----------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | X | X | X | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/honda_roko_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T18:47:57+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:36:24+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of honda\_roko (THE iDOLM@STER: Million Live!)
======================================================
This is the dataset of honda\_roko (THE iDOLM@STER: Million Live!), containing 40 images and their tags.
The core tags of this character are 'long\_hair, bow, yellow\_eyes, hair\_bow, breasts, grey\_hair, twintails, bangs, green\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
ba874e4381e4e9f4b3204ff41739c71e15b11881
|
# Dataset of シア・フォシュローゼ
This is the dataset of シア・フォシュローゼ, containing 201 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 201 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 480 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 201 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 201 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 201 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 201 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 201 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 480 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 480 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 480 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/shiahuoshiyuroze_kumakumakumabear
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T19:03:20+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:43:23+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of シア・フォシュローゼ
=====================
This is the dataset of シア・フォシュローゼ, containing 201 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
014f8acbb775fc2bb6e6d163851f9fda44a33e49
|
# Dataset Card for "open-music-dataset-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
realfolkcode/open-music-dataset-demo
|
[
"region:us"
] |
2023-09-16T19:09:35+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 387155570.0, "num_examples": 8}], "download_size": 386530208, "dataset_size": 387155570.0}}
|
2023-09-16T19:20:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "open-music-dataset-demo"
More Information needed
|
[
"# Dataset Card for \"open-music-dataset-demo\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"open-music-dataset-demo\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"open-music-dataset-demo\"\n\nMore Information needed"
] |
5bac304176ec408d54f965b90a23fa710f7b09fa
|
See https://huggingface.co/datasets/TIGER-Lab/MathInstruct.
This is only here for convenience.
|
typeof/TIGER-Lab-MathInstruct_PoT
|
[
"region:us"
] |
2023-09-16T19:10:59+00:00
|
{}
|
2023-09-16T19:33:18+00:00
|
[] |
[] |
TAGS
#region-us
|
See URL.
This is only here for convenience.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
22e61580d0cf47931624a82be5496a77b0ec5889
|
# Dataset of アトラ
This is the dataset of アトラ, containing 100 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 100 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 216 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 100 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 100 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 100 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 100 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 100 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 216 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 216 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 216 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
CyberHarem/atora_kumakumakumabear
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T19:12:19+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-17T16:43:25+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of アトラ
==============
This is the dataset of アトラ, containing 100 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (Hugging Face organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
07854f971bc7f5a8a0ddd97b21d81a0463a9b662
|
# Dataset Card for "6971f242"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/6971f242
|
[
"region:us"
] |
2023-09-16T19:24:50+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 227, "num_examples": 10}], "download_size": 1445, "dataset_size": 227}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T19:24:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "6971f242"
More Information needed
|
[
"# Dataset Card for \"6971f242\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"6971f242\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"6971f242\"\n\nMore Information needed"
] |
c59d9dff530e2f8bdfc40c0a1178bac476728c67
|
# Dataset Card for "31ba9674"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/31ba9674
|
[
"region:us"
] |
2023-09-16T19:24:52+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 227, "num_examples": 10}], "download_size": 1445, "dataset_size": 227}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T19:24:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "31ba9674"
More Information needed
|
[
"# Dataset Card for \"31ba9674\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"31ba9674\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"31ba9674\"\n\nMore Information needed"
] |
fca7f026e5b6016c5b62db33d829538040e86680
|
# Dataset Card for "CC-MAIN-2023-23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dominguesm/CC-MAIN-2023-23
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:10B<n<100B",
"language:pt",
"license:cc-by-4.0",
"region:us"
] |
2023-09-16T19:32:49+00:00
|
{"language": ["pt"], "license": "cc-by-4.0", "size_categories": ["10B<n<100B"], "task_categories": ["text-generation", "fill-mask"], "pretty_name": "CC-MAIN-2023-23-PT", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "timestamp[ns, tz=UTC]"}], "splits": [{"name": "train", "num_bytes": 97584560119, "num_examples": 16899389}], "download_size": 18490153155, "dataset_size": 97584560119}}
|
2023-09-16T23:02:06+00:00
|
[] |
[
"pt"
] |
TAGS
#task_categories-text-generation #task_categories-fill-mask #size_categories-10B<n<100B #language-Portuguese #license-cc-by-4.0 #region-us
|
# Dataset Card for "CC-MAIN-2023-23"
More Information needed
|
[
"# Dataset Card for \"CC-MAIN-2023-23\"\n\nMore Information needed"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-10B<n<100B #language-Portuguese #license-cc-by-4.0 #region-us \n",
"# Dataset Card for \"CC-MAIN-2023-23\"\n\nMore Information needed"
] |
[
55,
17
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-10B<n<100B #language-Portuguese #license-cc-by-4.0 #region-us \n# Dataset Card for \"CC-MAIN-2023-23\"\n\nMore Information needed"
] |
10a7f0454fcf1d63959aca967730af7dd7735ec8
|
# Dataset Card for "fbc48c23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/fbc48c23
|
[
"region:us"
] |
2023-09-16T19:33:58+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 169, "num_examples": 10}], "download_size": 1322, "dataset_size": 169}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T19:33:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fbc48c23"
More Information needed
|
[
"# Dataset Card for \"fbc48c23\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fbc48c23\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fbc48c23\"\n\nMore Information needed"
] |
84a87909d09ff0c3ae040c4e0af25a6344d96531
|
# Overview
This dataset is a collection of approximately 38,500 poems from https://www.public-domain-poetry.com/.
## Language
The language of this dataset is English.
## License
All data in this dataset is in the public domain, which means you can use it for anything you want, as long as you aren't breaking any law in the process.
|
DanFosing/public-domain-poetry
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-09-16T19:46:31+00:00
|
{"language": ["en"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "public-domain-poetry"}
|
2023-09-24T10:48:44+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc0-1.0 #region-us
|
# Overview
This dataset is a collection of approximately 38,500 poems from URL
## Language
The language of this dataset is English.
## License
All data in this dataset is in the public domain, which means you can use it for anything you want, as long as you aren't breaking any law in the process.
|
[
"# Overview\n\nThis dataset is a collection of approximately 38,500 poems from URL",
"## Language\n\nThe language of this dataset is English.",
"## License\n\nAll data in this dataset is public domain, which means you should be able to use it for anything you want, as long as you aren't breaking any law in the process of doing so."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc0-1.0 #region-us \n",
"# Overview\n\nThis dataset is a collection of approximately 38,500 poems from URL",
"## Language\n\nThe language of this dataset is English.",
"## License\n\nAll data in this dataset is public domain, which means you should be able to use it for anything you want, as long as you aren't breaking any law in the process of doing so."
] |
[
41,
18,
11,
43
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-cc0-1.0 #region-us \n# Overview\n\nThis dataset is a collection of approximately 38,500 poems from URL## Language\n\nThe language of this dataset is English.## License\n\nAll data in this dataset is public domain, which means you should be able to use it for anything you want, as long as you aren't breaking any law in the process of doing so."
] |
f3b969aaf34f07e0b4a0051352b41cb6acd1d3eb
|
# Dataset Card for "llama2-chinese-couplet-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
chenqile09/llama2-chinese-couplet-100k
|
[
"region:us"
] |
2023-09-16T20:13:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33921909.405820444, "num_examples": 100000}, {"name": "validation", "num_bytes": 1358512, "num_examples": 4000}], "download_size": 13630532, "dataset_size": 35280421.405820444}}
|
2023-09-17T21:03:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama2-chinese-couplet-100k"
More Information needed
|
[
"# Dataset Card for \"llama2-chinese-couplet-100k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama2-chinese-couplet-100k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama2-chinese-couplet-100k\"\n\nMore Information needed"
] |
929bab2cfaaa5c590a07598f8d366c90bc43ccd9
|
# Dataset Card for Evaluation run of circulus/Llama-2-7b-orca-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/circulus/Llama-2-7b-orca-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [circulus/Llama-2-7b-orca-v1](https://huggingface.co/circulus/Llama-2-7b-orca-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1",
"harness_winogrande_5",
split="train")
```
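The aggregated "results" configuration can be loaded the same way. A small sketch, assuming the `results` config name and the `latest` split alias declared in this repository's configuration:
```python
from datasets import load_dataset

# "latest" aliases the most recent evaluation run; a timestamped split
# (e.g. "2023_09_16T21_26_35.463636") selects a specific run instead.
results = load_dataset(
    "open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1",
    "results",
    split="latest",
)
```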
## Latest results
These are the [latest results from run 2023-09-16T21:26:35.463636](https://huggingface.co/datasets/open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1/blob/main/results_2023-09-16T21-26-35.463636.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.08557046979865772,
"em_stderr": 0.0028646840549845006,
"f1": 0.15811556208053656,
"f1_stderr": 0.003126158993030364,
"acc": 0.4151299715828343,
"acc_stderr": 0.009762520250486784
},
"harness|drop|3": {
"em": 0.08557046979865772,
"em_stderr": 0.0028646840549845006,
"f1": 0.15811556208053656,
"f1_stderr": 0.003126158993030364
},
"harness|gsm8k|5": {
"acc": 0.07808946171341925,
"acc_stderr": 0.007390654481108218
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.01213438601986535
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1
|
[
"region:us"
] |
2023-09-16T20:26:39+00:00
|
{"pretty_name": "Evaluation run of circulus/Llama-2-7b-orca-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [circulus/Llama-2-7b-orca-v1](https://huggingface.co/circulus/Llama-2-7b-orca-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-16T21:26:35.463636](https://huggingface.co/datasets/open-llm-leaderboard/details_circulus__Llama-2-7b-orca-v1/blob/main/results_2023-09-16T21-26-35.463636.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08557046979865772,\n \"em_stderr\": 0.0028646840549845006,\n \"f1\": 0.15811556208053656,\n \"f1_stderr\": 0.003126158993030364,\n \"acc\": 0.4151299715828343,\n \"acc_stderr\": 0.009762520250486784\n },\n \"harness|drop|3\": {\n \"em\": 0.08557046979865772,\n \"em_stderr\": 0.0028646840549845006,\n \"f1\": 0.15811556208053656,\n \"f1_stderr\": 0.003126158993030364\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07808946171341925,\n \"acc_stderr\": 0.007390654481108218\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.01213438601986535\n }\n}\n```", "repo_url": "https://huggingface.co/circulus/Llama-2-7b-orca-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_16T21_26_35.463636", "path": ["**/details_harness|drop|3_2023-09-16T21-26-35.463636.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-16T21-26-35.463636.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_16T21_26_35.463636", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-26-35.463636.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-26-35.463636.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_16T21_26_35.463636", "path": ["**/details_harness|winogrande|5_2023-09-16T21-26-35.463636.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-16T21-26-35.463636.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_16T21_26_35.463636", "path": ["results_2023-09-16T21-26-35.463636.parquet"]}, {"split": "latest", "path": ["results_2023-09-16T21-26-35.463636.parquet"]}]}]}
|
2023-09-16T20:26:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of circulus/Llama-2-7b-orca-v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model circulus/Llama-2-7b-orca-v1 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-16T21:26:35.463636 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of circulus/Llama-2-7b-orca-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model circulus/Llama-2-7b-orca-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:26:35.463636(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of circulus/Llama-2-7b-orca-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model circulus/Llama-2-7b-orca-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:26:35.463636(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of circulus/Llama-2-7b-orca-v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model circulus/Llama-2-7b-orca-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-16T21:26:35.463636(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
81cd1c28dfce4b9cfcc69a5b41528219f7e4e1f6
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Chris126/guanaco-llama2-1k
|
[
"region:us"
] |
2023-09-16T20:46:23+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 0, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T19:04:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
f6785bbbb8c38a71888ac07fda83838dcc06b03e
|
# Dataset Card for Evaluation run of shaohang/Sparse0.5_OPT-1.3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/shaohang/Sparse0.5_OPT-1.3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [shaohang/Sparse0.5_OPT-1.3](https://huggingface.co/shaohang/Sparse0.5_OPT-1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3",
"harness_winogrande_5",
split="train")
```
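Per-task details follow the same pattern. A small sketch, assuming the `harness_gsm8k_5` config name and the `latest` split alias declared in this repository's configuration:
```python
from datasets import load_dataset

# Each harness config exposes a timestamped split plus a "latest" alias.
gsm8k_details = load_dataset(
    "open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3",
    "harness_gsm8k_5",
    split="latest",
)
```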
## Latest results
These are the [latest results from run 2023-09-16T21:48:19.303713](https://huggingface.co/datasets/open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3/blob/main/results_2023-09-16T21-48-19.303713.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.003145973154362416,
"em_stderr": 0.0005734993648436398,
"f1": 0.047173867449664536,
"f1_stderr": 0.0012666649528854216,
"acc": 0.29319675461487227,
"acc_stderr": 0.007301498172995543
},
"harness|drop|3": {
"em": 0.003145973154362416,
"em_stderr": 0.0005734993648436398,
"f1": 0.047173867449664536,
"f1_stderr": 0.0012666649528854216
},
"harness|gsm8k|5": {
"acc": 0.000758150113722517,
"acc_stderr": 0.0007581501137225237
},
"harness|winogrande|5": {
"acc": 0.585635359116022,
"acc_stderr": 0.013844846232268563
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3
|
[
"region:us"
] |
2023-09-16T20:48:22+00:00
|
{"pretty_name": "Evaluation run of shaohang/Sparse0.5_OPT-1.3", "dataset_summary": "Dataset automatically created during the evaluation run of model [shaohang/Sparse0.5_OPT-1.3](https://huggingface.co/shaohang/Sparse0.5_OPT-1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-16T21:48:19.303713](https://huggingface.co/datasets/open-llm-leaderboard/details_shaohang__Sparse0.5_OPT-1.3/blob/main/results_2023-09-16T21-48-19.303713.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003145973154362416,\n \"em_stderr\": 0.0005734993648436398,\n \"f1\": 0.047173867449664536,\n \"f1_stderr\": 0.0012666649528854216,\n \"acc\": 0.29319675461487227,\n \"acc_stderr\": 0.007301498172995543\n },\n \"harness|drop|3\": {\n \"em\": 0.003145973154362416,\n \"em_stderr\": 0.0005734993648436398,\n \"f1\": 0.047173867449664536,\n \"f1_stderr\": 0.0012666649528854216\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \"acc_stderr\": 0.0007581501137225237\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.585635359116022,\n \"acc_stderr\": 0.013844846232268563\n }\n}\n```", "repo_url": "https://huggingface.co/shaohang/Sparse0.5_OPT-1.3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_16T21_48_19.303713", "path": ["**/details_harness|drop|3_2023-09-16T21-48-19.303713.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-16T21-48-19.303713.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_16T21_48_19.303713", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-48-19.303713.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-48-19.303713.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_16T21_48_19.303713", "path": ["**/details_harness|winogrande|5_2023-09-16T21-48-19.303713.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-16T21-48-19.303713.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_16T21_48_19.303713", "path": ["results_2023-09-16T21-48-19.303713.parquet"]}, {"split": "latest", "path": ["results_2023-09-16T21-48-19.303713.parquet"]}]}]}
|
2023-09-16T20:48:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of shaohang/Sparse0.5_OPT-1.3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model shaohang/Sparse0.5_OPT-1.3 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-16T21:48:19.303713 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of shaohang/Sparse0.5_OPT-1.3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model shaohang/Sparse0.5_OPT-1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:48:19.303713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of shaohang/Sparse0.5_OPT-1.3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model shaohang/Sparse0.5_OPT-1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:48:19.303713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of shaohang/Sparse0.5_OPT-1.3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model shaohang/Sparse0.5_OPT-1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-16T21:48:19.303713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
68fed9fb77c17b4bf3fed0d2994379ab56357e71
|
# Dataset Card for "bbc_2017"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
RealTimeData/bbc_2017
|
[
"region:us"
] |
2023-09-16T21:26:45+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "published_date", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "section", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42935783, "num_examples": 11381}], "download_size": 19022337, "dataset_size": 42935783}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T21:26:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bbc_2017"
More Information needed
|
[
"# Dataset Card for \"bbc_2017\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bbc_2017\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bbc_2017\"\n\nMore Information needed"
] |
4bce9e8fcbe3206c5e954908da1d2a71c126d136
|
# Dataset Card for Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Brillibits/Instruct_Llama70B_Dolly15k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Brillibits/Instruct_Llama70B_Dolly15k](https://huggingface.co/Brillibits/Instruct_Llama70B_Dolly15k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public",
"harness_winogrande_5",
split="train")
```
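If you are unsure which configuration to pick, you can list the configurations available in the repository first. This is a minimal sketch using the standard `datasets` helper (not part of the original evaluation tooling):

```python
# Sketch: discover the configurations available in this details repository.
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public"
)
print(configs)  # expected: harness_drop_3, harness_gsm8k_5, harness_winogrande_5, results
```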
## Latest results
These are the [latest results from run 2023-11-07T07:12:49.365073](https://huggingface.co/datasets/open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public/blob/main/results_2023-11-07T07-12-49.365073.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its configuration, under the "latest" split):
```python
{
"all": {
"em": 0.2294463087248322,
"em_stderr": 0.004306075513502917,
"f1": 0.2826310822147651,
"f1_stderr": 0.004256290262260348,
"acc": 0.6348872917405918,
"acc_stderr": 0.01192527682309685
},
"harness|drop|3": {
"em": 0.2294463087248322,
"em_stderr": 0.004306075513502917,
"f1": 0.2826310822147651,
"f1_stderr": 0.004256290262260348
},
"harness|gsm8k|5": {
"acc": 0.4268385140257771,
"acc_stderr": 0.013624249696595222
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598477
}
}
```
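To retrieve these aggregated numbers programmatically, a sketch is to load the "results" configuration at its "latest" split (the split name below is taken from this repository's configuration metadata):

```python
# Sketch: load the aggregated "results" configuration of the latest run.
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public",
    "results",
    split="latest",
)
print(results[0])  # inspect the aggregated metrics of the latest run
```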
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k
|
[
"region:us"
] |
2023-09-16T21:45:39+00:00
|
{"pretty_name": "Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k", "dataset_summary": "Dataset automatically created during the evaluation run of model [Brillibits/Instruct_Llama70B_Dolly15k](https://huggingface.co/Brillibits/Instruct_Llama70B_Dolly15k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T07:12:49.365073](https://huggingface.co/datasets/open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k_public/blob/main/results_2023-11-07T07-12-49.365073.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2294463087248322,\n \"em_stderr\": 0.004306075513502917,\n \"f1\": 0.2826310822147651,\n \"f1_stderr\": 0.004256290262260348,\n \"acc\": 0.6348872917405918,\n \"acc_stderr\": 0.01192527682309685\n },\n \"harness|drop|3\": {\n \"em\": 0.2294463087248322,\n \"em_stderr\": 0.004306075513502917,\n \"f1\": 0.2826310822147651,\n \"f1_stderr\": 0.004256290262260348\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4268385140257771,\n \"acc_stderr\": 0.013624249696595222\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598477\n }\n}\n```", "repo_url": "https://huggingface.co/Brillibits/Instruct_Llama70B_Dolly15k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_07T07_12_49.365073", "path": ["**/details_harness|drop|3_2023-11-07T07-12-49.365073.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T07-12-49.365073.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_07T07_12_49.365073", "path": ["**/details_harness|gsm8k|5_2023-11-07T07-12-49.365073.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T07-12-49.365073.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_07T07_12_49.365073", "path": ["**/details_harness|winogrande|5_2023-11-07T07-12-49.365073.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-07T07-12-49.365073.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_07T07_12_49.365073", "path": ["results_2023-11-07T07-12-49.365073.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T07-12-49.365073.parquet"]}]}]}
|
2023-12-01T14:40:41+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Brillibits/Instruct_Llama70B_Dolly15k on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-07T07:12:49.365073 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its configuration, under the "latest" split):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Brillibits/Instruct_Llama70B_Dolly15k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T07:12:49.365073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Brillibits/Instruct_Llama70B_Dolly15k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T07:12:49.365073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Brillibits/Instruct_Llama70B_Dolly15k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Brillibits/Instruct_Llama70B_Dolly15k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T07:12:49.365073(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
eb2b6b63035a4e510d27e6802e0fe56abce1af64
|
# Dataset Card for "trivia_qa_wiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liyucheng/trivia_qa_wiki
|
[
"region:us"
] |
2023-09-16T22:09:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question_source", "dtype": "string"}, {"name": "entity_pages", "sequence": [{"name": "doc_source", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "wiki_context", "dtype": "string"}]}, {"name": "search_results", "sequence": [{"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "rank", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "search_context", "dtype": "string"}]}, {"name": "answer", "struct": [{"name": "aliases", "sequence": "string"}, {"name": "normalized_aliases", "sequence": "string"}, {"name": "matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_value", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3340799992, "num_examples": 61888}, {"name": "validation", "num_bytes": 430166050, "num_examples": 7993}, {"name": "test", "num_bytes": 406046504, "num_examples": 7701}], "download_size": 2293374081, "dataset_size": 4177012546}}
|
2023-09-16T22:12:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trivia_qa_wiki"
More Information needed
|
[
"# Dataset Card for \"trivia_qa_wiki\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trivia_qa_wiki\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trivia_qa_wiki\"\n\nMore Information needed"
] |
35158e1afd4fdb323e302682bf7adce5563a1ed5
|
# Dataset Card for "trivia_qa_wiki_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
liyucheng/trivia_qa_wiki_val
|
[
"region:us"
] |
2023-09-16T22:14:50+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question_source", "dtype": "string"}, {"name": "entity_pages", "sequence": [{"name": "doc_source", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "wiki_context", "dtype": "string"}]}, {"name": "search_results", "sequence": [{"name": "description", "dtype": "string"}, {"name": "filename", "dtype": "string"}, {"name": "rank", "dtype": "int32"}, {"name": "title", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "search_context", "dtype": "string"}]}, {"name": "answer", "struct": [{"name": "aliases", "sequence": "string"}, {"name": "normalized_aliases", "sequence": "string"}, {"name": "matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_matched_wiki_entity_name", "dtype": "string"}, {"name": "normalized_value", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "wiki_context_sample", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 662010582, "num_examples": 7993}], "download_size": 355772611, "dataset_size": 662010582}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
|
2023-09-16T22:21:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "trivia_qa_wiki_val"
More Information needed
|
[
"# Dataset Card for \"trivia_qa_wiki_val\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"trivia_qa_wiki_val\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"trivia_qa_wiki_val\"\n\nMore Information needed"
] |
83bf9c445482a03400d7bf4683f1178c235f5d5c
|
# Dataset of baba_konomi/馬場このみ/바바코노미 (THE iDOLM@STER: Million Live!)
This is the dataset of baba_konomi/馬場このみ/바바코노미 (THE iDOLM@STER: Million Live!), containing 294 images and their tags.
The core tags of this character are `brown_hair, braid, long_hair, single_braid, hair_over_shoulder, aqua_eyes, breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 294 | 315.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/baba_konomi_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 294 | 202.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/baba_konomi_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 694 | 421.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/baba_konomi_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 294 | 292.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/baba_konomi_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 694 | 561.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/baba_konomi_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/baba_konomi_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
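If you then want the items in a training-friendly layout, a possible follow-up is to re-export them with one of waifuc's exporters. This is a sketch assuming `TextualInversionExporter` from waifuc's export module; adapt it to the exporter you actually need:

```python
# Sketch: re-export the loaded items as image/tag-text pairs.
# `TextualInversionExporter` is assumed from waifuc's export module.
from waifuc.export import TextualInversionExporter
from waifuc.source import LocalSource

source = LocalSource('dataset_dir')
source.export(TextualInversionExporter('exported_dir'))
```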
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, dress, blush, hair_ornament, one_eye_closed, open_mouth |
| 1 | 12 |  |  |  |  |  | 1girl, blush, small_breasts, solo, looking_at_viewer, micro_bikini, navel, white_bikini, smile, open_mouth |
| 2 | 9 |  |  |  |  |  | 1girl, blush, nipples, female_pubic_hair, navel, nude, spread_legs, sweat, censored, looking_at_viewer, small_breasts, 1boy, cum_in_pussy, hetero, open_mouth, penis, solo_focus, cum_on_body, lying |
| 3 | 5 |  |  |  |  |  | 1girl, blush, pleated_skirt, randoseru, short_sleeves, white_shirt, looking_at_viewer, open_mouth, serafuku, solo, suspender_skirt, black_skirt, blue_skirt, hair_between_eyes, white_background, white_sailor_collar, black_footwear, bow, collared_shirt, green_eyes, kneehighs, red_bag, shoes, simple_background, striped, sweat, table, twin_braids, white_socks |
| 4 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, small_breasts, solo, navel, pillow, underwear_only, black_bra, black_panties, green_eyes, lying, open_mouth |
| 5 | 9 |  |  |  |  |  | playboy_bunny, rabbit_ears, 1girl, detached_collar, wrist_cuffs, blush, bowtie, fake_animal_ears, looking_at_viewer, solo, smile, bare_shoulders, rabbit_tail, ass, fishnet_pantyhose, strapless_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | smile | solo | dress | blush | hair_ornament | one_eye_closed | open_mouth | small_breasts | micro_bikini | navel | white_bikini | nipples | female_pubic_hair | nude | spread_legs | sweat | censored | 1boy | cum_in_pussy | hetero | penis | solo_focus | cum_on_body | lying | pleated_skirt | randoseru | short_sleeves | white_shirt | serafuku | suspender_skirt | black_skirt | blue_skirt | hair_between_eyes | white_background | white_sailor_collar | black_footwear | bow | collared_shirt | green_eyes | kneehighs | red_bag | shoes | simple_background | striped | table | twin_braids | white_socks | pillow | underwear_only | black_bra | black_panties | playboy_bunny | rabbit_ears | detached_collar | wrist_cuffs | bowtie | fake_animal_ears | bare_shoulders | rabbit_tail | ass | fishnet_pantyhose | strapless_leotard |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------|:-------|:--------|:--------|:----------------|:-----------------|:-------------|:----------------|:---------------|:--------|:---------------|:----------|:--------------------|:-------|:--------------|:--------|:-----------|:-------|:---------------|:---------|:--------|:-------------|:--------------|:--------|:----------------|:------------|:----------------|:--------------|:-----------|:------------------|:--------------|:-------------|:--------------------|:-------------------|:----------------------|:-----------------|:------|:-----------------|:-------------|:------------|:----------|:--------|:--------------------|:----------|:--------|:--------------|:--------------|:---------|:-----------------|:------------|:----------------|:----------------|:--------------|:------------------|:--------------|:---------|:-------------------|:-----------------|:--------------|:------|:--------------------|:--------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | X | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | | | | X | | | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | X | | X | | | X | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | | X | | X | | | X | X | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/baba_konomi_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T22:17:17+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T00:15:41+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of baba\_konomi/馬場このみ/바바코노미 (THE iDOLM@STER: Million Live!)
===================================================================
This is the dataset of baba\_konomi/馬場このみ/바바코노미 (THE iDOLM@STER: Million Live!), containing 294 images and their tags.
The core tags of this character are 'brown\_hair, braid, long\_hair, single\_braid, hair\_over\_shoulder, aqua\_eyes, breasts, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
cd39b7696a302df969b2575f27a2768c129cccb6
|
# Dataset Card for "icd_code_prediction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Siddharthr30/icd_code_prediction
|
[
"region:us"
] |
2023-09-16T22:27:13+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 673273, "num_examples": 2628}, {"name": "validation", "num_bytes": 227730, "num_examples": 876}, {"name": "test", "num_bytes": 223352, "num_examples": 876}], "download_size": 458207, "dataset_size": 1124355}}
|
2023-09-16T22:27:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "icd_code_prediction"
More Information needed
|
[
"# Dataset Card for \"icd_code_prediction\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"icd_code_prediction\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"icd_code_prediction\"\n\nMore Information needed"
] |
74142718e6e102cfbf425744a5f38c32b7fa86dd
|
# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bavest/fin-llama-33b-merged
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bavest/fin-llama-33b-merged](https://huggingface.co/bavest/fin-llama-33b-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bavest__fin-llama-33b-merged",
"harness_winogrande_5",
split="train")
```
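If you need a reproducible reference instead of the moving latest results, a sketch is to pin the timestamped split of a run (the split name below is taken from this repository's configuration metadata):

```python
# Sketch: pin a specific evaluation run instead of the moving "latest" split.
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_bavest__fin-llama-33b-merged",
    "harness_gsm8k_5",
    split="2023_09_16T23_28_46.893925",
)
```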
## Latest results
These are the [latest results from run 2023-09-16T23:28:46.893925](https://huggingface.co/datasets/open-llm-leaderboard/details_bavest__fin-llama-33b-merged/blob/main/results_2023-09-16T23-28-46.893925.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its configuration, under the "latest" split):
```python
{
"all": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558753,
"f1": 0.06358221476510076,
"f1_stderr": 0.0013748196874116337,
"acc": 0.48127991536483655,
"acc_stderr": 0.010695229631509682
},
"harness|drop|3": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558753,
"f1": 0.06358221476510076,
"f1_stderr": 0.0013748196874116337
},
"harness|gsm8k|5": {
"acc": 0.16224412433661864,
"acc_stderr": 0.010155130880393522
},
"harness|winogrande|5": {
"acc": 0.8003157063930545,
"acc_stderr": 0.011235328382625842
}
}
```
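For per-sample inspection rather than the aggregates above, a sketch is to convert one details configuration to a pandas DataFrame (configuration and split names are taken from this repository's metadata):

```python
# Sketch: inspect the per-sample DROP details as a pandas DataFrame.
from datasets import load_dataset

drop_details = load_dataset(
    "open-llm-leaderboard/details_bavest__fin-llama-33b-merged",
    "harness_drop_3",
    split="latest",
)
df = drop_details.to_pandas()
print(df.head())
```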
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_bavest__fin-llama-33b-merged
|
[
"region:us"
] |
2023-09-16T22:28:50+00:00
|
{"pretty_name": "Evaluation run of bavest/fin-llama-33b-merged", "dataset_summary": "Dataset automatically created during the evaluation run of model [bavest/fin-llama-33b-merged](https://huggingface.co/bavest/fin-llama-33b-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bavest__fin-llama-33b-merged\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-16T23:28:46.893925](https://huggingface.co/datasets/open-llm-leaderboard/details_bavest__fin-llama-33b-merged/blob/main/results_2023-09-16T23-28-46.893925.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558753,\n \"f1\": 0.06358221476510076,\n \"f1_stderr\": 0.0013748196874116337,\n \"acc\": 0.48127991536483655,\n \"acc_stderr\": 0.010695229631509682\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558753,\n \"f1\": 0.06358221476510076,\n \"f1_stderr\": 0.0013748196874116337\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16224412433661864,\n \"acc_stderr\": 0.010155130880393522\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625842\n }\n}\n```", "repo_url": "https://huggingface.co/bavest/fin-llama-33b-merged", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_16T23_28_46.893925", "path": ["**/details_harness|drop|3_2023-09-16T23-28-46.893925.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-16T23-28-46.893925.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_16T23_28_46.893925", "path": ["**/details_harness|gsm8k|5_2023-09-16T23-28-46.893925.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-16T23-28-46.893925.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_16T23_28_46.893925", "path": ["**/details_harness|winogrande|5_2023-09-16T23-28-46.893925.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-16T23-28-46.893925.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_16T23_28_46.893925", "path": ["results_2023-09-16T23-28-46.893925.parquet"]}, {"split": "latest", "path": ["results_2023-09-16T23-28-46.893925.parquet"]}]}]}
|
2023-09-16T22:28:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bavest/fin-llama-33b-merged on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-16T23:28:46.893925 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its configuration, under the "latest" split):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bavest/fin-llama-33b-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T23:28:46.893925(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bavest/fin-llama-33b-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T23:28:46.893925(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bavest/fin-llama-33b-merged## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bavest/fin-llama-33b-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-16T23:28:46.893925(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ec304a74a667e1bb4caa13b525145d419915077b
|
# Dataset Card for "nli_mix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nfliu/nli_mix
|
[
"region:us"
] |
2023-09-16T22:54:26+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "subset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 391794476, "num_examples": 1385328}, {"name": "validation", "num_bytes": 35382903, "num_examples": 127574}, {"name": "test", "num_bytes": 18367195, "num_examples": 68523}], "download_size": 175779896, "dataset_size": 445544574}}
|
2023-09-16T22:59:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nli_mix"
More Information needed
|
[
"# Dataset Card for \"nli_mix\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nli_mix\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nli_mix\"\n\nMore Information needed"
] |
5f55cae9d5ea5926cc366d3e0482381ff83da332
|
# Dataset Card for "0dc6521d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/0dc6521d
|
[
"region:us"
] |
2023-09-16T22:57:14+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 166, "num_examples": 10}], "download_size": 1304, "dataset_size": 166}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T22:57:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "0dc6521d"
More Information needed
|
[
"# Dataset Card for \"0dc6521d\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"0dc6521d\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"0dc6521d\"\n\nMore Information needed"
] |
411671ad27568610c587a536dc238c69845796d5
|
# Dataset Card for "yara_dataset_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
boardsec/yara_dataset_v1
|
[
"region:us"
] |
2023-09-16T23:30:42+00:00
|
{"dataset_info": {"features": [{"name": "Chunk", "dtype": "string"}, {"name": "yara_rule", "dtype": "string"}, {"name": "cleaned_yara_rule", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33823, "num_examples": 67}], "download_size": 14543, "dataset_size": 33823}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T23:30:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yara_dataset_v1"
More Information needed
|
[
"# Dataset Card for \"yara_dataset_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yara_dataset_v1\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yara_dataset_v1\"\n\nMore Information needed"
] |
66d11e894023ed44498c3993a057d1ec2d3cb86a
|
# Dataset Card for "yara_dataset_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
boardsec/yara_dataset_v2
|
[
"region:us"
] |
2023-09-16T23:35:13+00:00
|
{"dataset_info": {"features": [{"name": "Chunk", "dtype": "string"}, {"name": "yara_rule", "dtype": "string"}, {"name": "cleaned_yara_rule", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36039, "num_examples": 67}], "download_size": 15832, "dataset_size": 36039}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-16T23:35:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yara_dataset_v2"
More Information needed
|
[
"# Dataset Card for \"yara_dataset_v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yara_dataset_v2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yara_dataset_v2\"\n\nMore Information needed"
] |
5b49b0642950f504eec1e9e8e54337e7c8d04ef0
|
# Dataset of miyao_miya/宮尾美也/미야오미야 (THE iDOLM@STER: Million Live!)
This is the dataset of miyao_miya/宮尾美也/미야오미야 (THE iDOLM@STER: Million Live!), containing 284 images and their tags.
The core tags of this character are `brown_hair, long_hair, brown_eyes, bangs, thick_eyebrows, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 284 | 304.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyao_miya_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 284 | 206.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyao_miya_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 644 | 414.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyao_miya_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 284 | 282.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyao_miya_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 644 | 540.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyao_miya_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/miyao_miya_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
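As a quick sanity check after extraction, you can count the items and summarise their image sizes. This sketch relies only on the `item.image` field shown above (a PIL image):

```python
# Sketch: count the extracted items and summarise their image sizes.
from collections import Counter

from waifuc.source import LocalSource

source = LocalSource('dataset_dir')
sizes = Counter(item.image.size for item in source)  # (width, height) pairs
print(sum(sizes.values()), 'images;', sizes.most_common(3))
```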
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bloomers, blush, looking_at_viewer, paw_gloves, simple_background, smile, solo, white_background, :3, bow, jingle_bell, animal_ear_fluff, cat_ears, cat_tail, dress, hands_up, long_sleeves, open_mouth, tiger_tail |
| 1 | 6 |  |  |  |  |  | 1girl, female_pubic_hair, nipples, blush, looking_at_viewer, navel, nude, solo, medium_breasts, :d, open_mouth |
| 2 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, puffy_short_sleeves, simple_background, solo, neck_ribbon, open_mouth, plaid_dress, red_ribbon, white_background, :d, brown_dress, collarbone, upper_body, blunt_bangs, blush_stickers, frills, hands_up, pink_dress, shirt, very_long_hair |
| 3 | 14 |  |  |  |  |  | serafuku, blue_neckerchief, blush, hair_bow, long_sleeves, white_shirt, 1girl, blue_skirt, open_mouth, pleated_skirt, looking_at_viewer, :d, blue_sky, cloud, day, solo_focus, white_background, white_sailor_collar, yellow_bow, 2girls, collarbone, outdoors, socks, standing |
| 4 | 21 |  |  |  |  |  | 1girl, looking_at_viewer, solo, medium_breasts, blush, navel, smile, cleavage, open_mouth, side-tie_bikini_bottom, cowboy_shot, simple_background, standing, white_background |
| 5 | 8 |  |  |  |  |  | detached_collar, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, strapless_leotard, black_leotard, medium_breasts, wrist_cuffs, 1girl, black_bowtie, cleavage, simple_background, solo, white_background, rabbit_tail, smile, cowboy_shot, fishnet_pantyhose, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bloomers | blush | looking_at_viewer | paw_gloves | simple_background | smile | solo | white_background | :3 | bow | jingle_bell | animal_ear_fluff | cat_ears | cat_tail | dress | hands_up | long_sleeves | open_mouth | tiger_tail | female_pubic_hair | nipples | navel | nude | medium_breasts | :d | puffy_short_sleeves | neck_ribbon | plaid_dress | red_ribbon | brown_dress | collarbone | upper_body | blunt_bangs | blush_stickers | frills | pink_dress | shirt | very_long_hair | serafuku | blue_neckerchief | hair_bow | white_shirt | blue_skirt | pleated_skirt | blue_sky | cloud | day | solo_focus | white_sailor_collar | yellow_bow | 2girls | outdoors | socks | standing | cleavage | side-tie_bikini_bottom | cowboy_shot | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | black_leotard | wrist_cuffs | black_bowtie | rabbit_tail | fishnet_pantyhose |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:--------------------|:-------------|:--------------------|:--------|:-------|:-------------------|:-----|:------|:--------------|:-------------------|:-----------|:-----------|:--------|:-----------|:---------------|:-------------|:-------------|:--------------------|:----------|:--------|:-------|:-----------------|:-----|:----------------------|:--------------|:--------------|:-------------|:--------------|:-------------|:-------------|:--------------|:-----------------|:---------|:-------------|:--------|:-----------------|:-----------|:-------------------|:-----------|:--------------|:-------------|:----------------|:-----------|:--------|:------|:-------------|:----------------------|:-------------|:---------|:-----------|:--------|:-----------|:-----------|:-------------------------|:--------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:----------------|:--------------|:---------------|:--------------|:--------------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | | | | X | | | | | | | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | X | | X | | X | X | | | | | | | | X | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 14 |  |  |  |  |  | X | | X | X | | | | | X | | | | | | | | | X | X | | | | | | | X | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 4 | 21 |  |  |  |  |  | X | | X | X | | X | X | X | X | | | | | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | X | | X | X | X | X | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/miyao_miya_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-16T23:52:32+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T01:34:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of miyao\_miya/宮尾美也/미야오미야 (THE iDOLM@STER: Million Live!)
=================================================================
This is the dataset of miyao\_miya/宮尾美也/미야오미야 (THE iDOLM@STER: Million Live!), containing 284 images and their tags.
The core tags of this character are 'brown\_hair, long\_hair, brown\_eyes, bangs, thick\_eyebrows, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
639e61a024dd1b7743e682322a386f5ad0cbb8e1
|
# Dataset Card for "newAIHumanLLAMA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
stealthwriter/newAIHumanLLAMA
|
[
"region:us"
] |
2023-09-17T00:29:35+00:00
|
{"dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1185261, "num_examples": 10456}, {"name": "validation", "num_bytes": 297846, "num_examples": 2614}], "download_size": 970447, "dataset_size": 1483107}}
|
2023-09-17T00:29:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "newAIHumanLLAMA"
More Information needed
|
[
"# Dataset Card for \"newAIHumanLLAMA\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"newAIHumanLLAMA\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"newAIHumanLLAMA\"\n\nMore Information needed"
] |
b697a5889849a6be6a550c3913d1a9f6dbc2f47c
|
# Dataset Card for "yara_dataset_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
boardsec/yara_dataset_v3
|
[
"region:us"
] |
2023-09-17T00:46:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Chunk", "dtype": "string"}, {"name": "yara_rule", "dtype": "string"}, {"name": "cleaned_yara_rule", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33823, "num_examples": 67}], "download_size": 14543, "dataset_size": 33823}}
|
2023-09-17T00:46:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yara_dataset_v3"
More Information needed
|
[
"# Dataset Card for \"yara_dataset_v3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yara_dataset_v3\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yara_dataset_v3\"\n\nMore Information needed"
] |
df519c75a5fb18b68cda64bdae33c03ae8adfb75
|
# Dataset of emily_stewart/エミリー・スチュアート/에밀리스튜어트 (THE iDOLM@STER: Million Live!)
This is the dataset of emily_stewart/エミリー・スチュアート/에밀리스튜어트 (THE iDOLM@STER: Million Live!), containing 234 images and their tags.
The core tags of this character are `blonde_hair, long_hair, purple_eyes, twintails, hairband, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 234 | 249.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 234 | 165.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 536 | 343.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 234 | 226.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 536 | 455.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emily_stewart_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/emily_stewart_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
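The packaged variants listed in the table above can be fetched the same way; only the filename changes. A minimal sketch, downloading the 800-pixel IMG+TXT archive from the package table:

```python
# a minimal sketch: fetch the 800px IMG+TXT package instead of the raw archive
from huggingface_hub import hf_hub_download

img_txt_zip = hf_hub_download(
    repo_id='CyberHarem/emily_stewart_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-800.zip',  # any filename from the package table above works
)
```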
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, hair_flower, open_mouth, solo, kimono, blush, looking_at_viewer, :d, floral_print, obi, own_hands_together |
| 1 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_background, :d, blush, open_mouth, simple_background, parted_bangs, long_sleeves, white_shirt, blue_dress, hair_bow, sleeveless |
| 2 | 5 |  |  |  |  |  | 1girl, blush, navel, nipples, female_pubic_hair, medium_breasts, completely_nude, looking_at_viewer, open_mouth, solo, 1boy, blonde_pubic_hair, hetero, pussy_juice, small_breasts, smile, spread_legs, sweat, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_flower | open_mouth | solo | kimono | blush | looking_at_viewer | :d | floral_print | obi | own_hands_together | white_background | simple_background | parted_bangs | long_sleeves | white_shirt | blue_dress | hair_bow | sleeveless | navel | nipples | female_pubic_hair | medium_breasts | completely_nude | 1boy | blonde_pubic_hair | hetero | pussy_juice | small_breasts | smile | spread_legs | sweat | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:-------|:---------|:--------|:--------------------|:-----|:---------------|:------|:---------------------|:-------------------|:--------------------|:---------------|:---------------|:--------------|:-------------|:-----------|:-------------|:--------|:----------|:--------------------|:-----------------|:------------------|:-------|:--------------------|:---------|:--------------|:----------------|:--------|:--------------|:--------|:-------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | | X | X | | X | X | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | X | X | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/emily_stewart_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T00:46:49+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T01:38:37+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of emily\_stewart/エミリー・スチュアート/에밀리스튜어트 (THE iDOLM@STER: Million Live!)
=============================================================================
This is the dataset of emily\_stewart/エミリー・スチュアート/에밀리스튜어트 (THE iDOLM@STER: Million Live!), containing 234 images and their tags.
The core tags of this character are 'blonde\_hair, long\_hair, purple\_eyes, twintails, hairband, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5b64a5b654b2567c2910b784e9e4186bcd6fe2c4
|
# Dataset Card for "yara_dataset_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
boardsec/yara_dataset_v4
|
[
"region:us"
] |
2023-09-17T00:52:08+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Chunk", "dtype": "string"}, {"name": "yara_rule", "dtype": "string"}, {"name": "cleaned_yara_rule", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36039, "num_examples": 67}], "download_size": 15832, "dataset_size": 36039}}
|
2023-09-17T00:52:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yara_dataset_v4"
More Information needed
|
[
"# Dataset Card for \"yara_dataset_v4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yara_dataset_v4\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yara_dataset_v4\"\n\nMore Information needed"
] |
73895e31583a03655a5e6b48457d361099aa98d8
|
# Dataset of handa_roco/伴田路子 (THE iDOLM@STER: Million Live!)
This is the dataset of handa_roco/伴田路子 (THE iDOLM@STER: Million Live!), containing 183 images and their tags.
The core tags of this character are `long_hair, bow, yellow_eyes, hair_bow, bangs, twintails, green_eyes, grey_hair, parted_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 183 | 191.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/handa_roco_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 183 | 125.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/handa_roco_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 392 | 243.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/handa_roco_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 183 | 174.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/handa_roco_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 392 | 318.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/handa_roco_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/handa_roco_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
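If you want plain IMG+TXT pairs without going through the packaged archives, you can write them out yourself. This is a minimal sketch, assuming `item.image` is a PIL image and that iterating `item.meta['tags']` yields tag names (both consistent with the loop above).

```python
# a minimal sketch: dump the loaded items as IMG+TXT training pairs
import os
from waifuc.source import LocalSource

output_dir = 'train_pairs'  # hypothetical output directory
os.makedirs(output_dir, exist_ok=True)
for i, item in enumerate(LocalSource(dataset_dir)):
    item.image.save(os.path.join(output_dir, f'{i}.png'))
    with open(os.path.join(output_dir, f'{i}.txt'), 'w', encoding='utf-8') as f:
        f.write(', '.join(item.meta['tags']))
```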
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, black_bow, blush, headphones_around_neck, polka_dot_bow, black_skirt, bracelet, solo, long_sleeves, looking_at_viewer, open_mouth, blue_pantyhose, smile, very_long_hair, shirt, simple_background, light_brown_hair, low_twintails, pleated_skirt, white_jacket, wrist_scrunchie |
| 1 | 5 |  |  |  |  |  | 1girl, :d, open_mouth, solo, dress, hairclip, looking_at_viewer, microphone_stand, mini_hat, navel, necklace, necktie, star_(symbol), top_hat, wrist_cuffs |
| 2 | 7 |  |  |  |  |  | 1girl, solo, looking_at_viewer, small_breasts, blush, nipples, pussy, :d, nude, open_mouth, female_pubic_hair, navel |
| 3 | 7 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, very_long_hair, :d, bare_shoulders, open_mouth, polka_dot, white_dress, collarbone, day, hair_flower, sleeveless_dress, white_flower, blue_sky, breasts, cloud, dated, frills, holding_bouquet, light_brown_hair, mini_crown, outdoors, pearl_necklace, strapless_dress |
| 4 | 8 |  |  |  |  |  | 1girl, blush, cleavage, collarbone, looking_at_viewer, navel, solo, earrings, side_ponytail, white_bikini, bare_shoulders, medium_breasts, open_mouth, bracelet, small_breasts, very_long_hair, bikini_skirt, blue_scrunchie, brown_eyes, frilled_bikini, hair_scrunchie, outdoors, see-through, sitting, sky, water |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_bow | blush | headphones_around_neck | polka_dot_bow | black_skirt | bracelet | solo | long_sleeves | looking_at_viewer | open_mouth | blue_pantyhose | smile | very_long_hair | shirt | simple_background | light_brown_hair | low_twintails | pleated_skirt | white_jacket | wrist_scrunchie | :d | dress | hairclip | microphone_stand | mini_hat | navel | necklace | necktie | star_(symbol) | top_hat | wrist_cuffs | small_breasts | nipples | pussy | nude | female_pubic_hair | bare_shoulders | polka_dot | white_dress | collarbone | day | hair_flower | sleeveless_dress | white_flower | blue_sky | breasts | cloud | dated | frills | holding_bouquet | mini_crown | outdoors | pearl_necklace | strapless_dress | cleavage | earrings | side_ponytail | white_bikini | medium_breasts | bikini_skirt | blue_scrunchie | brown_eyes | frilled_bikini | hair_scrunchie | see-through | sitting | sky | water |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:--------|:-------------------------|:----------------|:--------------|:-----------|:-------|:---------------|:--------------------|:-------------|:-----------------|:--------|:-----------------|:--------|:--------------------|:-------------------|:----------------|:----------------|:---------------|:------------------|:-----|:--------|:-----------|:-------------------|:-----------|:--------|:-----------|:----------|:----------------|:----------|:--------------|:----------------|:----------|:--------|:-------|:--------------------|:-----------------|:------------|:--------------|:-------------|:------|:--------------|:-------------------|:---------------|:-----------|:----------|:--------|:--------|:---------|:------------------|:-------------|:-----------|:-----------------|:------------------|:-----------|:-----------|:----------------|:---------------|:-----------------|:---------------|:-----------------|:-------------|:-----------------|:-----------------|:--------------|:----------|:------|:--------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | | | | | | X | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | | | | | X | | X | X | | | | | | | | | | | X | | | | | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | | | | | X | | X | X | | | X | | | X | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | | | | X | X | | X | X | | | X | | | | | | | | | | | | | X | | | | | | X | | | | | X | | | X | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/handa_roco_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T01:36:45+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T23:34:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of handa\_roco/伴田路子 (THE iDOLM@STER: Million Live!)
===========================================================
This is the dataset of handa\_roco/伴田路子 (THE iDOLM@STER: Million Live!), containing 183 images and their tags.
The core tags of this character are 'long\_hair, bow, yellow\_eyes, hair\_bow, bangs, twintails, green\_eyes, grey\_hair, parted\_bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
fc3e103e98508acdae225897b1b3971673a1cc87
|
# Dataset Card for "76e05263"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/76e05263
|
[
"region:us"
] |
2023-09-17T01:45:19+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 197, "num_examples": 10}], "download_size": 1361, "dataset_size": 197}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T01:45:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "76e05263"
More Information needed
|
[
"# Dataset Card for \"76e05263\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"76e05263\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"76e05263\"\n\nMore Information needed"
] |
5b5143558b2c7bad32d5b4ad08da6b6af5f058a6
|
# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v10-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PeanutJar/LLaMa-2-PeanutButter_v10-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B",
"harness_winogrande_5",
split="train")
```
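The aggregated metrics live in the `results` configuration, whose `latest` split points at the most recent run; both names are taken from the config list in this card's metadata. A minimal sketch:

```python
# a minimal sketch: load the aggregated results instead of a single task's details
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B",
    "results",
    split="latest",
)
```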
## Latest results
These are the [latest results from run 2023-09-17T02:50:37.300317](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B/blob/main/results_2023-09-17T02-50-37.300317.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in its own configuration, under the "latest" split):
```python
{
"all": {
"em": 0.006082214765100671,
"em_stderr": 0.000796243239302896,
"f1": 0.059260696308725026,
"f1_stderr": 0.0014614581539411243,
"acc": 0.3839482806388088,
"acc_stderr": 0.009633147982899772
},
"harness|drop|3": {
"em": 0.006082214765100671,
"em_stderr": 0.000796243239302896,
"f1": 0.059260696308725026,
"f1_stderr": 0.0014614581539411243
},
"harness|gsm8k|5": {
"acc": 0.05913570887035633,
"acc_stderr": 0.006497266660428848
},
"harness|winogrande|5": {
"acc": 0.7087608524072613,
"acc_stderr": 0.012769029305370695
}
}
```
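Once parsed, the structure above is a plain nested dict, so individual metrics can be pulled out directly. The snippet below is a minimal sketch using a trimmed copy of the values shown:

```python
# a minimal sketch: indexing into a trimmed copy of the results shown above
results = {
    "all": {"acc": 0.3839482806388088},
    "harness|winogrande|5": {"acc": 0.7087608524072613},
}
print(results["harness|winogrande|5"]["acc"])  # 0.7087608524072613
```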
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B
|
[
"region:us"
] |
2023-09-17T01:50:41+00:00
|
{"pretty_name": "Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [PeanutJar/LLaMa-2-PeanutButter_v10-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v10-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T02:50:37.300317](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v10-7B/blob/main/results_2023-09-17T02-50-37.300317.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.006082214765100671,\n \"em_stderr\": 0.000796243239302896,\n \"f1\": 0.059260696308725026,\n \"f1_stderr\": 0.0014614581539411243,\n \"acc\": 0.3839482806388088,\n \"acc_stderr\": 0.009633147982899772\n },\n \"harness|drop|3\": {\n \"em\": 0.006082214765100671,\n \"em_stderr\": 0.000796243239302896,\n \"f1\": 0.059260696308725026,\n \"f1_stderr\": 0.0014614581539411243\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05913570887035633,\n \"acc_stderr\": 0.006497266660428848\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7087608524072613,\n \"acc_stderr\": 0.012769029305370695\n }\n}\n```", "repo_url": "https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v10-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T02_50_37.300317", "path": ["**/details_harness|drop|3_2023-09-17T02-50-37.300317.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T02-50-37.300317.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T02_50_37.300317", "path": ["**/details_harness|gsm8k|5_2023-09-17T02-50-37.300317.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T02-50-37.300317.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T02_50_37.300317", "path": ["**/details_harness|winogrande|5_2023-09-17T02-50-37.300317.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T02-50-37.300317.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T02_50_37.300317", "path": ["results_2023-09-17T02-50-37.300317.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T02-50-37.300317.parquet"]}]}]}
|
2023-09-17T01:50:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v10-7B on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
## Latest results
These are the latest results from run 2023-09-17T02:50:37.300317 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in its own configuration, under the "latest" split):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v10-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T02:50:37.300317(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v10-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T02:50:37.300317(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v10-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v10-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T02:50:37.300317(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
1c793742cd37c956ccc01ff34527d9e9acd50dc0
|
# Dataset of shinomiya_karen/篠宮可憐/시노미야카렌 (THE iDOLM@STER: Million Live!)
This is the dataset of shinomiya_karen/篠宮可憐/시노미야카렌 (THE iDOLM@STER: Million Live!), containing 71 images and their tags.
The core tags of this character are `long_hair, blonde_hair, blue_eyes, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 71 | 76.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinomiya_karen_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 71 | 48.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinomiya_karen_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 152 | 93.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinomiya_karen_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 71 | 69.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinomiya_karen_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 152 | 126.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinomiya_karen_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/shinomiya_karen_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
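A tag-frequency count is a cheap way to spot the dominant outfits before looking at the clusters below. This is a minimal sketch, assuming iterating `item.meta['tags']` yields tag names, matching the loop above.

```python
# a minimal sketch: count how often each tag appears across the dataset
from collections import Counter
from waifuc.source import LocalSource

tag_counts = Counter()
for item in LocalSource(dataset_dir):
    tag_counts.update(item.meta['tags'])
print(tag_counts.most_common(10))
```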
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, gloves, hair_ornament, :d, open_mouth, sleeveless_dress |
| 1 | 12 |  |  |  |  |  | 1boy, 1girl, blush, open_mouth, solo_focus, hetero, nipples, penis, mosaic_censoring, sweat, looking_at_viewer, sex, vaginal, collarbone, cum, pov, completely_nude, female_pubic_hair, pussy |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | blush | gloves | hair_ornament | :d | open_mouth | sleeveless_dress | 1boy | solo_focus | hetero | nipples | penis | mosaic_censoring | sweat | sex | vaginal | collarbone | cum | pov | completely_nude | female_pubic_hair | pussy |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------|:---------|:----------------|:-----|:-------------|:-------------------|:-------|:-------------|:---------|:----------|:--------|:-------------------|:--------|:------|:----------|:-------------|:------|:------|:------------------|:--------------------|:--------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | | X | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/shinomiya_karen_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T01:52:45+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:07:38+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of shinomiya\_karen/篠宮可憐/시노미야카렌 (THE iDOLM@STER: Million Live!)
=======================================================================
This is the dataset of shinomiya\_karen/篠宮可憐/시노미야카렌 (THE iDOLM@STER: Million Live!), containing 71 images and their tags.
The core tags of this character are 'long\_hair, blonde\_hair, blue\_eyes, breasts, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
c680cf2dba881d30850e712afb0b545b56873653
|
# Dataset Card for Evaluation run of lizhuang144/starcoder_mirror
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lizhuang144/starcoder_mirror
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [lizhuang144/starcoder_mirror](https://huggingface.co/lizhuang144/starcoder_mirror) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lizhuang144__starcoder_mirror",
"harness_winogrande_5",
split="train")
```
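If you are unsure which configurations exist before loading, the `datasets` library can enumerate them. A minimal sketch:

```python
# a minimal sketch: list the configurations of this details repository
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_lizhuang144__starcoder_mirror"
)
print(configs)  # expected: the harness_* configs plus "results", per the card metadata
```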
## Latest results
These are the [latest results from run 2023-09-17T02:55:35.893698](https://huggingface.co/datasets/open-llm-leaderboard/details_lizhuang144__starcoder_mirror/blob/main/results_2023-09-17T02-55-35.893698.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in its own configuration, under the "latest" split):
```python
{
"all": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558897,
"f1": 0.04898594798657743,
"f1_stderr": 0.001215831642948078,
"acc": 0.3137813978564757,
"acc_stderr": 0.010101677905009763
},
"harness|drop|3": {
"em": 0.0018875838926174498,
"em_stderr": 0.0004445109990558897,
"f1": 0.04898594798657743,
"f1_stderr": 0.001215831642948078
},
"harness|gsm8k|5": {
"acc": 0.05534495830174375,
"acc_stderr": 0.006298221796179574
},
"harness|winogrande|5": {
"acc": 0.5722178374112076,
"acc_stderr": 0.013905134013839953
}
}
```
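As noted in the summary above, each run is also stored under a split named with its timestamp, so a specific run can be targeted directly instead of the `latest` alias. A minimal sketch; the split name below is copied from this card's metadata:

```python
# a minimal sketch: load a specific run by its timestamped split name
from datasets import load_dataset

drop_details = load_dataset(
    "open-llm-leaderboard/details_lizhuang144__starcoder_mirror",
    "harness_drop_3",
    split="2023_09_17T02_55_35.893698",  # timestamped split from the card metadata
)
```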
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_lizhuang144__starcoder_mirror
|
[
"region:us"
] |
2023-09-17T01:55:40+00:00
|
{"pretty_name": "Evaluation run of lizhuang144/starcoder_mirror", "dataset_summary": "Dataset automatically created during the evaluation run of model [lizhuang144/starcoder_mirror](https://huggingface.co/lizhuang144/starcoder_mirror) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lizhuang144__starcoder_mirror\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T02:55:35.893698](https://huggingface.co/datasets/open-llm-leaderboard/details_lizhuang144__starcoder_mirror/blob/main/results_2023-09-17T02-55-35.893698.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558897,\n \"f1\": 0.04898594798657743,\n \"f1_stderr\": 0.001215831642948078,\n \"acc\": 0.3137813978564757,\n \"acc_stderr\": 0.010101677905009763\n },\n \"harness|drop|3\": {\n \"em\": 0.0018875838926174498,\n \"em_stderr\": 0.0004445109990558897,\n \"f1\": 0.04898594798657743,\n \"f1_stderr\": 0.001215831642948078\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05534495830174375,\n \"acc_stderr\": 0.006298221796179574\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5722178374112076,\n \"acc_stderr\": 0.013905134013839953\n }\n}\n```", "repo_url": "https://huggingface.co/lizhuang144/starcoder_mirror", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T02_55_35.893698", "path": ["**/details_harness|drop|3_2023-09-17T02-55-35.893698.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T02-55-35.893698.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T02_55_35.893698", "path": ["**/details_harness|gsm8k|5_2023-09-17T02-55-35.893698.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T02-55-35.893698.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T02_55_35.893698", "path": ["**/details_harness|winogrande|5_2023-09-17T02-55-35.893698.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T02-55-35.893698.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T02_55_35.893698", "path": ["results_2023-09-17T02-55-35.893698.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T02-55-35.893698.parquet"]}]}]}
|
2023-09-17T01:55:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of lizhuang144/starcoder_mirror
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model lizhuang144/starcoder_mirror on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
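For instance, to pull the Winogrande details (the other configs, "harness_drop_3" and "harness_gsm8k_5", load the same way):

```python
from datasets import load_dataset

# the "train" split always points at the latest results for this run
data = load_dataset(
    "open-llm-leaderboard/details_lizhuang144__starcoder_mirror",
    "harness_winogrande_5",
    split="train",
)
```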
## Latest results
These are the latest results from run 2023-09-17T02:55:35.893698 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
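The aggregated metrics stored for that run:

```python
{
    "all": {
        "em": 0.0018875838926174498,
        "em_stderr": 0.0004445109990558897,
        "f1": 0.04898594798657743,
        "f1_stderr": 0.001215831642948078,
        "acc": 0.3137813978564757,
        "acc_stderr": 0.010101677905009763
    },
    "harness|drop|3": {
        "em": 0.0018875838926174498,
        "em_stderr": 0.0004445109990558897,
        "f1": 0.04898594798657743,
        "f1_stderr": 0.001215831642948078
    },
    "harness|gsm8k|5": {
        "acc": 0.05534495830174375,
        "acc_stderr": 0.006298221796179574
    },
    "harness|winogrande|5": {
        "acc": 0.5722178374112076,
        "acc_stderr": 0.013905134013839953
    }
}
```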
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of lizhuang144/starcoder_mirror",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lizhuang144/starcoder_mirror on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T02:55:35.893698(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lizhuang144/starcoder_mirror",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model lizhuang144/starcoder_mirror on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T02:55:35.893698(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of lizhuang144/starcoder_mirror## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model lizhuang144/starcoder_mirror on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T02:55:35.893698(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4d9fdf4c0a65e8bea9d0f5830da6e72f5ccfe0a5
|
# Dataset Card for "aug_data_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/aug_data_random
|
[
"region:us"
] |
2023-09-17T01:56:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "sentence_annotation", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4801788, "num_examples": 16950}], "download_size": 1015835, "dataset_size": 4801788}}
|
2023-09-17T01:56:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "aug_data_random"
More Information needed
|
[
"# Dataset Card for \"aug_data_random\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"aug_data_random\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"aug_data_random\"\n\nMore Information needed"
] |
730a6a91508bcdb4869adc7eb27a87d0a38eeab5
|
# Dataset Card for "data_aug_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
linhqyy/data_aug_random
|
[
"region:us"
] |
2023-09-17T02:04:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3248023, "num_examples": 15255}, {"name": "test", "num_bytes": 362513, "num_examples": 1695}], "download_size": 758928, "dataset_size": 3610536}}
|
2023-09-17T02:04:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_aug_random"
More Information needed
|
[
"# Dataset Card for \"data_aug_random\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_aug_random\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_aug_random\"\n\nMore Information needed"
] |
78b3be7c7d73672555022ea137153d276e235747
|
# Dataset of matsuda_arisa/松田亜利沙 (THE iDOLM@STER: Million Live!)
This is the dataset of matsuda_arisa/松田亜利沙 (THE iDOLM@STER: Million Live!), containing 134 images and their tags.
The core tags of this character are `brown_hair, twintails, long_hair, brown_eyes, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 134 | 132.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuda_arisa_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 134 | 91.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuda_arisa_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 280 | 175.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuda_arisa_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 134 | 120.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuda_arisa_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 280 | 221.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/matsuda_arisa_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/matsuda_arisa_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, open_mouth, skirt, solo, :d, blush, boots, hair_bow, jewelry |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | open_mouth | skirt | solo | :d | blush | boots | hair_bow | jewelry |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------|:--------|:-------|:-----|:--------|:--------|:-----------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/matsuda_arisa_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T02:19:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:26:00+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of matsuda\_arisa/松田亜利沙 (THE iDOLM@STER: Million Live!)
===============================================================
This is the dataset of matsuda\_arisa/松田亜利沙 (THE iDOLM@STER: Million Live!), containing 134 images and their tags.
The core tags of this character are 'brown\_hair, twintails, long\_hair, brown\_eyes, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
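A minimal sketch of that flow (repo id and archive name as in the package table above):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/matsuda_arisa_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the extracted dataset with waifuc and iterate over tagged images
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```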
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
61a982cb5760131ecc3f1a125905a167f2cde3f0
|
# Dataset Card for Evaluation run of conceptofmind/LLongMA-2-13b-16k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/conceptofmind/LLongMA-2-13b-16k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [conceptofmind/LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-23T08:30:56.994435](https://huggingface.co/datasets/open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k/blob/main/results_2023-09-23T08-30-56.994435.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.002202181208053691,
"em_stderr": 0.0004800510816619487,
"f1": 0.05451552013422845,
"f1_stderr": 0.001321043219231616,
"acc": 0.39035575610663886,
"acc_stderr": 0.009395368385266412
},
"harness|drop|3": {
"em": 0.002202181208053691,
"em_stderr": 0.0004800510816619487,
"f1": 0.05451552013422845,
"f1_stderr": 0.001321043219231616
},
"harness|gsm8k|5": {
"acc": 0.05458680818802123,
"acc_stderr": 0.006257444037912527
},
"harness|winogrande|5": {
"acc": 0.7261247040252565,
"acc_stderr": 0.012533292732620296
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k
|
[
"region:us"
] |
2023-09-17T02:22:01+00:00
|
{"pretty_name": "Evaluation run of conceptofmind/LLongMA-2-13b-16k", "dataset_summary": "Dataset automatically created during the evaluation run of model [conceptofmind/LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-23T08:30:56.994435](https://huggingface.co/datasets/open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k/blob/main/results_2023-09-23T08-30-56.994435.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002202181208053691,\n \"em_stderr\": 0.0004800510816619487,\n \"f1\": 0.05451552013422845,\n \"f1_stderr\": 0.001321043219231616,\n \"acc\": 0.39035575610663886,\n \"acc_stderr\": 0.009395368385266412\n },\n \"harness|drop|3\": {\n \"em\": 0.002202181208053691,\n \"em_stderr\": 0.0004800510816619487,\n \"f1\": 0.05451552013422845,\n \"f1_stderr\": 0.001321043219231616\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05458680818802123,\n \"acc_stderr\": 0.006257444037912527\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7261247040252565,\n \"acc_stderr\": 0.012533292732620296\n }\n}\n```", "repo_url": "https://huggingface.co/conceptofmind/LLongMA-2-13b-16k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T03_21_57.837796", "path": ["**/details_harness|drop|3_2023-09-17T03-21-57.837796.parquet"]}, {"split": "2023_09_23T08_30_56.994435", "path": ["**/details_harness|drop|3_2023-09-23T08-30-56.994435.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-23T08-30-56.994435.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T03_21_57.837796", "path": ["**/details_harness|gsm8k|5_2023-09-17T03-21-57.837796.parquet"]}, {"split": "2023_09_23T08_30_56.994435", "path": ["**/details_harness|gsm8k|5_2023-09-23T08-30-56.994435.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-23T08-30-56.994435.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T03_21_57.837796", "path": ["**/details_harness|winogrande|5_2023-09-17T03-21-57.837796.parquet"]}, {"split": "2023_09_23T08_30_56.994435", "path": ["**/details_harness|winogrande|5_2023-09-23T08-30-56.994435.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-09-23T08-30-56.994435.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T03_21_57.837796", "path": ["results_2023-09-17T03-21-57.837796.parquet"]}, {"split": "2023_09_23T08_30_56.994435", "path": ["results_2023-09-23T08-30-56.994435.parquet"]}, {"split": "latest", "path": ["results_2023-09-23T08-30-56.994435.parquet"]}]}]}
|
2023-09-23T07:31:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of conceptofmind/LLongMA-2-13b-16k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model conceptofmind/LLongMA-2-13b-16k on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
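For instance (the other configs, "harness_drop_3" and "harness_gsm8k_5", load the same way):

```python
from datasets import load_dataset

# the "train" split always points at the latest results for this run
data = load_dataset(
    "open-llm-leaderboard/details_conceptofmind__LLongMA-2-13b-16k",
    "harness_winogrande_5",
    split="train",
)
```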
## Latest results
These are the latest results from run 2023-09-23T08:30:56.994435 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of conceptofmind/LLongMA-2-13b-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model conceptofmind/LLongMA-2-13b-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T08:30:56.994435(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of conceptofmind/LLongMA-2-13b-16k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model conceptofmind/LLongMA-2-13b-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-23T08:30:56.994435(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of conceptofmind/LLongMA-2-13b-16k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model conceptofmind/LLongMA-2-13b-16k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-23T08:30:56.994435(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d5724fd7299e6c32a51edd61fe29be5e42cda6a0
|
# Dataset of nikaidou_chizuru/二階堂千鶴 (THE iDOLM@STER: Million Live!)
This is the dataset of nikaidou_chizuru/二階堂千鶴 (THE iDOLM@STER: Million Live!), containing 134 images and their tags.
The core tags of this character are `long_hair, brown_hair, green_eyes, ponytail, breasts, very_long_hair, hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 134 | 144.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nikaidou_chizuru_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 134 | 101.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nikaidou_chizuru_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 282 | 190.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nikaidou_chizuru_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 134 | 132.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nikaidou_chizuru_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 282 | 246.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nikaidou_chizuru_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nikaidou_chizuru_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------|
| 0 | 56 |  |  |  |  |  | 1girl, looking_at_viewer, open_mouth, solo, blush, necklace, dress, :d, bracelet, earrings |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | open_mouth | solo | blush | necklace | dress | :d | bracelet | earrings |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------|:-------|:--------|:-----------|:--------|:-----|:-----------|:-----------|
| 0 | 56 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/nikaidou_chizuru_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T02:43:37+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:23:25+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nikaidou\_chizuru/二階堂千鶴 (THE iDOLM@STER: Million Live!)
==================================================================
This is the dataset of nikaidou\_chizuru/二階堂千鶴 (THE iDOLM@STER: Million Live!), containing 134 images and their tags.
The core tags of this character are 'long\_hair, brown\_hair, green\_eyes, ponytail, breasts, very\_long\_hair, hairband', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
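A minimal sketch of that flow (repo id and archive name as in the package table above):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/nikaidou_chizuru_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the extracted dataset with waifuc and iterate over tagged images
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```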
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7a112db605049d3a9d7002441c185bc33c7d2bb4
|
# Dataset Card for "lima_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HuggingFaceH4/lima_llama2
|
[
"region:us"
] |
2023-09-17T03:03:27+00:00
|
{"dataset_info": {"features": [{"name": "conversations", "sequence": "string"}, {"name": "source", "dtype": "string"}, {"name": "length", "dtype": "int64"}, {"name": "prompt_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "meta", "struct": [{"name": "category", "dtype": "string"}, {"name": "source", "dtype": "string"}]}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8806712, "num_examples": 1000}, {"name": "test", "num_bytes": 188848, "num_examples": 300}], "download_size": 5237615, "dataset_size": 8995560}}
|
2023-09-17T03:03:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lima_llama2"
More Information needed
|
[
"# Dataset Card for \"lima_llama2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lima_llama2\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lima_llama2\"\n\nMore Information needed"
] |
56b69cd1bf813bea0ead73c2532c0c67ad212160
|
# Dataset Card for "cifar10_512x512px"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
MaxReynolds/cifar10_512x512px
|
[
"region:us"
] |
2023-09-17T03:04:14+00:00
|
{"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "automobile", "2": "bird", "3": "cat", "4": "deer", "5": "dog", "6": "frog", "7": "horse", "8": "ship", "9": "truck"}}}}, {"name": "pixel_values", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6445891560.0, "num_examples": 50000}], "download_size": 6446258731, "dataset_size": 6445891560.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T03:10:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cifar10_512x512px"
More Information needed
|
[
"# Dataset Card for \"cifar10_512x512px\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cifar10_512x512px\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cifar10_512x512px\"\n\nMore Information needed"
] |
c5c370f5cca42437b323510af680be25ea6c1052
|
# Dataset of nonohara_akane/野々原茜/노노하라아카네 (THE iDOLM@STER: Million Live!)
This is the dataset of nonohara_akane/野々原茜/노노하라아카네 (THE iDOLM@STER: Million Live!), containing 127 images and their tags.
The core tags of this character are `short_hair, brown_hair, brown_eyes, bangs, red_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 127 | 126.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonohara_akane_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 127 | 82.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonohara_akane_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 268 | 159.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonohara_akane_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 127 | 114.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonohara_akane_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 268 | 212.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonohara_akane_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nonohara_akane_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | earrings, gloves, looking_at_viewer, smile, 1girl, open_mouth, blush, hat, breasts, solo |
| 1 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, blush, earrings, one_eye_closed, open_mouth, puffy_short_sleeves, ;d, hair_bow, heart, black_dress, choker, gloves, hair_ornament, holding, red_eyes, ribbon, thighhighs |
| 2 | 26 |  |  |  |  |  | 1girl, smile, solo, looking_at_viewer, open_mouth, blush, jacket, one_eye_closed, simple_background, white_background, long_sleeves, necklace, ;d, hoodie, white_dress |
| 3 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, dress, blush, earrings, hair_flower, puffy_sleeves, simple_background, white_background, bow, hairband, open_mouth, short_sleeves, smile, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | earrings | gloves | looking_at_viewer | smile | 1girl | open_mouth | blush | hat | breasts | solo | one_eye_closed | puffy_short_sleeves | ;d | hair_bow | heart | black_dress | choker | hair_ornament | holding | red_eyes | ribbon | thighhighs | jacket | simple_background | white_background | long_sleeves | necklace | hoodie | white_dress | dress | hair_flower | puffy_sleeves | bow | hairband | short_sleeves | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------|:---------|:--------------------|:--------|:--------|:-------------|:--------|:------|:----------|:-------|:-----------------|:----------------------|:-----|:-----------|:--------|:--------------|:---------|:----------------|:----------|:-----------|:---------|:-------------|:---------|:--------------------|:-------------------|:---------------|:-----------|:---------|:--------------|:--------|:--------------|:----------------|:------|:-----------|:----------------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 26 |  |  |  |  |  | | | X | X | X | X | X | | | X | X | | X | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | X | X | X | | | X | | | | | | | | | | | | | | X | X | | | | | X | X | X | X | X | X | X |
|
CyberHarem/nonohara_akane_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T03:08:47+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:17:21+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nonohara\_akane/野々原茜/노노하라아카네 (THE iDOLM@STER: Million Live!)
=======================================================================
This is the dataset of nonohara\_akane/野々原茜/노노하라아카네 (THE iDOLM@STER: Million Live!), containing 127 images and their tags.
The core tags of this character are 'short\_hair, brown\_hair, brown\_eyes, bangs, red\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
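A minimal sketch of that flow (repo id and archive name as in the package table above):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download the raw archive file of this dataset
zip_file = hf_hub_download(
    repo_id='CyberHarem/nonohara_akane_theidolmstermillionlive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the extracted dataset with waifuc and iterate over tagged images
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```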
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
4f4d92c47b426c1f5374bb84434cc3f1f647ee62
|
### SiberiaSoft/SiberianPersonaChat
A dataset of dialogues and QA.
This dataset was created for dialogue agents that imitate a persona.
Most of the dataset was generated with chatGPT and various prompts. It also includes a modified [TolokaPersonaChatRus](https://toloka.ai/datasets/?category=nlp).
## Persona description format
1. Я очень умная девушка, и хочу помочь своему другу полезными советами. (I am a very smart girl, and I want to help my friend with useful advice.)
2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику. (I am a guy, a consultant on various topics. I am very smart. I like helping the person I talk to.)
You can also substitute persona facts into the prompt: full name, age, etc.
1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи. (I am an 18-year-old girl. I study at a university. I live with my parents. I have a cat. I am looking for a boyfriend to start a family.)
Article on Habr: [link](https://habr.com/ru/articles/751580/)
## Percentage breakdown of the data:
| Task | Percentage |
|:-----------------------:|:---------------------:|
| qa | 32.088% |
| persons | 19.096% |
| man3 | 18.426% |
| woman | 17.433% |
| chitchat | 7.893% |
| man | 4.797% |
| reaction | 0.268% |
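A minimal loading sketch, assuming the standard `datasets` loader and the default config (repo id as in the citation below):

```python
from datasets import load_dataset

# assumption: the default configuration exposes the dialogue/QA records directly
dataset = load_dataset("SiberiaSoft/SiberianPersonaChat-2")
print(dataset)
```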
### Citation
```
@MISC{SiberiaSoft/SiberianPersonaChat2,
author = {Denis Petrov and Ivan Ramovich},
title = {Russian dataset for Chat models},
url = {https://huggingface.co/datasets/SiberiaSoft/SiberianPersonaChat-2},
year = 2023
}
```
|
SiberiaSoft/SiberianPersonaChat-2
|
[
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
] |
2023-09-17T03:17:09+00:00
|
{"language": ["ru"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation", "conversational"]}
|
2023-09-17T03:29:12+00:00
|
[] |
[
"ru"
] |
TAGS
#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us
|
### SiberiaSoft/SiberianPersonaChat
A dataset of dialogues and QA.
This dataset was created for dialogue agents that imitate a persona.
Most of the dataset was generated with chatGPT and various prompts. It also includes a modified TolokaPersonaChatRus.
Persona description format
--------------------------
1. Я очень умная девушка, и хочу помочь своему другу полезными советами. (I am a very smart girl, and I want to help my friend with useful advice.)
2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику. (I am a guy, a consultant on various topics. I am very smart. I like helping the person I talk to.)
You can also substitute persona facts into the prompt: full name, age, etc.
1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи. (I am an 18-year-old girl. I study at a university. I live with my parents. I have a cat. I am looking for a boyfriend to start a family.)
Article on Habr: link
Percentage breakdown of the data:
---------------------------------
|
[
"### SiberiaSoft/SiberianPersonaChat\n\n\nДатасет диалогов, QA\n\n\nДанный датасет был создан для диалоговых агентов с имитацией личности.\n\n\nБольшая часть датасета была сгенерирована с помощью chatGPT и различных промптов к ней. Кроме этого, в состав датасета входит измененный TolokaPersonaChatRus\n\n\nФормат описаний личности\n------------------------\n\n\n1. Я очень умная девушка, и хочу помочь своему другу полезными советами.\n2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику.\n\n\nТакже в промпт можно подставлять факты о личности: ФИО, возраст и т.д\n\n\n1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи.\n\n\nСтатья на habr: ссылка\n\n\nПроцентное данных:\n------------------"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us \n",
"### SiberiaSoft/SiberianPersonaChat\n\n\nДатасет диалогов, QA\n\n\nДанный датасет был создан для диалоговых агентов с имитацией личности.\n\n\nБольшая часть датасета была сгенерирована с помощью chatGPT и различных промптов к ней. Кроме этого, в состав датасета входит измененный TolokaPersonaChatRus\n\n\nФормат описаний личности\n------------------------\n\n\n1. Я очень умная девушка, и хочу помочь своему другу полезными советами.\n2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику.\n\n\nТакже в промпт можно подставлять факты о личности: ФИО, возраст и т.д\n\n\n1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи.\n\n\nСтатья на habr: ссылка\n\n\nПроцентное данных:\n------------------"
] |
[
62,
193
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us \n### SiberiaSoft/SiberianPersonaChat\n\n\nДатасет диалогов, QA\n\n\nДанный датасет был создан для диалоговых агентов с имитацией личности.\n\n\nБольшая часть датасета была сгенерирована с помощью chatGPT и различных промптов к ней. Кроме этого, в состав датасета входит измененный TolokaPersonaChatRus\n\n\nФормат описаний личности\n------------------------\n\n\n1. Я очень умная девушка, и хочу помочь своему другу полезными советами.\n2. Я парень, консультант по разным вопросам. Я очень умный. Люблю помогать собеседнику.\n\n\nТакже в промпт можно подставлять факты о личности: ФИО, возраст и т.д\n\n\n1. Я девушка 18 лет. Я учусь в институте. Живу с родителями. У меня есть кот. Я ищу парня для семьи.\n\n\nСтатья на habr: ссылка\n\n\nПроцентное данных:\n------------------"
] |
6f6a94c8229a7b647e8b6ba900b169aab82edaba
|
# Dataset of fukuda_noriko/福田のり子 (THE iDOLM@STER: Million Live!)
This is the dataset of fukuda_noriko/福田のり子 (THE iDOLM@STER: Million Live!), containing 156 images and their tags.
The core tags of this character are `short_hair, blonde_hair, brown_eyes, breasts, bangs, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 156 | 137.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fukuda_noriko_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 156 | 99.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fukuda_noriko_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 361 | 199.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fukuda_noriko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 156 | 128.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fukuda_noriko_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 361 | 245.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/fukuda_noriko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/fukuda_noriko_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
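If you want to mine tags yourself, here is a minimal sketch building on the loading code above. It assumes `dataset_dir` has already been populated as shown there; the exact shape of `item.meta['tags']` (mapping vs. plain list) is an assumption handled defensively.

```python
from collections import Counter

from waifuc.source import LocalSource

# Tally tag occurrences across the extracted dataset. `dataset_dir` is the
# directory filled by the loader above. item.meta['tags'] may be a mapping
# from tag name to score or a plain list, so both shapes are handled.
counter = Counter()
for item in LocalSource('dataset_dir'):
    tags = item.meta['tags']
    counter.update(tags.keys() if isinstance(tags, dict) else tags)

# Frequently co-occurring tags hint at recurring outfits.
for tag, count in counter.most_common(20):
    print(f'{tag}: {count}')
```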
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blush, nipples, open_mouth, sweat, 1boy, hetero, penis, pussy, solo_focus, female_pubic_hair, large_breasts, navel, sex, vaginal, medium_breasts, bar_censor, collarbone, completely_nude, cum, one_eye_closed, spread_legs |
| 1 | 10 |  |  |  |  |  | 1girl, white_background, blush, looking_at_viewer, simple_background, solo, blunt_bangs, collarbone, long_sleeves, star_earrings, upper_body, white_shirt, :d, black_jacket, leather_jacket, open_mouth |
| 2 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, cleavage, navel, blush, collarbone, simple_background, white_background, blue_bikini, blunt_bangs, large_breasts, medium_breasts, one_eye_closed, open_mouth, smile |
| 3 | 12 |  |  |  |  |  | 1girl, smile, looking_at_viewer, open_mouth, solo, one_eye_closed, skirt, ;d, blush, gloves, jewelry, navel, microphone, midriff |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | nipples | open_mouth | sweat | 1boy | hetero | penis | pussy | solo_focus | female_pubic_hair | large_breasts | navel | sex | vaginal | medium_breasts | bar_censor | collarbone | completely_nude | cum | one_eye_closed | spread_legs | white_background | looking_at_viewer | simple_background | solo | blunt_bangs | long_sleeves | star_earrings | upper_body | white_shirt | :d | black_jacket | leather_jacket | cleavage | blue_bikini | smile | skirt | ;d | gloves | jewelry | microphone | midriff |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:----------|:-------------|:--------|:-------|:---------|:--------|:--------|:-------------|:--------------------|:----------------|:--------|:------|:----------|:-----------------|:-------------|:-------------|:------------------|:------|:-----------------|:--------------|:-------------------|:--------------------|:--------------------|:-------|:--------------|:---------------|:----------------|:-------------|:--------------|:-----|:---------------|:-----------------|:-----------|:--------------|:--------|:--------|:-----|:---------|:----------|:-------------|:----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | | X | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | X | | | | | | | | X | X | | | X | | X | | | X | | X | X | X | X | X | | | | | | | | X | X | X | | | | | | |
| 3 | 12 |  |  |  |  |  | X | X | | X | | | | | | | | | X | | | | | | | | X | | | X | | X | | | | | | | | | | | X | X | X | X | X | X | X |
|
CyberHarem/fukuda_noriko_theidolmstermillionlive
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-17T03:44:36+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-16T02:17:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of fukuda\_noriko/福田のり子 (THE iDOLM@STER: Million Live!)
===============================================================
This is the dataset of fukuda\_noriko/福田のり子 (THE iDOLM@STER: Million Live!), containing 156 images and their tags.
The core tags of this character are 'short\_hair, blonde\_hair, brown\_eyes, breasts, bangs, earrings', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
faca1dc7ba0e01574c8679efa1c03b29064c5cd3
|
# Dataset Card for "newAIHumanGPT3.5V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
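Until the card is filled in, the configuration below already documents the schema (a `sentence` string and an integer `label`) plus train/validation splits; a minimal loading sketch:

```python
from datasets import load_dataset

# Sentence/label pairs with published train and validation splits
# (field names taken from the dataset configuration below).
ds = load_dataset("stealthwriter/newAIHumanGPT3.5V2")
print(ds["train"][0])            # {'sentence': ..., 'label': ...}
print(ds["validation"].num_rows)
```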
|
stealthwriter/newAIHumanGPT3.5V2
|
[
"region:us"
] |
2023-09-17T03:52:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4751074, "num_examples": 36000}, {"name": "validation", "num_bytes": 528788, "num_examples": 4000}], "download_size": 3478514, "dataset_size": 5279862}}
|
2023-09-17T12:12:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "newAIHumanGPT3.5V2"
More Information needed
|
[
"# Dataset Card for \"newAIHumanGPT3.5V2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"newAIHumanGPT3.5V2\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"newAIHumanGPT3.5V2\"\n\nMore Information needed"
] |
90989b18a51b59126de06511e4c4e48152706db9
|
# Dataset Card for Evaluation run of MBZUAI/lamini-cerebras-590m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MBZUAI/lamini-cerebras-590m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [MBZUAI/lamini-cerebras-590m](https://huggingface.co/MBZUAI/lamini-cerebras-590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m",
"harness_winogrande_5",
split="train")
```
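The aggregated metrics shown below live in the separate "results" configuration, whose "latest" split always resolves to the most recent run; for example (config and split names are taken from this repo's configuration list):

```python
from datasets import load_dataset

# Aggregated metrics for the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m",
    "results",
    split="latest",
)
print(results[0])
```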
## Latest results
These are the [latest results from run 2023-09-17T04:57:06.330423](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m/blob/main/results_2023-09-17T04-57-06.330423.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.007445469798657718,
"em_stderr": 0.0008803652515899861,
"f1": 0.07449664429530209,
"f1_stderr": 0.001794948262867366,
"acc": 0.24030037584379355,
"acc_stderr": 0.00755598242138111
},
"harness|drop|3": {
"em": 0.007445469798657718,
"em_stderr": 0.0008803652515899861,
"f1": 0.07449664429530209,
"f1_stderr": 0.001794948262867366
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492634
},
"harness|winogrande|5": {
"acc": 0.47908445146014206,
"acc_stderr": 0.014040185494212955
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m
|
[
"region:us"
] |
2023-09-17T03:57:09+00:00
|
{"pretty_name": "Evaluation run of MBZUAI/lamini-cerebras-590m", "dataset_summary": "Dataset automatically created during the evaluation run of model [MBZUAI/lamini-cerebras-590m](https://huggingface.co/MBZUAI/lamini-cerebras-590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T04:57:06.330423](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__lamini-cerebras-590m/blob/main/results_2023-09-17T04-57-06.330423.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.007445469798657718,\n \"em_stderr\": 0.0008803652515899861,\n \"f1\": 0.07449664429530209,\n \"f1_stderr\": 0.001794948262867366,\n \"acc\": 0.24030037584379355,\n \"acc_stderr\": 0.00755598242138111\n },\n \"harness|drop|3\": {\n \"em\": 0.007445469798657718,\n \"em_stderr\": 0.0008803652515899861,\n \"f1\": 0.07449664429530209,\n \"f1_stderr\": 0.001794948262867366\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \"acc_stderr\": 0.0010717793485492634\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.47908445146014206,\n \"acc_stderr\": 0.014040185494212955\n }\n}\n```", "repo_url": "https://huggingface.co/MBZUAI/lamini-cerebras-590m", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T04_57_06.330423", "path": ["**/details_harness|drop|3_2023-09-17T04-57-06.330423.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T04-57-06.330423.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T04_57_06.330423", "path": ["**/details_harness|gsm8k|5_2023-09-17T04-57-06.330423.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T04-57-06.330423.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T04_57_06.330423", "path": ["**/details_harness|winogrande|5_2023-09-17T04-57-06.330423.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T04-57-06.330423.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T04_57_06.330423", "path": ["results_2023-09-17T04-57-06.330423.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T04-57-06.330423.parquet"]}]}]}
|
2023-09-17T03:57:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of MBZUAI/lamini-cerebras-590m
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model MBZUAI/lamini-cerebras-590m on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-17T04:57:06.330423 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of MBZUAI/lamini-cerebras-590m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/lamini-cerebras-590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T04:57:06.330423(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of MBZUAI/lamini-cerebras-590m",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/lamini-cerebras-590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T04:57:06.330423(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
21,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of MBZUAI/lamini-cerebras-590m## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model MBZUAI/lamini-cerebras-590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T04:57:06.330423(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
98d960b064e8bf6f00189b20a1ad389832c8d565
|
# Questions Generated by LLM on 'How To Do Great Work'
Source essay: http://paulgraham.com/greatwork.html
Generation notebook: https://github.com/fastrepl/fastrepl/blob/main/exp/pg_essay_questions.ipynb
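A minimal loading sketch; the split and field names are taken from the configuration below, and the exact "processed"/"raw" distinction is documented in the notebook linked above:

```python
from datasets import load_dataset

# Each row pairs a generated question with the model that produced it.
questions = load_dataset("repllabs/questions_how_to_do_great_work", split="processed")
row = questions[0]
print(row["model"], "->", row["question"])
```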
|
repllabs/questions_how_to_do_great_work
|
[
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] |
2023-09-17T04:10:55+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "configs": [{"config_name": "default", "data_files": [{"split": "processed", "path": "data/processed-*"}, {"split": "raw", "path": "data/raw-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "model", "dtype": "string"}], "splits": [{"name": "processed", "num_bytes": 17391, "num_examples": 142}, {"name": "raw", "num_bytes": 55307, "num_examples": 450}], "download_size": 28702, "dataset_size": 72698}}
|
2023-09-17T04:43:44+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-n<1K #language-English #license-mit #region-us
|
# Questions Generated by LLM on 'How To Do Great Work'
URL
URL
|
[
"# Questions Generated by LLM on 'How To Do Great Work'\n\nURL\n\nURL"
] |
[
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-mit #region-us \n",
"# Questions Generated by LLM on 'How To Do Great Work'\n\nURL\n\nURL"
] |
[
37,
18
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-mit #region-us \n# Questions Generated by LLM on 'How To Do Great Work'\n\nURL\n\nURL"
] |
06e1b212322faac5b61afa568faad06f92933eda
|
# Dataset Card for "ucberkeley-dlab-measuring-hate-speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
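Pending a fuller card, the feature schema below is already usable. A minimal sketch; the reading that a higher `hate_speech_score` means more hateful, with roughly 0.5 as a threshold, follows the upstream UC Berkeley D-Lab card and should be treated as an assumption here:

```python
from datasets import load_dataset

# One row per (comment, annotator) pair; hate_speech_score is a continuous
# measure (assumed: higher = more hateful, ~0.5 as a rough threshold).
ds = load_dataset("okaris/ucberkeley-dlab-measuring-hate-speech", split="train")
flagged = ds.filter(lambda row: row["hate_speech_score"] > 0.5)
print(f"{len(flagged)} of {len(ds)} annotations score above 0.5")
```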
|
okaris/ucberkeley-dlab-measuring-hate-speech
|
[
"region:us"
] |
2023-09-17T04:35:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "comment_id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "int32"}, {"name": "platform", "dtype": "int8"}, {"name": "sentiment", "dtype": "float64"}, {"name": "respect", "dtype": "float64"}, {"name": "insult", "dtype": "float64"}, {"name": "humiliate", "dtype": "float64"}, {"name": "status", "dtype": "float64"}, {"name": "dehumanize", "dtype": "float64"}, {"name": "violence", "dtype": "float64"}, {"name": "genocide", "dtype": "float64"}, {"name": "attack_defend", "dtype": "float64"}, {"name": "hatespeech", "dtype": "float64"}, {"name": "hate_speech_score", "dtype": "float64"}, {"name": "text", "dtype": "string"}, {"name": "infitms", "dtype": "float64"}, {"name": "outfitms", "dtype": "float64"}, {"name": "annotator_severity", "dtype": "float64"}, {"name": "std_err", "dtype": "float64"}, {"name": "annotator_infitms", "dtype": "float64"}, {"name": "annotator_outfitms", "dtype": "float64"}, {"name": "hypothesis", "dtype": "float64"}, {"name": "target_race_asian", "dtype": "bool"}, {"name": "target_race_black", "dtype": "bool"}, {"name": "target_race_latinx", "dtype": "bool"}, {"name": "target_race_middle_eastern", "dtype": "bool"}, {"name": "target_race_native_american", "dtype": "bool"}, {"name": "target_race_pacific_islander", "dtype": "bool"}, {"name": "target_race_white", "dtype": "bool"}, {"name": "target_race_other", "dtype": "bool"}, {"name": "target_race", "dtype": "bool"}, {"name": "target_religion_atheist", "dtype": "bool"}, {"name": "target_religion_buddhist", "dtype": "bool"}, {"name": "target_religion_christian", "dtype": "bool"}, {"name": "target_religion_hindu", "dtype": "bool"}, {"name": "target_religion_jewish", "dtype": "bool"}, {"name": "target_religion_mormon", "dtype": "bool"}, {"name": "target_religion_muslim", "dtype": "bool"}, {"name": "target_religion_other", "dtype": "bool"}, {"name": "target_religion", "dtype": "bool"}, {"name": "target_origin_immigrant", "dtype": "bool"}, {"name": "target_origin_migrant_worker", "dtype": "bool"}, {"name": "target_origin_specific_country", "dtype": "bool"}, {"name": "target_origin_undocumented", "dtype": "bool"}, {"name": "target_origin_other", "dtype": "bool"}, {"name": "target_origin", "dtype": "bool"}, {"name": "target_gender_men", "dtype": "bool"}, {"name": "target_gender_non_binary", "dtype": "bool"}, {"name": "target_gender_transgender_men", "dtype": "bool"}, {"name": "target_gender_transgender_unspecified", "dtype": "bool"}, {"name": "target_gender_transgender_women", "dtype": "bool"}, {"name": "target_gender_women", "dtype": "bool"}, {"name": "target_gender_other", "dtype": "bool"}, {"name": "target_gender", "dtype": "bool"}, {"name": "target_sexuality_bisexual", "dtype": "bool"}, {"name": "target_sexuality_gay", "dtype": "bool"}, {"name": "target_sexuality_lesbian", "dtype": "bool"}, {"name": "target_sexuality_straight", "dtype": "bool"}, {"name": "target_sexuality_other", "dtype": "bool"}, {"name": "target_sexuality", "dtype": "bool"}, {"name": "target_age_children", "dtype": "bool"}, {"name": "target_age_teenagers", "dtype": "bool"}, {"name": "target_age_young_adults", "dtype": "bool"}, {"name": "target_age_middle_aged", "dtype": "bool"}, {"name": "target_age_seniors", "dtype": "bool"}, {"name": "target_age_other", "dtype": "bool"}, {"name": "target_age", "dtype": "bool"}, {"name": "target_disability_physical", "dtype": "bool"}, {"name": "target_disability_cognitive", 
"dtype": "bool"}, {"name": "target_disability_neurological", "dtype": "bool"}, {"name": "target_disability_visually_impaired", "dtype": "bool"}, {"name": "target_disability_hearing_impaired", "dtype": "bool"}, {"name": "target_disability_unspecific", "dtype": "bool"}, {"name": "target_disability_other", "dtype": "bool"}, {"name": "target_disability", "dtype": "bool"}, {"name": "annotator_gender", "dtype": "string"}, {"name": "annotator_trans", "dtype": "string"}, {"name": "annotator_educ", "dtype": "string"}, {"name": "annotator_income", "dtype": "string"}, {"name": "annotator_ideology", "dtype": "string"}, {"name": "annotator_gender_men", "dtype": "bool"}, {"name": "annotator_gender_women", "dtype": "bool"}, {"name": "annotator_gender_non_binary", "dtype": "bool"}, {"name": "annotator_gender_prefer_not_to_say", "dtype": "bool"}, {"name": "annotator_gender_self_describe", "dtype": "bool"}, {"name": "annotator_transgender", "dtype": "bool"}, {"name": "annotator_cisgender", "dtype": "bool"}, {"name": "annotator_transgender_prefer_not_to_say", "dtype": "bool"}, {"name": "annotator_education_some_high_school", "dtype": "bool"}, {"name": "annotator_education_high_school_grad", "dtype": "bool"}, {"name": "annotator_education_some_college", "dtype": "bool"}, {"name": "annotator_education_college_grad_aa", "dtype": "bool"}, {"name": "annotator_education_college_grad_ba", "dtype": "bool"}, {"name": "annotator_education_professional_degree", "dtype": "bool"}, {"name": "annotator_education_masters", "dtype": "bool"}, {"name": "annotator_education_phd", "dtype": "bool"}, {"name": "annotator_income_<10k", "dtype": "bool"}, {"name": "annotator_income_10k-50k", "dtype": "bool"}, {"name": "annotator_income_50k-100k", "dtype": "bool"}, {"name": "annotator_income_100k-200k", "dtype": "bool"}, {"name": "annotator_income_>200k", "dtype": "bool"}, {"name": "annotator_ideology_extremeley_conservative", "dtype": "bool"}, {"name": "annotator_ideology_conservative", "dtype": "bool"}, {"name": "annotator_ideology_slightly_conservative", "dtype": "bool"}, {"name": "annotator_ideology_neutral", "dtype": "bool"}, {"name": "annotator_ideology_slightly_liberal", "dtype": "bool"}, {"name": "annotator_ideology_liberal", "dtype": "bool"}, {"name": "annotator_ideology_extremeley_liberal", "dtype": "bool"}, {"name": "annotator_ideology_no_opinion", "dtype": "bool"}, {"name": "annotator_race_asian", "dtype": "bool"}, {"name": "annotator_race_black", "dtype": "bool"}, {"name": "annotator_race_latinx", "dtype": "bool"}, {"name": "annotator_race_middle_eastern", "dtype": "bool"}, {"name": "annotator_race_native_american", "dtype": "bool"}, {"name": "annotator_race_pacific_islander", "dtype": "bool"}, {"name": "annotator_race_white", "dtype": "bool"}, {"name": "annotator_race_other", "dtype": "bool"}, {"name": "annotator_age", "dtype": "float64"}, {"name": "annotator_religion_atheist", "dtype": "bool"}, {"name": "annotator_religion_buddhist", "dtype": "bool"}, {"name": "annotator_religion_christian", "dtype": "bool"}, {"name": "annotator_religion_hindu", "dtype": "bool"}, {"name": "annotator_religion_jewish", "dtype": "bool"}, {"name": "annotator_religion_mormon", "dtype": "bool"}, {"name": "annotator_religion_muslim", "dtype": "bool"}, {"name": "annotator_religion_nothing", "dtype": "bool"}, {"name": "annotator_religion_other", "dtype": "bool"}, {"name": "annotator_sexuality_bisexual", "dtype": "bool"}, {"name": "annotator_sexuality_gay", "dtype": "bool"}, {"name": "annotator_sexuality_straight", "dtype": "bool"}, {"name": 
"annotator_sexuality_other", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 52943809, "num_examples": 135556}], "download_size": 19680581, "dataset_size": 52943809}}
|
2023-09-17T04:36:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ucberkeley-dlab-measuring-hate-speech"
More Information needed
|
[
"# Dataset Card for \"ucberkeley-dlab-measuring-hate-speech\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ucberkeley-dlab-measuring-hate-speech\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ucberkeley-dlab-measuring-hate-speech\"\n\nMore Information needed"
] |
f9ff843dd8bdf691a513623628180d30832beeea
|
# Dataset Card for "arxiv_s2orc_cl_with_code"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
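A minimal sketch using the feature schema below, notably the per-paper `github_urls` list:

```python
from datasets import load_dataset

# CL papers from arXiv/S2ORC, each with any GitHub URLs found in the text.
ds = load_dataset("ArtifactAI/arxiv_s2orc_cl_with_code", split="train")
for row in ds.select(range(50)):
    if row["github_urls"]:
        print(row["arxivid"], row["github_urls"][0])
```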
|
ArtifactAI/arxiv_s2orc_cl_with_code
|
[
"region:us"
] |
2023-09-17T05:44:45+00:00
|
{"dataset_info": {"features": [{"name": "title", "sequence": "string"}, {"name": "author", "sequence": "string"}, {"name": "authoraffiliation", "sequence": "string"}, {"name": "venue", "sequence": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "pdfurls", "sequence": "string"}, {"name": "corpusid", "dtype": "int64"}, {"name": "arxivid", "dtype": "string"}, {"name": "pdfsha", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "github_urls", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 363103372, "num_examples": 6709}], "download_size": 173374265, "dataset_size": 363103372}}
|
2023-09-17T05:45:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "arxiv_s2orc_cl_with_code"
More Information needed
|
[
"# Dataset Card for \"arxiv_s2orc_cl_with_code\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"arxiv_s2orc_cl_with_code\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"arxiv_s2orc_cl_with_code\"\n\nMore Information needed"
] |
a29a3f251f52ceaa2fc8c0f44347cf6b35002566
|
# Dataset Card for Evaluation run of augtoma/qCammel-70v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/augtoma/qCammel-70v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [augtoma/qCammel-70v1](https://huggingface.co/augtoma/qCammel-70v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_augtoma__qCammel-70v1",
"harness_winogrande_5",
split="train")
```
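Per-sample details for the other tasks follow the same pattern; for example the DROP details (config names per this repo's configuration list), where the "latest" split tracks the newest run:

```python
from datasets import load_dataset

# Per-sample DROP predictions from the most recent run.
drop_details = load_dataset(
    "open-llm-leaderboard/details_augtoma__qCammel-70v1",
    "harness_drop_3",
    split="latest",
)
print(drop_details)
```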
## Latest results
These are the [latest results from run 2023-09-17T06:45:18.044644](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70v1/blob/main/results_2023-09-17T06-45-18.044644.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797,
"acc": 0.5700654570173166,
"acc_stderr": 0.011407494958111332
},
"harness|drop|3": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797
},
"harness|gsm8k|5": {
"acc": 0.2971948445792267,
"acc_stderr": 0.012588685966624186
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598479
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_augtoma__qCammel-70v1
|
[
"region:us"
] |
2023-09-17T05:45:22+00:00
|
{"pretty_name": "Evaluation run of augtoma/qCammel-70v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [augtoma/qCammel-70v1](https://huggingface.co/augtoma/qCammel-70v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_augtoma__qCammel-70v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T06:45:18.044644](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70v1/blob/main/results_2023-09-17T06-45-18.044644.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.033766778523489936,\n \"em_stderr\": 0.001849802869119515,\n \"f1\": 0.10340918624161041,\n \"f1_stderr\": 0.0022106009828094797,\n \"acc\": 0.5700654570173166,\n \"acc_stderr\": 0.011407494958111332\n },\n \"harness|drop|3\": {\n \"em\": 0.033766778523489936,\n \"em_stderr\": 0.001849802869119515,\n \"f1\": 0.10340918624161041,\n \"f1_stderr\": 0.0022106009828094797\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2971948445792267,\n \"acc_stderr\": 0.012588685966624186\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598479\n }\n}\n```", "repo_url": "https://huggingface.co/augtoma/qCammel-70v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T06_45_18.044644", "path": ["**/details_harness|drop|3_2023-09-17T06-45-18.044644.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T06-45-18.044644.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T06_45_18.044644", "path": ["**/details_harness|gsm8k|5_2023-09-17T06-45-18.044644.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T06-45-18.044644.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T06_45_18.044644", "path": ["**/details_harness|winogrande|5_2023-09-17T06-45-18.044644.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T06-45-18.044644.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T06_45_18.044644", "path": ["results_2023-09-17T06-45-18.044644.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T06-45-18.044644.parquet"]}]}]}
|
2023-09-17T05:45:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of augtoma/qCammel-70v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model augtoma/qCammel-70v1 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-17T06:45:18.044644 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of augtoma/qCammel-70v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model augtoma/qCammel-70v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T06:45:18.044644(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of augtoma/qCammel-70v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model augtoma/qCammel-70v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T06:45:18.044644(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
18,
31,
166,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of augtoma/qCammel-70v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model augtoma/qCammel-70v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T06:45:18.044644(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6fd7579b9d482a5a676cd7725c49efe30617edcc
|
# Dataset Card for "data_aug_no_rand"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
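The configuration below already documents the schema (a sentence, its intent, an entity list with `type`/`filler` fields, and a `labels` string) plus train/test splits; a minimal loading sketch:

```python
from datasets import load_dataset

# Intent/slot-filling style data with train and test splits.
ds = load_dataset("linhqyy/data_aug_no_rand")
sample = ds["train"][0]
print(sample["intent"], sample["entities"])
```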
|
linhqyy/data_aug_no_rand
|
[
"region:us"
] |
2023-09-17T05:52:25+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "intent", "dtype": "string"}, {"name": "entities", "list": [{"name": "type", "dtype": "string"}, {"name": "filler", "dtype": "string"}]}, {"name": "labels", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2378589, "num_examples": 11301}, {"name": "test", "num_bytes": 123440, "num_examples": 595}], "download_size": 575699, "dataset_size": 2502029}}
|
2023-09-18T02:54:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "data_aug_no_rand"
More Information needed
|
[
"# Dataset Card for \"data_aug_no_rand\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"data_aug_no_rand\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"data_aug_no_rand\"\n\nMore Information needed"
] |
6acb5e0b121446ff1aae38e7a694ae3137643f6e
|
# Dataset Card for "chinese_alpaca_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
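A minimal loading sketch; per the configuration below, `response` is a sequence (list of strings) and `source` names the originating corpus. The other `*_unfiltered` sets in this series share the same schema.

```python
from datasets import load_dataset

# instruction -> response(s) pairs; response is a list of strings.
ds = load_dataset("DialogueCharacter/chinese_alpaca_unfiltered", split="train")
row = ds[0]
print(row["instruction"])
# First response string (assumes at least one response per row).
print(row["response"][0], "| source:", row["source"])
```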
|
DialogueCharacter/chinese_alpaca_unfiltered
|
[
"region:us"
] |
2023-09-17T06:04:28+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32887574, "num_examples": 48818}], "download_size": 21230689, "dataset_size": 32887574}}
|
2023-09-17T06:04:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chinese_alpaca_unfiltered"
More Information needed
|
[
"# Dataset Card for \"chinese_alpaca_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chinese_alpaca_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chinese_alpaca_unfiltered\"\n\nMore Information needed"
] |
2984e094e12f956f2bcbc436b4a84acf0f0c30bb
|
# Dataset Card for "chinese_belle_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/chinese_belle_unfiltered
|
[
"region:us"
] |
2023-09-17T06:09:06+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4450892840, "num_examples": 3606402}], "download_size": 2752117439, "dataset_size": 4450892840}}
|
2023-09-17T06:12:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chinese_belle_unfiltered"
More Information needed
|
[
"# Dataset Card for \"chinese_belle_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chinese_belle_unfiltered\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chinese_belle_unfiltered\"\n\nMore Information needed"
] |
798c98cdac511bdfabc2ba974b72546874ae0d16
|
# Dataset Card for "chinese_firefly_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/chinese_firefly_unfiltered
|
[
"region:us"
] |
2023-09-17T06:15:41+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1127002621, "num_examples": 1649399}], "download_size": 793361458, "dataset_size": 1127002621}}
|
2023-09-17T06:16:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chinese_firefly_unfiltered"
More Information needed
|
[
"# Dataset Card for \"chinese_firefly_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chinese_firefly_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chinese_firefly_unfiltered\"\n\nMore Information needed"
] |
732419646c0348791fdbb60939728b5495a563b8
|
# Dataset Card for "chinese_instinwild_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/chinese_instinwild_unfiltered
|
[
"region:us"
] |
2023-09-17T06:16:52+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30197794, "num_examples": 51504}], "download_size": 17704859, "dataset_size": 30197794}}
|
2023-09-17T06:16:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chinese_instinwild_unfiltered"
More Information needed
|
[
"# Dataset Card for \"chinese_instinwild_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chinese_instinwild_unfiltered\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chinese_instinwild_unfiltered\"\n\nMore Information needed"
] |
9bd6116520e66994055c9f4e99dd227fa01dab9a
|
# Dataset Card for "chinese_moss_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/chinese_moss_unfiltered
|
[
"region:us"
] |
2023-09-17T06:17:17+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3264688861, "num_examples": 550415}], "download_size": 1534910020, "dataset_size": 3264688861}}
|
2023-09-17T06:19:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "chinese_moss_unfiltered"
More Information needed
|
[
"# Dataset Card for \"chinese_moss_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"chinese_moss_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"chinese_moss_unfiltered\"\n\nMore Information needed"
] |
9120a8d619ba49ffd5cb932c790ee125db3053d7
|
# Dataset Card for "english_moss_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_moss_unfiltered
|
[
"region:us"
] |
2023-09-17T06:20:45+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5150827100, "num_examples": 523390}], "download_size": 2313907146, "dataset_size": 5150827100}}
|
2023-09-17T06:22:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_moss_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_moss_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_moss_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_moss_unfiltered\"\n\nMore Information needed"
] |
7b53c34edbbe2f9dfd1911bedb19aaa2c39220d1
|
# Dataset Card for "english_soda_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_soda_unfiltered
|
[
"region:us"
] |
2023-09-17T06:23:52+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 913834615, "num_examples": 917016}], "download_size": 505828303, "dataset_size": 913834615}}
|
2023-09-17T06:24:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_soda_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_soda_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_soda_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_soda_unfiltered\"\n\nMore Information needed"
] |
7542aa4fcfe6063373a61fee56f737a479c4fa2e
|
# Dataset Card for Evaluation run of frank098/Wizard-Vicuna-13B-juniper
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/frank098/Wizard-Vicuna-13B-juniper
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [frank098/Wizard-Vicuna-13B-juniper](https://huggingface.co/frank098/Wizard-Vicuna-13B-juniper) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T07:24:44.144750](https://huggingface.co/datasets/open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper/blob/main/results_2023-09-17T07-24-44.144750.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0025167785234899327,
"em_stderr": 0.0005131152834514818,
"f1": 0.06578020134228216,
"f1_stderr": 0.0014299327364359015,
"acc": 0.39984819046262715,
"acc_stderr": 0.009838812433518467
},
"harness|drop|3": {
"em": 0.0025167785234899327,
"em_stderr": 0.0005131152834514818,
"f1": 0.06578020134228216,
"f1_stderr": 0.0014299327364359015
},
"harness|gsm8k|5": {
"acc": 0.07278241091736164,
"acc_stderr": 0.007155604761167476
},
"harness|winogrande|5": {
"acc": 0.7269139700078927,
"acc_stderr": 0.012522020105869456
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper
|
[
"region:us"
] |
2023-09-17T06:24:48+00:00
|
{"pretty_name": "Evaluation run of frank098/Wizard-Vicuna-13B-juniper", "dataset_summary": "Dataset automatically created during the evaluation run of model [frank098/Wizard-Vicuna-13B-juniper](https://huggingface.co/frank098/Wizard-Vicuna-13B-juniper) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T07:24:44.144750](https://huggingface.co/datasets/open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper/blob/main/results_2023-09-17T07-24-44.144750.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514818,\n \"f1\": 0.06578020134228216,\n \"f1_stderr\": 0.0014299327364359015,\n \"acc\": 0.39984819046262715,\n \"acc_stderr\": 0.009838812433518467\n },\n \"harness|drop|3\": {\n \"em\": 0.0025167785234899327,\n \"em_stderr\": 0.0005131152834514818,\n \"f1\": 0.06578020134228216,\n \"f1_stderr\": 0.0014299327364359015\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07278241091736164,\n \"acc_stderr\": 0.007155604761167476\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7269139700078927,\n \"acc_stderr\": 0.012522020105869456\n }\n}\n```", "repo_url": "https://huggingface.co/frank098/Wizard-Vicuna-13B-juniper", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T07_24_44.144750", "path": ["**/details_harness|drop|3_2023-09-17T07-24-44.144750.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T07-24-44.144750.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T07_24_44.144750", "path": ["**/details_harness|gsm8k|5_2023-09-17T07-24-44.144750.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T07-24-44.144750.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T07_24_44.144750", "path": ["**/details_harness|winogrande|5_2023-09-17T07-24-44.144750.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T07-24-44.144750.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T07_24_44.144750", "path": ["results_2023-09-17T07-24-44.144750.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T07-24-44.144750.parquet"]}]}]}
|
2023-09-17T06:24:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of frank098/Wizard-Vicuna-13B-juniper
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model frank098/Wizard-Vicuna-13B-juniper on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
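```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_frank098__Wizard-Vicuna-13B-juniper",
	"harness_winogrande_5",
	split="train")
```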
## Latest results
These are the latest results from run 2023-09-17T07:24:44.144750 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of frank098/Wizard-Vicuna-13B-juniper",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model frank098/Wizard-Vicuna-13B-juniper on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T07:24:44.144750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of frank098/Wizard-Vicuna-13B-juniper",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model frank098/Wizard-Vicuna-13B-juniper on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T07:24:44.144750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of frank098/Wizard-Vicuna-13B-juniper## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model frank098/Wizard-Vicuna-13B-juniper on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T07:24:44.144750(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0a1e0027ae0676a8dbf4e53668945a26354dae56
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Dinghan/Test
|
[
"task_categories:text-classification",
"license:apache-2.0",
"region:us"
] |
2023-09-17T06:25:06+00:00
|
{"license": "apache-2.0", "task_categories": ["text-classification"]}
|
2023-09-17T06:28:12+00:00
|
[] |
[] |
TAGS
#task_categories-text-classification #license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-classification #license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
25,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-classification #license-apache-2.0 #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
8099a175aecf3be9f0c137c68227529c9a85ffaf
|
# Dataset Card for "english_ultra_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_ultra_unfiltered
|
[
"region:us"
] |
2023-09-17T06:26:24+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4667203735, "num_examples": 711984}], "download_size": 2206564571, "dataset_size": 4667203735}}
|
2023-09-17T06:28:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_ultra_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_ultra_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_ultra_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_ultra_unfiltered\"\n\nMore Information needed"
] |
6317e3218421ddb9c53631ae31b6bf5bb36b7d66
|
# Dataset Card for "english_wizard_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_wizard_unfiltered
|
[
"region:us"
] |
2023-09-17T06:32:59+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "sequence": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278812623, "num_examples": 121930}], "download_size": 144938153, "dataset_size": 278812623}}
|
2023-09-17T06:33:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_wizard_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_wizard_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_wizard_unfiltered\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_wizard_unfiltered\"\n\nMore Information needed"
] |
e11af56c3a5c687f118e9f90a2b9d9e48a4c6a0c
|
# Dataset Card for "RCS_Image_Stratified_Train_Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Goorm-AI-04/RCS_Image_Stratified_Train_Test
|
[
"region:us"
] |
2023-09-17T06:33:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "rcs_image", "dtype": "image"}, {"name": "drone_type", "dtype": "string"}, {"name": "frequency", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12, "13": 13, "14": 14, "15": 15}}}}], "splits": [{"name": "train", "num_bytes": 24972888.0, "num_examples": 192}, {"name": "test", "num_bytes": 6243222.0, "num_examples": 48}], "download_size": 31218865, "dataset_size": 31216110.0}}
|
2023-09-17T09:46:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "RCS_Image_Stratified_Train_Test"
More Information needed
|
[
"# Dataset Card for \"RCS_Image_Stratified_Train_Test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"RCS_Image_Stratified_Train_Test\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"RCS_Image_Stratified_Train_Test\"\n\nMore Information needed"
] |
ffcc6114bbad8ece0ae84fa862694a5c4ffd720a
|
# Dataset Card for Evaluation run of Rardilit/Panther_v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Rardilit/Panther_v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Rardilit/Panther_v1](https://huggingface.co/Rardilit/Panther_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Rardilit__Panther_v1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T07:57:01.737780](https://huggingface.co/datasets/open-llm-leaderboard/details_Rardilit__Panther_v1/blob/main/results_2023-09-17T07-57-01.737780.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0,
"acc": 0.2478295185477506,
"acc_stderr": 0.007025978032038456
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.4956590370955012,
"acc_stderr": 0.014051956064076911
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Rardilit__Panther_v1
|
[
"region:us"
] |
2023-09-17T06:57:06+00:00
|
{"pretty_name": "Evaluation run of Rardilit/Panther_v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Rardilit/Panther_v1](https://huggingface.co/Rardilit/Panther_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Rardilit__Panther_v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T07:57:01.737780](https://huggingface.co/datasets/open-llm-leaderboard/details_Rardilit__Panther_v1/blob/main/results_2023-09-17T07-57-01.737780.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0,\n \"f1_stderr\": 0.0,\n \"acc\": 0.2478295185477506,\n \"acc_stderr\": 0.007025978032038456\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n \"f1\": 0.0,\n \"f1_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4956590370955012,\n \"acc_stderr\": 0.014051956064076911\n }\n}\n```", "repo_url": "https://huggingface.co/Rardilit/Panther_v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T07_57_01.737780", "path": ["**/details_harness|drop|3_2023-09-17T07-57-01.737780.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T07-57-01.737780.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T07_57_01.737780", "path": ["**/details_harness|gsm8k|5_2023-09-17T07-57-01.737780.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T07-57-01.737780.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T07_57_01.737780", "path": ["**/details_harness|winogrande|5_2023-09-17T07-57-01.737780.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T07-57-01.737780.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T07_57_01.737780", "path": ["results_2023-09-17T07-57-01.737780.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T07-57-01.737780.parquet"]}]}]}
|
2023-09-17T06:57:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Rardilit/Panther_v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Rardilit/Panther_v1 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
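```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Rardilit__Panther_v1",
	"harness_winogrande_5",
	split="train")
```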
## Latest results
These are the latest results from run 2023-09-17T07:57:01.737780 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Rardilit/Panther_v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Rardilit/Panther_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T07:57:01.737780(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Rardilit/Panther_v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Rardilit/Panther_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T07:57:01.737780(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
18,
31,
166,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Rardilit/Panther_v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Rardilit/Panther_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T07:57:01.737780(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
498171ce3f34d5387780bf7b6c3716ac5632acbe
|
# Dataset Card for "parasci_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
whateverweird17/parasci_data
|
[
"region:us"
] |
2023-09-17T07:46:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9393333, "num_examples": 38883}, {"name": "validation", "num_bytes": 1878763.2317722398, "num_examples": 7777}], "download_size": 5445189, "dataset_size": 11272096.23177224}}
|
2023-09-17T07:46:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "parasci_data"
More Information needed
|
[
"# Dataset Card for \"parasci_data\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"parasci_data\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"parasci_data\"\n\nMore Information needed"
] |
0631137ef851c60c5efc7d5916678ab6690b5700
|
# Dataset Card for "elon_tweets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hynky/elon_tweets
|
[
"region:us"
] |
2023-09-17T07:52:04+00:00
|
{"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2140590, "num_examples": 4821}], "download_size": 573097, "dataset_size": 2140590}}
|
2023-09-17T11:36:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "elon_tweets"
More Information needed
|
[
"# Dataset Card for \"elon_tweets\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"elon_tweets\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"elon_tweets\"\n\nMore Information needed"
] |
c6d50f0ab25b627f4344bad4205915b5eaea2909
|
# Dataset Card for "elon_tweets_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hynky/elon_tweets_instruct
|
[
"region:us"
] |
2023-09-17T07:55:25+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 827106, "num_examples": 4821}], "download_size": 558180, "dataset_size": 827106}}
|
2023-09-17T07:55:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "elon_tweets_instruct"
More Information needed
|
[
"# Dataset Card for \"elon_tweets_instruct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"elon_tweets_instruct\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"elon_tweets_instruct\"\n\nMore Information needed"
] |
d51cf841a54ed26868f756f343c0c3098e19cfd7
|
# Dataset Card for "Soldering-Data-pix2pix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ouvic215/Soldering-Data-pix2pix
|
[
"region:us"
] |
2023-09-17T08:35:34+00:00
|
{"dataset_info": {"features": [{"name": "mask_image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 108567615.5, "num_examples": 1338}], "download_size": 108539509, "dataset_size": 108567615.5}}
|
2023-09-19T10:20:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Soldering-Data-pix2pix"
More Information needed
|
[
"# Dataset Card for \"Soldering-Data-pix2pix\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Soldering-Data-pix2pix\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-pix2pix\"\n\nMore Information needed"
] |
fc891d6346e71cf2ee96fdc9d074f77d70e10cbe
|
# Dataset Card for Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k](https://huggingface.co/Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T17:53:55.496275](https://huggingface.co/datasets/open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k/blob/main/results_2023-09-17T17-53-55.496275.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0005243288590604027,
"em_stderr": 0.00023443780464835843,
"f1": 0.0018907298657718122,
"f1_stderr": 0.0003791471390866532,
"acc": 0.255327545382794,
"acc_stderr": 0.007024647268145198
},
"harness|drop|3": {
"em": 0.0005243288590604027,
"em_stderr": 0.00023443780464835843,
"f1": 0.0018907298657718122,
"f1_stderr": 0.0003791471390866532
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.510655090765588,
"acc_stderr": 0.014049294536290396
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k
|
[
"region:us"
] |
2023-09-17T08:46:10+00:00
|
{"pretty_name": "Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k", "dataset_summary": "Dataset automatically created during the evaluation run of model [Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k](https://huggingface.co/Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T17:53:55.496275](https://huggingface.co/datasets/open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k/blob/main/results_2023-09-17T17-53-55.496275.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0005243288590604027,\n \"em_stderr\": 0.00023443780464835843,\n \"f1\": 0.0018907298657718122,\n \"f1_stderr\": 0.0003791471390866532,\n \"acc\": 0.255327545382794,\n \"acc_stderr\": 0.007024647268145198\n },\n \"harness|drop|3\": {\n \"em\": 0.0005243288590604027,\n \"em_stderr\": 0.00023443780464835843,\n \"f1\": 0.0018907298657718122,\n \"f1_stderr\": 0.0003791471390866532\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.510655090765588,\n \"acc_stderr\": 0.014049294536290396\n }\n}\n```", "repo_url": "https://huggingface.co/Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T09_46_06.674365", "path": ["**/details_harness|drop|3_2023-09-17T09-46-06.674365.parquet"]}, {"split": "2023_09_17T17_53_55.496275", "path": ["**/details_harness|drop|3_2023-09-17T17-53-55.496275.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T17-53-55.496275.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T09_46_06.674365", "path": ["**/details_harness|gsm8k|5_2023-09-17T09-46-06.674365.parquet"]}, {"split": "2023_09_17T17_53_55.496275", "path": ["**/details_harness|gsm8k|5_2023-09-17T17-53-55.496275.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T17-53-55.496275.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T09_46_06.674365", "path": ["**/details_harness|winogrande|5_2023-09-17T09-46-06.674365.parquet"]}, {"split": "2023_09_17T17_53_55.496275", "path": 
["**/details_harness|winogrande|5_2023-09-17T17-53-55.496275.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T17-53-55.496275.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_17T09_46_06.674365", "path": ["results_2023-09-17T09-46-06.674365.parquet"]}, {"split": "2023_09_17T17_53_55.496275", "path": ["results_2023-09-17T17-53-55.496275.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T17-53-55.496275.parquet"]}]}]}
|
2023-09-17T16:54:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
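```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Panchovix__WizardLM-33B-V1.0-Uncensored-SuperHOT-8k",
	"harness_winogrande_5",
	split="train")
```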
## Latest results
These are the latest results from run 2023-09-17T17:53:55.496275 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T17:53:55.496275(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T17:53:55.496275(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T17:53:55.496275(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
789c70423287dec699e300643276f6d745eeffd0
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
MohamedTahir/text_to_jason
|
[
"task_categories:translation",
"size_categories:n<1K",
"region:us"
] |
2023-09-17T08:48:33+00:00
|
{"size_categories": ["n<1K"], "task_categories": ["translation"]}
|
2023-09-17T08:51:27+00:00
|
[] |
[] |
TAGS
#task_categories-translation #size_categories-n<1K #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-translation #size_categories-n<1K #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
25,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-translation #size_categories-n<1K #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
eee6840361263e775a48e756684642e8f623368e
|
# Dataset Card for "elon_conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hynky/elon_conversations
|
[
"region:us"
] |
2023-09-17T09:44:02+00:00
|
{"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2196270, "num_examples": 4821}], "download_size": 513850, "dataset_size": 2196270}}
|
2023-09-17T09:53:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "elon_conversations"
More Information needed
|
[
"# Dataset Card for \"elon_conversations\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"elon_conversations\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"elon_conversations\"\n\nMore Information needed"
] |
06f84068074bf4c472578380578f1bc5b7ce8493
|
# Dataset Card for "RCS_Image_Stratified_Train_Test_Resized_181x181"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Goorm-AI-04/RCS_Image_Stratified_Train_Test_Resized_181x181
|
[
"region:us"
] |
2023-09-17T09:46:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "rcs_image", "dtype": "image"}, {"name": "drone_type", "dtype": "string"}, {"name": "frequency", "dtype": "int64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12, "13": 13, "14": 14, "15": 15}}}}], "splits": [{"name": "train", "num_bytes": 25192248.0, "num_examples": 192}, {"name": "test", "num_bytes": 6298062.0, "num_examples": 48}], "download_size": 31492855, "dataset_size": 31490310.0}}
|
2023-09-17T09:46:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "RCS_Image_Stratified_Train_Test_Resized_181x181"
More Information needed
|
[
"# Dataset Card for \"RCS_Image_Stratified_Train_Test_Resized_181x181\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"RCS_Image_Stratified_Train_Test_Resized_181x181\"\n\nMore Information needed"
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"RCS_Image_Stratified_Train_Test_Resized_181x181\"\n\nMore Information needed"
] |
8b6f7d7833cbf063941f86b0ae7f510f3ed884ef
|
# Dataset Card for "synapsellm-v0-2-llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
WebraftAI/synapsellm-v0-2-llama2
|
[
"region:us"
] |
2023-09-17T09:48:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14577107, "num_examples": 18947}], "download_size": 8208827, "dataset_size": 14577107}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T09:48:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "synapsellm-v0-2-llama2"
More Information needed
|
[
"# Dataset Card for \"synapsellm-v0-2-llama2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"synapsellm-v0-2-llama2\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"synapsellm-v0-2-llama2\"\n\nMore Information needed"
] |
d13d47042704156d8002da87b109922bc0b69ad6
|
# Dataset Card for "context-aware-splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mhenrichsen/context-aware-splits
|
[
"region:us"
] |
2023-09-17T10:00:56+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35235257, "num_examples": 12255}], "download_size": 20336185, "dataset_size": 35235257}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T10:00:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "context-aware-splits"
More Information needed
|
[
"# Dataset Card for \"context-aware-splits\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"context-aware-splits\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"context-aware-splits\"\n\nMore Information needed"
] |
9bdb95576c0cc9d7e06fb3a1faf29e4c27923bb1
|
# 🇪🇺 🗳️ European Parliament Amendments : Legislature 7 & 8
Source: https://zenodo.org/record/3757714
|
Tadorne/amendments
|
[
"language:en",
"license:eupl-1.1",
"region:us"
] |
2023-09-17T10:09:16+00:00
|
{"language": ["en"], "license": "eupl-1.1", "pretty_name": "Amendments EP - Legislature 7 & 8", "configs": [{"config_name": "ALDE", "data_files": "alde.jsonl.gz"}, {"config_name": "ECR", "data_files": "ecr.jsonl.gz"}, {"config_name": "EFD", "data_files": "efd.jsonl.gz"}, {"config_name": "ENF", "data_files": "enf.jsonl.gz"}, {"config_name": "EPP", "data_files": "epp.jsonl.gz"}, {"config_name": "EUL", "data_files": "eul.jsonl.gz"}, {"config_name": "GEFA", "data_files": "gefa.jsonl.gz"}, {"config_name": "ID", "data_files": "id.jsonl.gz"}, {"config_name": "NA", "data_files": "na.jsonl.gz"}, {"config_name": "RENEW", "data_files": "renew.jsonl.gz"}, {"config_name": "SD", "data_files": "sd.jsonl.gz"}]}
|
2023-09-18T09:24:18+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-eupl-1.1 #region-us
|
# 🇪🇺 🗳️ European Parliament Amendments : Legislature 7 & 8
Source: URL
|
[
"# 🇪🇺 ️ European Parliament Amendments : Legislature 7 & 8 \n\nSource: URL"
] |
[
"TAGS\n#language-English #license-eupl-1.1 #region-us \n",
"# 🇪🇺 ️ European Parliament Amendments : Legislature 7 & 8 \n\nSource: URL"
] |
[
18,
20
] |
[
"passage: TAGS\n#language-English #license-eupl-1.1 #region-us \n# 🇪🇺 ️ European Parliament Amendments : Legislature 7 & 8 \n\nSource: URL"
] |
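
The metadata for `Tadorne/amendments` above declares one config per parliamentary group (ALDE, ECR, EFD, ENF, EPP, EUL, GEFA, ID, NA, RENEW, SD), each backed by a `.jsonl.gz` file. A minimal sketch of loading a single group, assuming the standard `datasets` named-config mechanism; "EPP" is just an example choice:

```python
from datasets import load_dataset

# Config names come from the YAML metadata above; each maps to one .jsonl.gz file.
epp = load_dataset("Tadorne/amendments", "EPP")
print(epp)               # splits and row counts
print(epp["train"][0])   # first amendment record (fields depend on the source dump)
```
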
d1e065b2d598287b52ca0421db70f6c55f4b7a74
|
# Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minh21/COVID-QA-unique-context-test-10-percent-validation-10-percent
|
[
"region:us"
] |
2023-09-17T10:11:59+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "is_impossible", "dtype": "bool"}, {"name": "document_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2050073, "num_examples": 1615}, {"name": "test", "num_bytes": 260386, "num_examples": 202}, {"name": "validation", "num_bytes": 261992, "num_examples": 202}], "download_size": 0, "dataset_size": 2572451}}
|
2023-09-17T17:29:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-unique-context-test-10-percent-validation-10-percent\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-unique-context-test-10-percent-validation-10-percent\"\n\nMore Information needed"
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-unique-context-test-10-percent-validation-10-percent\"\n\nMore Information needed"
] |
ad29f48bc278f648b0d4c73af56d07c380357b26
|
# Dataset Card for "formatted_data_sales1.jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pssubitha/formatted_data_sales1.jsonl
|
[
"region:us"
] |
2023-09-17T10:16:46+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45883, "num_examples": 120}], "download_size": 24605, "dataset_size": 45883}}
|
2023-09-17T10:16:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "formatted_data_sales1.jsonl"
More Information needed
|
[
"# Dataset Card for \"formatted_data_sales1.jsonl\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"formatted_data_sales1.jsonl\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"formatted_data_sales1.jsonl\"\n\nMore Information needed"
] |
1419b913496f0a99f3631b6aeb3399ea5b0207f2
|
# Dataset Card for "fox_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/fox_0_prompts
|
[
"region:us"
] |
2023-09-17T10:17:34+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3047, "num_examples": 14}], "download_size": 3558, "dataset_size": 3047}}
|
2023-09-17T11:38:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fox_0_prompts"
More Information needed
|
[
"# Dataset Card for \"fox_0_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fox_0_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fox_0_prompts\"\n\nMore Information needed"
] |
1a9f9c48266b656cf13126a1eb010401112b1a39
|
# Dataset Card for "fox_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/fox_1_prompts
|
[
"region:us"
] |
2023-09-17T10:17:36+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2953, "num_examples": 13}], "download_size": 3796, "dataset_size": 2953}}
|
2023-09-17T11:38:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fox_1_prompts"
More Information needed
|
[
"# Dataset Card for \"fox_1_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fox_1_prompts\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fox_1_prompts\"\n\nMore Information needed"
] |
e0d9d34efe7dedaadc96c852d20ac97b1fb904cf
|
# Dataset Card for "fox_2_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/fox_2_prompts
|
[
"region:us"
] |
2023-09-17T10:17:37+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3602, "num_examples": 12}], "download_size": 4842, "dataset_size": 3602}}
|
2023-09-17T11:39:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fox_2_prompts"
More Information needed
|
[
"# Dataset Card for \"fox_2_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fox_2_prompts\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fox_2_prompts\"\n\nMore Information needed"
] |
3bae2be3b3de6ba4b5b5b3e095155e60cac80217
|
# Dataset Card for "above_70yo_elderly_people_datasetV2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aviroes/above_70yo_elderly_people_datasetV2
|
[
"region:us"
] |
2023-09-17T10:17:47+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 196941356.0, "num_examples": 4215}, {"name": "test", "num_bytes": 8586642.0, "num_examples": 166}, {"name": "validation", "num_bytes": 4592657.0, "num_examples": 100}], "download_size": 192899099, "dataset_size": 210120655.0}}
|
2023-09-17T10:18:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "above_70yo_elderly_people_datasetV2"
More Information needed
|
[
"# Dataset Card for \"above_70yo_elderly_people_datasetV2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"above_70yo_elderly_people_datasetV2\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"above_70yo_elderly_people_datasetV2\"\n\nMore Information needed"
] |
0ca22c55040acced834edc599ff2480f6b407ab4
|
# Dataset Card for "some_chives_ones"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TristanPermentier/some_chives_ones
|
[
"region:us"
] |
2023-09-17T10:22:19+00:00
|
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 21662024.0, "num_examples": 29}], "download_size": 21484795, "dataset_size": 21662024.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T10:27:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "some_chives_ones"
More Information needed
|
[
"# Dataset Card for \"some_chives_ones\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"some_chives_ones\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"some_chives_ones\"\n\nMore Information needed"
] |
4ced0558a1f294283b8a3686750784de8182d0e6
|
# Dataset Card for "sql-create-context-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rams901/sql-create-context-10k
|
[
"region:us"
] |
2023-09-17T10:30:55+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4688784, "num_examples": 10000}], "download_size": 2097435, "dataset_size": 4688784}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-17T10:30:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "sql-create-context-10k"
More Information needed
|
[
"# Dataset Card for \"sql-create-context-10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"sql-create-context-10k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"sql-create-context-10k\"\n\nMore Information needed"
] |
700a109443aa8fa4179612f5569287996668db4e
|
# Dataset Card for "augmented_above_70yo_elderly_people_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
aviroes/augmented_above_70yo_elderly_people_dataset
|
[
"region:us"
] |
2023-09-17T11:14:23+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "input_length", "dtype": "float64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 8097082928.0, "num_examples": 8430}, {"name": "test", "num_bytes": 159444680, "num_examples": 166}, {"name": "validation", "num_bytes": 96050136, "num_examples": 100}], "download_size": 1755695943, "dataset_size": 8352577744.0}}
|
2023-09-17T11:19:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "augmented_above_70yo_elderly_people_dataset"
More Information needed
|
[
"# Dataset Card for \"augmented_above_70yo_elderly_people_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"augmented_above_70yo_elderly_people_dataset\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"augmented_above_70yo_elderly_people_dataset\"\n\nMore Information needed"
] |
61e7133935a70fbb396f02204381333127e3d7cd
|
# Dataset Card for "CtoCollege_all_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincenttttt/CtoCollege_all_ForFineTune
|
[
"region:us"
] |
2023-09-17T11:33:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1546267, "num_examples": 3673}], "download_size": 300520, "dataset_size": 1546267}}
|
2023-09-17T11:33:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CtoCollege_all_ForFineTune"
More Information needed
|
[
"# Dataset Card for \"CtoCollege_all_ForFineTune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CtoCollege_all_ForFineTune\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CtoCollege_all_ForFineTune\"\n\nMore Information needed"
] |
a6e033397424c049a192cb1fcfd641c25471d98f
|
# Dataset Card for "CtoDepartment_all_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vincenttttt/CtoDepartment_all_ForFineTune
|
[
"region:us"
] |
2023-09-17T11:36:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1560937, "num_examples": 3673}], "download_size": 304590, "dataset_size": 1560937}}
|
2023-09-17T11:36:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CtoDepartment_all_ForFineTune"
More Information needed
|
[
"# Dataset Card for \"CtoDepartment_all_ForFineTune\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CtoDepartment_all_ForFineTune\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CtoDepartment_all_ForFineTune\"\n\nMore Information needed"
] |
01f0ff24a92f6d67be9181748cad5d179b4e8bc6
|
# Dataset Card for "english_preference_chatbot_arena_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_preference_chatbot_arena_unfiltered
|
[
"region:us"
] |
2023-09-17T11:46:06+00:00
|
{"dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53048541, "num_examples": 23294}], "download_size": 26870764, "dataset_size": 53048541}}
|
2023-09-17T11:46:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_preference_chatbot_arena_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_preference_chatbot_arena_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_preference_chatbot_arena_unfiltered\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_preference_chatbot_arena_unfiltered\"\n\nMore Information needed"
] |
20644cbcdac4f0cbc0b1e7e3c610db0989b9b03d
|
# Dataset Card for "english_preference_hh_helpful_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_preference_hh_helpful_unfiltered
|
[
"region:us"
] |
2023-09-17T11:47:09+00:00
|
{"dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 261122999, "num_examples": 124503}], "download_size": 147966858, "dataset_size": 261122999}}
|
2023-09-17T11:47:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_preference_hh_helpful_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_preference_hh_helpful_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_preference_hh_helpful_unfiltered\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_preference_hh_helpful_unfiltered\"\n\nMore Information needed"
] |
c60873407eba0b9678f93208dbaefeee3a434015
|
# Dataset Card for "english_preference_mt_bench_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_preference_mt_bench_unfiltered
|
[
"region:us"
] |
2023-09-17T11:47:50+00:00
|
{"dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19869968, "num_examples": 4375}], "download_size": 1369235, "dataset_size": 19869968}}
|
2023-09-17T11:47:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_preference_mt_bench_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_preference_mt_bench_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_preference_mt_bench_unfiltered\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_preference_mt_bench_unfiltered\"\n\nMore Information needed"
] |
ecf54cde80735962aa7705438e25ff64026f5ddd
|
# Dataset Card for "english_preference_stanfordnlp_SHP_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DialogueCharacter/english_preference_stanfordnlp_SHP_unfiltered
|
[
"region:us"
] |
2023-09-17T11:48:27+00:00
|
{"dataset_info": {"features": [{"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 315493419, "num_examples": 112568}], "download_size": 75641649, "dataset_size": 315493419}}
|
2023-09-17T11:48:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "english_preference_stanfordnlp_SHP_unfiltered"
More Information needed
|
[
"# Dataset Card for \"english_preference_stanfordnlp_SHP_unfiltered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"english_preference_stanfordnlp_SHP_unfiltered\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"english_preference_stanfordnlp_SHP_unfiltered\"\n\nMore Information needed"
] |
e1575e4ef2ee50017ac870da1689983674d55933
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
DoctorSlimm/mozart-api
|
[
"region:us"
] |
2023-09-17T11:57:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
|
2023-09-19T17:35:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
941df7a92d7b863fa3b1696da7742bcadccdad92
|
# Dataset Card for "duped-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
usvsnsp/duped-embeddings
|
[
"region:us"
] |
2023-09-17T12:01:23+00:00
|
{"dataset_info": {"features": [{"name": "sequence_id", "dtype": "int64"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 12057291504, "num_examples": 7788948}], "download_size": 16876467166, "dataset_size": 12057291504}}
|
2023-09-17T12:13:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "duped-embeddings"
More Information needed
|
[
"# Dataset Card for \"duped-embeddings\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"duped-embeddings\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"duped-embeddings\"\n\nMore Information needed"
] |
3707fce76b2cfcd557ae03e18a1ccbf7a4b0be2f
|
share to big brother (大哥)
|
fbw/share
|
[
"region:us"
] |
2023-09-17T12:09:54+00:00
|
{}
|
2023-09-17T12:11:07+00:00
|
[] |
[] |
TAGS
#region-us
|
share to big brother (大哥)
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
73c64306c945d095ee53b4bcded5b1bb643abf61
|
# Bangumi Image Base of Shirobako
This is the image base of bangumi Shirobako. We detected 52 characters and 3771 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 66 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 18 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 6 | [Download](2/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 3 | 12 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 511 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 205 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 22 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 27 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 42 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 66 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 10 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 90 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 115 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 21 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 29 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 57 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 29 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 54 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 24 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 17 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 16 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 16 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 18 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 764 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 112 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 126 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 18 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 49 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 20 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 164 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 17 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 41 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 68 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 116 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 100 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 20 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 23 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 33 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 132 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 33 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 7 | [Download](40/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 41 | 21 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 111 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 41 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 16 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 11 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 18 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 9 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 32 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 9 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 6 | [Download](50/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 183 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
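The Shirobako card above recommends a preprocessing pass to drop the roughly 1% of noisy samples before training. A minimal sketch of such a pass, assuming the per-character `N/dataset.zip` layout shown in the table; the minimum-size check is a placeholder heuristic, since the card does not specify a noise criterion:

```python
import zipfile
from pathlib import Path

from PIL import Image  # pip install pillow

def extract_and_screen(zip_path: str, out_dir: str, min_side: int = 64) -> int:
    """Extract one character archive and drop images failing a basic sanity check.

    min_side is a placeholder heuristic only; the card leaves the actual
    noise criterion (~1% of samples) to manual review.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    kept = 0
    for img_path in out.rglob("*"):
        if img_path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        with Image.open(img_path) as img:
            ok = min(img.size) >= min_side
        if ok:
            kept += 1
        else:
            img_path.unlink()  # discard implausibly small crops as likely noise
    return kept

# e.g. kept = extract_and_screen("4/dataset.zip", "shirobako/char_4")
```
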
BangumiBase/shirobako
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-17T12:18:00+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-30T11:10:33+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Shirobako
===============================
This is the image base of bangumi Shirobako. We detected 52 characters and 3771 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
b5db3ceb428a06b13041836ea73f5c4aba8d8c4e
|
# Dataset Card for "humanAIsentencesnewsmedium100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
stealthwriter/humanAIsentencesnewsmedium100k
|
[
"region:us"
] |
2023-09-17T12:19:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23908976, "num_examples": 180000}, {"name": "validation", "num_bytes": 2654251, "num_examples": 20000}], "download_size": 17496159, "dataset_size": 26563227}}
|
2023-09-17T12:19:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "humanAIsentencesnewsmedium100k"
More Information needed
|
[
"# Dataset Card for \"humanAIsentencesnewsmedium100k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"humanAIsentencesnewsmedium100k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"humanAIsentencesnewsmedium100k\"\n\nMore Information needed"
] |
b4ba6ec497b7a54fa0e515d0f6b49302c50f9c82
|
# Dataset Card for "magical_world_animals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/magical_world_animals
|
[
"region:us"
] |
2023-09-17T12:21:04+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 155286, "num_examples": 1000}], "download_size": 18574, "dataset_size": 155286}}
|
2023-09-17T12:21:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "magical_world_animals"
More Information needed
|
[
"# Dataset Card for \"magical_world_animals\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"magical_world_animals\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"magical_world_animals\"\n\nMore Information needed"
] |